Wednesday, 27 November 2019

Responsible AI: governing market failure

If society seeks or needs responsible development and use of AI technologies, how is this best achieved?

This month the Australian government published its analysis of public submissions on its April 2019 proposed 'Ethical AI Principles', and released a revised set of principles: here. 

In my April submission (in this repository), among other things, I made three points, which I summarise here because I believe they remain 'live':

1. A national conversation

The first point was about the processes, such as the public inquiry, by which such lists of principles (whatever their content) are arrived at and promoted. Neither this process nor that of the Australian Human Rights Commission is a substitute for a genuine, scaled national conversation, indeed a global one. As I submitted, that conversation is not about 'what should our ethical AI principles look like' but (if AI is truly as transformative as we think) about the more fundamental question 'how should we live [and what role do we want and not want for technology in that attempt at flourishing]'.

2. The missing governance piece

The second point was to ask how the listed principles are intended to take or be given effect, which is a question not of ‘principles for ethical AI’ but of ‘the governance of principles for ethical AI’. Every major government and tech company has produced, or is producing, such lists. What are the mechanisms by which, in various contexts, we think they are best given effect? Since they are 'ethical' principles, I hesitate to say 'how are they complied with' and 'what are the consequences of non-compliance', which leads to my third point.

3. Ethics vs law / regulation

The third point was to suggest that the real question (in seeking submissions) ought not to be whether the 8 listed principles in the Australian framework are the ‘right’ or best or most complete ethical principles. Some ethical AI frameworks have more (e.g. Future of Life's 23), some have fewer (e.g. the OECD's 5, or Google's 7). The prior question ought to be whether responsible AI development and use is best approached as a question of ethics rather than as a question of law and regulation.

I reflected on this third issue in a previous post (here): there is a very live law-and-regulation dimension to this question (as useful as ethics-based approaches are, and as complementary to law as they may be).

This month's revised approach notes:
  • "The framework may need to be supplemented with regulations, depending on the risks for different AI applications. New regulations should only be implemented if there are clear regulatory gaps and a failure of the market to address those gaps."

This is, on one view, a remarkable proposition, if not an outright abdication of governmental responsibility for promoting responsible AI. 

Unless I am mistaken, this posits that, in relation to AI -- which the Australian framework process explicitly describes as fast-evolving, profoundly transformative and pervasive:

(a) law and regulation is only a 'supplement' to ethics-based approaches; and
(b) the market [whatever that means!] should be left to address 'compliance' with ethical principles, and the people's elected law-making bodies should only have a role where gaps [whatever that means!] are 'clear'.

For one thing, by the time we diagnose that there has been a market failure to encourage or enforce responsible AI development and use, it will be rather too late to start asking law-makers to get out their legislative drafting pens and address 'gaps'.

Lawyers and law-makers can stand down: we are not needed here, or now. Australia, that sophisticated regulatory state, has decided that the market -- which of course has proven soooo socially responsible hitherto -- can regulate this issue just fine.

Jo 
