Wednesday, 22 May 2019

'Ethical AI', business, and human rights

How and where does a human rights approach fit into current conversations about 'ethical Artificial Intelligence'?

I'm preparing my submission, due 31 May, to the Australian government's discussion paper on ethical AI.

Naturally, as a 'business and human rights' scholar, I am curious, among other things, about the focus on ethical framings for these questions and issues, relative to legal and regulatory ones (including those framed by reference to human rights concepts and law).

We're currently experiencing a cascade of words as various governmental, inter-governmental, corporate and professional bodies produce ethical frameworks. The Australian discussion paper suggests 8 core principles (fairness, accountability, explainability, etc.); the recent European Commission one suggests 7; Google advances 7, Microsoft 6, and so on -- all unobjectionable but inherently ambiguous, context-contingent terms, values and concepts. [See here for one recent inventory -- an attempt to list all these lists of ethical AI principles ... ]

This cascade of normative frameworks is accompanied by a tilt towards governmental action: a regulatory consciousness on ethical AI has been late in coming, but is afoot (see here, for example: 'US to back international guidelines...'). Tech giants are calling for, rather than necessarily resisting, regulation.

The gist of my upcoming May submission is that this subject-matter is about more than ethics, in the sense that there is a law and regulation piece here (as useful as ethics-based approaches are, and as complementary to law as they can be).

Yet beyond our chagrin as lawyers at the belated recognition that our discipline matters here, there is something more. These issues may be 'bigger' than ethics, but they are also bigger than, and beyond, a conventional debate on law and governance. Certainly, human rights law is not necessarily an ideal vehicle for conducting and framing that debate.

What responsible innovation debates really involve is asking some fundamental questions about the future shape of human society. While necessary to that debate, law, and especially human rights law, is limited as a vernacular for having it.

In a seminar on May 8 I quoted Harari (2018), who rightly notes that we need a shared and coherent 'story' of what these technologies are for, and of how they do or do not advance a society of the sort that we want and recognise as 'good' and 'just':

".... We cannot continue this debate indefinitely … [v]ery soon someone will have to decide how to use this power [AI, etc] – based on some implicit or explicit story about the meaning of life … engineers are far less patient, and investors are the least patient of all. If you do not know what to do with the power [of these technologies, but also the power of how to govern them], market forces will not wait a thousand years for … an answer. The invisible hand of the market will force upon you its own blind reply..."

Jo

See previous posts on responsible innovation here.
