Wednesday 27 November 2019

Responsible AI: governing market failure

If society seeks or needs responsible development and use of AI technologies, how is this best achieved?

This month the Australian government published its analysis of public submissions on its April 2019 proposed 'Ethical AI Principles', together with a revised set of principles: here. 

In my April submission (in this repository) I put, among other things, three points, which I summarise here as I believe they remain 'live':

1. A national conversation

The first point was about the processes, such as this public enquiry, by which such lists of principles (whatever their content) are arrived at and promoted. Neither this process nor that of the Australian Human Rights Commission is a substitute for a genuine, scaled national conversation, indeed a global one. As I submitted, that conversation is not about 'what should our ethical AI principles look like' but (if AI is truly as transformative as we think) about the more fundamental question 'how should we live [and what role do we want and not want for technology in that attempt at flourishing]'.

2. The missing governance piece

The second point was to ask how the listed principles are intended to take or be given effect, which is a question not of ‘principles for ethical AI’ but of ‘the governance of principles for ethical AI’. Every major government and tech company has produced or is producing such lists. What are the mechanisms by which, in various contexts, we think they are best given effect? Since they are 'ethical' principles, I hesitate to say 'how are they complied with' and 'what are the consequences of non-compliance'. Which leads to my third point.

3. Ethics vs law / regulation

The third point was to suggest that the real question (in seeking submissions) ought not to be whether the 8 listed principles in the Australian framework are the ‘right’ or best or most complete ethical principles. Some ethical AI frameworks have more (e.g. the Future of Life Institute's 23), some have fewer (e.g. the OECD's 5, or Google's 7). The prior question ought to be whether responsible AI development and use is best approached as a question of ethics rather than as a question of law and regulation.

I reflected on this third issue in a previous post (here): there is a very live law and regulation aspect here (as useful as ethics-based approaches are, and complementary to law).

This month's revised approach notes:
  • "The framework may need to be supplemented with regulations, depending on the risks for different AI applications. New regulations should only be implemented if there are clear regulatory gaps and a failure of the market to address those gaps."

This is, on one view, a remarkable proposition, if not an outright abdication of governmental responsibility for promoting responsible AI. 

Unless I am mistaken, the proposition, in relation to AI -- which the Australian framework process explicitly states is so fast-evolving, so profoundly transformative, so pervasive -- is that:

(a) law and regulation is only a 'supplement' to ethics-based approaches; and
(b) the market [whatever that means!] should be left to address 'compliance' with ethical principles, and the people's elected law-making bodies should only have a role where gaps [whatever that means!] are 'clear'.

For one thing, by the time we diagnose that there has been a market failure to encourage or enforce responsible AI development and use, it will be rather too late to start asking law-makers to get out their legislative drafting pens and address 'gaps'.

Lawyers and law-makers can stand down: we are not needed here, or now. Australia, that sophisticated regulatory state, has decided that the market -- which of course has proven soooo socially responsible hitherto -- can regulate this issue just fine.

Jo 

Thursday 25 July 2019

Modern Slavery reporting laws: a study

One way in which the 'business and human rights' agenda is manifesting in national-level laws is through legislation -- most recently in Australia -- to require larger firms to report periodically on risks of 'modern slavery' within their operations and supply chains.

We have produced a report on how Australian firms appear to be preparing to report under the 2018 Modern Slavery Act.

Is the Act at risk of becoming a mere tick-box exercise, or will it help drive a more fundamental transformation of approaches to human rights risks in supply chains?

"... Despite mixed levels of awareness, a common refrain in interviews was that the reporting requirement was a ‘conversation starter’ (including, importantly, within firms) even if not a ‘conversation changer’, although for some it had achieved the latter..."

Here is a link to the report (with M. Azizul Islam and Justine Nolan).

Jo

Ps: for some recent blogs on reporting laws on Modern Slavery, see this post (and links within).

Wednesday 22 May 2019

'Ethical AI', business, and human rights

How and where does a human rights approach fit into current conversations about 'ethical Artificial Intelligence'?

I'm preparing my submission, due 31 May, to the Australian government's enquiry paper on ethical AI.

Naturally as a 'business and human rights' scholar I am among other things curious about the focus on ethical framings for these questions and issues, relative to legal and regulatory ones (including by reference to human rights concepts and law).

We're currently experiencing a cascade of words as various governmental, inter-governmental, corporate and professional bodies produce ethical frameworks. The Australian discussion paper suggests 8 core principles (fairness, accountability, explainability, etc); the recent European Commission one suggests 7 principles; Google advances 7, Microsoft 6, and so on -- all unobjectionable but inherently ambiguous, context-contingent terms / values / concepts. [See here for one recent inventory -- an attempt to list all these lists of ethical AI principles ... ]

This cascade of normative frameworks is accompanied by a tilt towards a greater focus on governmental action: a regulatory consciousness on ethical AI has been late coming, but is afoot (see here, for example: 'US to back international guidelines...'). Tech giants are calling for rather than necessarily resisting regulation.

The gist of my upcoming May submission is that this subject-matter is about more than ethics, in the sense that there is a law and regulation piece here (as useful as ethics-based approaches are, and complementary to law).

Yet in our chagrin as lawyers at the belated recognition that our discipline matters here, there is something more. These issues may be 'bigger' than ethics, but they are also bigger than and beyond just a conventional debate on law and governance. Certainly, human rights law is not necessarily an ideal vehicle for conducting and framing that debate.

What is involved in responsible innovation debates is really a set of fundamental questions about the future shape of human society. While necessary to this debate, law, and especially human rights law, is limited as a vernacular for having those debates.

In a seminar on May 8 I quoted Harari (2018), who rightly notes that we need a shared and coherent 'story' of what these technologies are for, and of how they do or do not advance a society of the sort that we want and recognise as 'good' and 'just':

".... We cannot continue this debate indefinitely … [v]ery soon someone will have to decide how to use this power [AI, etc] – based on some implicit or explicit story about the meaning of life … engineers are far less patient, and investors are the least patient of all. If you do not know what to do with the power [of these technologies, but also the power of how to govern them], market forces will not wait a thousand years for … an answer. The invisible hand of the market will force upon you its own blind reply..."

Jo

See previous posts on responsible innovation here.

Tuesday 2 April 2019

Modern slavery reporting: what it is/not

Some legislative schemes have unforeseen consequences on the upside. They achieve far more than their particular remit, and capture or catalyse a wider shift.

Others generate unreasonable expectations: laws can only do so much, even in developed regulatory states, and especially without the accompaniment of more profound and clear messages from markets and people about the kinds of behavioural and cultural changes they want companies to exhibit.

Australia's Modern Slavery Act came into effect this year. It requires larger firms to report annually on whether and what steps they are taking to manage these human rights risks within their operations and supply chains.

Last week the government released draft guidance for reporting entities (here).

In this context I attempt 3 propositions about what this statutory scheme may represent, and 3 about what it does not necessarily represent: what is it not?

'What it maybe is'

1. The Act is -- like its UK predecessor and whatever its shortcomings -- a landmark achievement in bringing to corporate boards across Australia a new awareness of the human rights risks sometimes associated with mainstream business and financial activity, and the extent of regulatory intent that exists around these.

2. The Australian Act can be seen as part of a wider pattern, at least in OECD countries, of statutory requirements to undertake human rights due diligence or at least report on such activities -- even if domestic and other regulatory manifestations of the UN Guiding Principles on Business and Human Rights are still very piecemeal.

3. Engagement with the Act by corporations (and their advisory firms) on this particular human rights risk may drive wider awareness and uptake of the fuller 'business and human rights' agenda, but will not necessarily do so.*

'What it perhaps is not'

1. The Act is one corporate reporting mechanism among many for big firms (and modern slavery is only one class of business & human rights issue): as the Act beds down there are no guarantees that this will remain a distinctive high-profile issue ...

2. An external reporting requirement (even one with board-level sign-off) is no guarantee that firms and funds will take and sustain effective internal procedural, operational and cultural changes relevant to preventing and remedying modern slavery risk.

3. Even comprehensive corporate compliance with the Act (and internalisation of its purpose) will not necessarily have a discernible or material effect on the prevalence of modern slavery in our region.

That last point is a reminder of the risk of such legislation becoming an Australian regulatory salve for our own consciences (here).

There are many other things one could add about what the Act is not. The Act is not a panacea. The Act is not mere pandering to corporations. The Act is not victim-focused or remedial in nature.

And so on. For one thing, advocates and academics in 'business and human rights' often talk of these corporate reporting models as new. They are not. They are only new to this field, which would benefit from more research grounded in lessons about what reporting requirements, and non-penal ones in particular, can and cannot do to drive progress on the underlying issues with which they deal.

Jo

* In 2018 research with Justine Nolan and M. Azizul Islam (publication forthcoming) we observe that many corporate officers and others see the Act as, at the very least, a 'conversation starter' within firms and more widely.

See here for a recent post on compliance risk under the Act, and here for a different take (modern slavery approached in verse...).

Thursday 14 March 2019

Business, human rights and responsible innovation

We are increasingly governed and influenced by algorithms and predictive analysis.

The use by governments and businesses of artificial intelligence / machine learning (AI/ML) platforms can impact on human rights in myriad ways.

We have moved from debating whether governments need to regulate AI's potential discriminatory (etc.) effects, to questions of how best to do so in a legitimate, effective and coherent way: enabling innovation while protecting fundamental values and interests.

The nexus of 'new tech' and 'human rights' is presented as an emerging issue. Yet the rate of change and the implications of AI (etc.) across so many aspects of life suggest that it is only a regulatory consciousness that is still 'emerging'. All else is well underway.

Yes, we are far from the shallows now (as Lady Gaga / Bradley Cooper sing in A Star is Born (2018)): we are well in the deep waters now of how best to regulate for responsible innovation. And those deep waters are fast-moving ones, far faster than most regulatory and legal systems have moved.

This post relates to my hasty and under-cooked submission last week to the Australian Human Rights Commission / WEF 'White Paper' on 'AI and Human Rights: Leadership and Governance', itself related to a wider consultation (2018, ongoing).

One point made in that submission was a reflection on big tech firms' approach to the regulatory question. (This post is confined to that reflection -- the responsible innovation regulatory agenda is a far bigger and more complex one.)

The Commission's reports detail how influential CEOs -- from Microsoft to Amazon to Facebook -- are all now calling for or conceding the need for governmental regulatory frameworks on ethical AI / social impact / human rights (and these are not all the same thing, as my submission notes!).

These CEOs thus recognise the shift to the 'how' question, and are partly behind that shift, calling for regulation. Salesforce's CEO said at Davos last year that the role of governments and regulators was to come in and "point to True North".

Now most commentators have welcomed this. Like the Commission, they add this CEO's call to the chorus ('at least they are not resisting regulation'; 'business is inviting government to lead and steer'; a good thing).

Yet is it only me who finds something hugely troubling about this statement?

It is this. Is big tech so lacking in moral substance that it needs government to point out 'True North' (a set of general principles to guide AI design and use)? 'True North' is by definition universal and fairly easy to establish. Non-discrimination, user privacy, access to review and reasons for adverse decisions. These were basic societal values last time I looked at western democracies. They do not require governmental steer or compass reading for business. Get on with it, already.

Governments must lead the responsible innovation agenda, not least because their own use of AI is a key issue. Yet on the Salesforce CEO's statement, if industry cannot arrive at these values of its own accord, we truly are far from the shallows. As Lady Gaga sings, how will we remember ourselves this way -- before AI made life unrecognisable? 

Jo

Ps -- see an earlier blog here on 'big data' and human rights, and this one from November last year putting some of these themes into a short poem... !?

Tuesday 5 February 2019

Corporate culture: capital vs social capital

Australia is this week absorbing the final report of the Royal Commission into 'misconduct in the banking, superannuation and financial services industry'.

What is at the heart of the disregard shown by retail banks and finance houses for regulation aimed at protecting consumers from the excesses of the profit motive?

As ANU's John Braithwaite has said, a core dilemma of regulation is "when to punish and when to persuade" (1992+).

Command and control-style punishment and sanctions are not the only way to regulate. There are many reasons for non-compliance, suggesting that regulators sometimes need to prioritise dialogue and engagement over knee-jerk, automatic punishment. There is a strong case to be made for regulatory designs and institutional approaches that privilege engagement, persuasion, education and capacity-building.

Braithwaite's 'responsive regulation' theory would have regulators hold punitive powers in reserve while making overtures to regulatees and seeing how they respond to non-punitive approaches; the regulator then adjusts its own approach. This will be perceived, the theory goes, as fairer and so more legitimate. Entities will internalise the regulatory goal, compliance will improve, and the regulator can let compliant entities essentially self-regulate, and indeed exceed what is required in pursuit of the social goal underlying the regulation.
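Braithwaite's enforcement pyramid can be pictured as a simple escalation ladder: start with the least coercive response, move up a rung only when the regulatee fails to respond, and climb back down when it does. A minimal sketch of that decision rule follows; the tier names and the one-rung-at-a-time rule are my own illustrative assumptions, not anything prescribed by the theory or by any Australian regulator.

```python
# A toy model of Braithwaite's 'enforcement pyramid' as an escalation ladder.
# Tier names and the one-rung-at-a-time rule are illustrative assumptions only.

ESCALATION_LADDER = [
    "education and persuasion",
    "warning letter",
    "infringement notice / civil penalty",
    "licence suspension",
    "licence revocation (the 'axe')",
]

def next_tier(current_tier: int, complied: bool) -> int:
    """Escalate one rung on non-compliance; de-escalate one rung when the entity responds."""
    if complied:
        return max(current_tier - 1, 0)
    return min(current_tier + 1, len(ESCALATION_LADDER) - 1)

# Example: a firm ignores two overtures, then responds.
tier = 0
for complied in (False, False, True):
    tier = next_tier(tier, complied)
    print(ESCALATION_LADDER[tier])
# warning letter
# infringement notice / civil penalty
# warning letter
```

The shape of the rule, not the particular rungs, is the point: the upper, punitive rungs must exist and be visibly usable, otherwise the lower, dialogic rungs lose their persuasive force.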

What is a lesson from the Royal Commission?

It is that this approach, as influential as it has been, needs to be revisited. Or at least the theory needs to be fully implemented if it is to work. Not surprising, that.

The lesson is that regulators -- even where they have these powers -- appear reluctant to use them, and so err on the side of 'engagement' where a demonstrative penalty sometimes seems more appropriate. The issue is whether the regulated entities are responding to signals to change. If they are not, a more intrusive approach is warranted from the regulator.

Standing back, the key word is in the first sentence above: motive.

Incentives matter: we can talk all we want about 'values not just value' and 'engendering a shift in corporate culture'. But when all is said and done, market actors respond to incentives, and clear, credible and consistent signals and actions from regulators about the consequences of non-compliance.

And those consequences sometimes need to be severe.

As Commissioner Hayne wrote, "misconduct will be deterred only if entities believe that misconduct will be detected, denounced and justly punished..." It is not deterred -- for such profitable entities -- by requiring those found to have done wrong to "do no more than pay compensation." It is certainly not deterred by the issue of infringement notices in the hope that the market or consumers will respond to those incidents by withdrawing or conditioning their custom or financing.
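Hayne's point can be put in crude expected-value terms: misconduct is deterred only when the likely cost of being caught exceeds the gain. The sketch below illustrates that arithmetic; every figure in it is invented purely for the example.

```python
# Back-of-the-envelope deterrence arithmetic (all figures are invented):
# misconduct pays unless probability_of_detection * sanction > gain.

gain_from_misconduct = 10_000_000   # profit earned from the misconduct
p_detection = 0.2                   # chance the regulator detects and acts

compensation_only = 10_000_000      # merely repay what was taken
punitive_penalty = 80_000_000       # a sanction that actually stings

def deterred(p: float, sanction: float, gain: float) -> bool:
    """True when the expected sanction outweighs the gain from misconduct."""
    return p * sanction > gain

print(deterred(p_detection, compensation_only, gain_from_misconduct))  # False
print(deterred(p_detection, punitive_penalty, gain_from_misconduct))   # True
```

On those made-up numbers, a compensation-only response leaves misconduct profitable in expectation, while a genuinely punitive sanction flips the calculation, which is the sense in which consequences sometimes need to be severe.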

Responsive regulation remains a highly appealing theory, if properly implemented. It is bound to fail -- as Braithwaite and his disciples have always said -- if only partially implemented: if all the cuddly dialogic bits are followed, but not the hard and punitive bits. Regulators can and should talk to their regulatees about how to improve compliance. But they are not mere consultants to business. They are regulators. Braithwaite would insist that the regulatee must know that the regulator can escalate things, where fair and appropriate and where there is no response to overtures to comply. They must know and see that the regulator can make life very difficult.

As Braithwaite once wrote, dialogue, engagement and capacity building must take place "in the shadow of the axe".

Australian regulators need to have the axe, even if they need to be smart and fair about when to keep it in the background and pursue a more engaged approach.

This is true from banking conduct in the retail sector, to emerging models on supply chain reporting in the context of modern slavery, on which see earlier posts on this blog.

Jo