Wednesday, 24 June 2020

Law and regulation in (and of) crisis

What lessons on the governance of corporate responsibility fall from states' varied COVID responses?

COVID has prompted various reflections on how law is used (and abused) during crises*.

This blog-site focuses on the regulation of responsible business conduct, but this post reflects on more general, higher-order questions about the nature of any regulatory undertaking. (I would like to think my 2015 book was doing the same!).

What strikes me most about the COVID-law-regulation nexus is not the patterns we can see in how powerful state and corporate actors 'never waste a crisis' to pursue all manner of agendas calculated to entrench, advance or indeed obscure that power. Many colleagues'* response to the COVID crisis has been, in effect, to plead at this time for adherence to legal frameworks, e.g. for global cooperation. This is perhaps a plea for law's 'regulatory relevance' (Findlay 2017), yet it is too often insufficiently couched in analysis of how law is used to regulate crisis -- selectively, or in service of non-inclusive agendas.

This brings me to what strikes me most about law and regulation with respect to COVID.

This is the huge diversity in the regulatory postures and responses of national governments to what is, after all, a pan-global phenomenon: a pandemic of a virus that is essentially the same everywhere. (The extent to which those responses rely on law-based rather than other forms of regulation is a separate issue.)

Haines has written (2019) on how and why regulation does / does not change in the face of crisis. (She happened, incidentally, to be writing on responses to a factory fire tragedy -- a 'regulation of responsible business' issue).

Her concept of 'regulatory character' relates to what strikes me most about regulation and COVID: legitimate and effective regulation (and related institutions) is typically not simply a matter of the right technical models, frameworks and standards. It is about underlying economic, social and political idiosyncrasies. These shape how regulation actually looks and works. Cultural context shapes regulatory design and response. Regulation is 'responsive' at least in that sense (although, as above, power dynamics shape regulation too, of course!).

Some states have regulated COVID social distancing fairly lightly (e.g. without deploying criminal penalties). Some of those governments have done so apparently confident that they can rely and draw upon something relatively intangible in the national 'character' -- voluntary compliance, cooperation, self-regulation, social cohesion and responsibility -- without needing sanctions and penalties.

If I am right, these societal characteristics provide what I might call a regulatory 'resource'. This means the regulator's toolbox (including in crisis) does not just comprise various models and approaches with various merits, trade-offs, etc. It also potentially comprises the repository of societal compliance (etc.) characteristics and inclinations. These must be decisive not only in whether any regulatory intervention gains traction or purchase, but also in how one designs the regulatory response (here, to crisis) in the first place.

Elsewhere (e.g. here) I have reflected -- in the context of regulating responsible business conduct -- that the existence and degree of a critical mass of ethically-minded consumers is a principal regulatory 'resource' for regulatory design. Indeed, without it, it may not matter how sophisticated the regulatory regime otherwise appears.

COVID suggests to me that I was potentially onto something. That's all! Scholars of responsible business and its regulation ought perhaps to pay more attention to regulatory 'character' and cultural context, including -- in strategy terms -- to better identify the nature and extent of the regulatory 'resource' that proposed governance models might seek to take advantage of.


* = see here (for example) some short essays by ANU Law colleagues on (international) law and the COVID crisis.

[This is the first post after a 6-month hiatus].

Wednesday, 27 November 2019

Responsible AI: governing market failure

If society seeks or needs responsible development and use of AI technologies, how is this best achieved?

This month the Australian government published its analysis of public submissions on its April 2019 proposed 'Ethical AI Principles', and published a revised set of principles: here. 

In my April submission (in this repository) among other things I put three points, which I summarise here as I believe they remain 'live':

1. A national conversation

The first point was about the processes, such as this public enquiry, of arriving at and promoting such lists of principles (whatever their content). Neither this process nor that of the Australian Human Rights Commission is a substitute for a genuine, scaled national conversation, indeed a global one. As I submitted, that conversation is not about 'what should our ethical AI principles look like' but (if AI is truly as transformative as we think) about the more fundamental question 'how should we live [and what role do we want and not want for technology in that attempt at flourishing]'.

2. The missing governance piece

The second point was to ask how the listed principles are intended to take or be given effect, which is a question not of ‘principles for ethical AI’ but of ‘the governance of principles for ethical AI’. Every major government and tech company has produced or is producing such lists. What are the mechanisms by which, in various contexts, we think they are best given effect? Since they are 'ethical' principles, I hesitate to say 'how are they complied with' and 'what are the consequences of non-compliance'. Which leads to my third point.

3. Ethics vs law / regulation

The third point was to suggest that the real question (in seeking submissions) ought not to be whether the 8 listed principles in the Australian framework are the ‘right’ or best or most complete ethical principles. Some ethical AI frameworks have more (e.g. Future of Life's 23), some have fewer (e.g. the OECD's 5, or Google's 7). The prior question ought to be whether responsible AI development and use is best approached as a question of ethics rather than as a question of law and regulation.

I reflected on this third issue in a previous post (here): there is a very live law and regulation aspect here (as useful as ethics-based approaches are, and complementary to law).

This month's revised approach notes:
  • "The framework may need to be supplemented with regulations, depending on the risks for different AI applications. New regulations should only be implemented if there are clear regulatory gaps and a failure of the market to address those gaps."

This is, on one view, a remarkable proposition, if not an outright abdication of governmental responsibility for promoting responsible AI. 

It is a proposition, unless I am mistaken, that in relation to AI -- which the Australian framework process explicitly states is so fast-evolving, so profoundly transformative, so pervasive -- posits that:

(a) law and regulation is only a 'supplement' to ethics-based approaches; and
(b) the market [whatever that means!] should be left to address 'compliance' with ethical principles, and the people's elected law-making bodies should only have a role where gaps [whatever that means!] are 'clear'.

For one thing, by the time we diagnose that there has been a market failure to encourage or enforce responsible AI development and use, it will be rather too late to start asking law-makers to get out their legislative drafting pens and address 'gaps'.

Lawyers and law-makers can stand down: we are not needed here, or now. Australia, that sophisticated regulatory state, has decided that the market -- which of course has proven soooo socially responsible hitherto -- can regulate this issue just fine.


Thursday, 25 July 2019

Modern Slavery reporting laws: a study

One way in which the 'business and human rights' agenda is manifesting in national-level laws is through legislation -- most recently in Australia -- to require larger firms to report periodically on risks of 'modern slavery' within their operations and supply chains.

We have produced a report on how Australian firms appear to be preparing to report under the 2018 Modern Slavery Act.

Is the Act at risk of becoming a mere tick-box exercise, or will it help drive a more fundamental transformation of approaches to human rights risks in supply chains?

"... Despite mixed levels of awareness, a common refrain in interviews was that the reporting requirement was a ‘conversation starter’ (including, importantly, within firms) even if not a ‘conversation changer’, although for some it had achieved the latter..."

Here is a link to the report (with M. Azizul Islam and Justine Nolan).


Ps: for some recent blogs on reporting laws on Modern Slavery, see this post (and links within).

Wednesday, 22 May 2019

'Ethical AI', business, and human rights

How and where does a human rights approach fit into current conversations about 'ethical Artificial Intelligence'?

I'm preparing my submission, due 31 May, to the Australian government's enquiry paper on ethical AI.

Naturally as a 'business and human rights' scholar I am among other things curious about the focus on ethical framings for these questions and issues, relative to legal and regulatory ones (including by reference to human rights concepts and law).

We're currently experiencing a cascade of words as various governmental, inter-governmental, corporate and professional bodies produce ethical frameworks. The Australian discussion paper suggests 8 core principles (fairness, accountability, explainability, etc); the recent European Commission one suggests 7 principles; Google advances 7, Microsoft 6, and so on -- all unobjectionable but inherently ambiguous, context-contingent terms / values / concepts. [See here for one recent inventory -- an attempt to list all these lists of ethical AI principles ... ]

This cascade of normative frameworks is accompanied by a tilt towards a greater focus on governmental action: a regulatory consciousness on ethical AI has been late coming, but is afoot (see here, for example: 'US to back international guidelines...'). Tech giants are calling for rather than necessarily resisting regulation.

The gist of my upcoming May submission is that this subject-matter is about more than ethics, in the sense that there is a law and regulation piece here (as useful as ethics-based approaches are, and complementary to law).

Yet beyond our chagrin as lawyers at the belated recognition that our discipline matters here, there is something more. These issues may be 'bigger' than ethics, but they are also bigger than, and go beyond, a conventional debate on law and governance. Certainly, human rights law is not necessarily an ideal vehicle for conducting and framing that debate.

Responsible innovation debates really involve asking some fundamental questions about the future shape of human society. While necessary to this debate, law and especially human rights law are limited as a vernacular for having it.

In a seminar on May 8 I quoted Harari (2018) who rightly notes that we need a shared and coherent 'story' of what these technologies are for, how they do or do not advance a society of the sort that we want and recognise as 'good' and 'just':

".... We cannot continue this debate indefinitely … [v]ery soon someone will have to decide how to use this power [AI, etc] – based on some implicit or explicit story about the meaning of life … engineers are far less patient, and investors are the least patient of all. If you do not know what to do with the power [of these technologies, but also the power of how to govern them], market forces will not wait a thousand years for … an answer. The invisible hand of the market will force upon you its own blind reply..."


See previous posts on responsible innovation here.

Tuesday, 2 April 2019

Modern slavery reporting: what it is/not

Some legislative schemes have unforeseen consequences on the upside. They achieve far more than their particular remit, and capture or catalyse a wider shift.

Others generate unreasonable expectations: laws can only do so much, even in developed regulatory states, and especially without the accompaniment of more profound and clear messages from markets and people about the kinds of behavioural and cultural changes they want companies to exhibit.

Australia's Modern Slavery Act came into effect this year. It requires larger firms to report annually on whether and what steps they are taking to manage these human rights risks within their operations and supply chains.

Last week the government released draft guidance for reporting entities (here).

In this context I attempt three propositions about what this statutory scheme may represent, and three things that it does not necessarily represent: what is it not?

'What it maybe is'

1. The Act is -- like its UK predecessor and whatever its shortcomings -- a landmark achievement in bringing to corporate boards across Australia a new awareness of the human rights risks sometimes associated with mainstream business and financial activity, and the extent of regulatory intent that exists around these.

2. The Australian Act can be seen as part of a wider pattern, at least in OECD countries, of statutory requirements to undertake human rights due diligence or at least report on such activities -- even if domestic and other regulatory manifestations of the UN Guiding Principles on Business and Human Rights are still very piecemeal.

3. Engagement with the Act by corporations (and their advisory firms) on this particular human rights risk may drive wider awareness and uptake of the fuller 'business and human rights' agenda, but will not necessarily do so.*

'What it perhaps is not'

1. The Act is one corporate reporting mechanism among many for big firms (and modern slavery is only one class of business & human rights issue): as the Act beds down there are no guarantees that this will remain a distinctive high-profile issue ...

2. An external reporting requirement (even one with board-level sign-off) is no guarantee that firms and funds will take and sustain effective internal procedural, operational and cultural changes relevant to preventing and remedying modern slavery risk.

3. Even full corporate compliance with the Act (and internalisation of its purpose) will not necessarily have a discernible or material effect on the prevalence of modern slavery in our region.

That last point is a reminder of the risk of such legislation becoming an Australian regulatory salve for our own consciences (here).

There are many other things one could add about what the Act is not. The Act is not a panacea. The Act is not mere pandering to corporations. The Act is not victim-focused or remedial in nature.

And so on. For one thing, advocates and academics in 'business and human rights' often talk of these corporate reporting models as new. They are not. They are only new to this field, which would benefit from more research drawing on lessons about what reporting requirements -- and non-penal ones in particular -- can and cannot do to drive progress on the underlying issues with which they deal.


* In 2018 research with Justine Nolan and M. Azizul Islam (publication forthcoming) we observe that many corporate officers and others see the Act as, at very least, a 'conversation starter' within firms and more widely.

See here for a recent post on compliance risk under the Act, and here for a different take (modern slavery approached in verse...).