Monday, 27 July 2020

Regulatory culture: punish or persuade?

How do we design 21st century regulatory schemes for responsible business? Regulatory culture must shift, not just corporate culture.

How do we design viable, principled but pragmatic regulatory systems that engage business in pursuit of policy goals, yet are legitimate and trusted by all societal stakeholders?

In particular, what mix of 'enforcement' and 'guidance' is appropriate and effective on the part of the regulator?

The prompt for this post is the interim report on the EPBC Act, Australia's principal federal legislative scheme for environmental protection.

I study 'business and human rights' (social impact), but this emerging field has not done enough to learn from the bitter experience of the conservation and environmental movements and the history of regulation there. (The social and the environmental are not, and ought not be, so easily distinguished.)

The EPBC Act review holds various lessons of interest for my field (e.g. on recent reporting schemes on 'modern slavery' in supply chains), from federal/state coordination to questions about the adequacy, quality and availability of reported data. But what stands out are the review's lessons about designing the enforcement aspects of regulatory schemes where corporate activity may impact on public wellbeing and public interests.

The review condemns federal regulators for settling into a regulatory 'culture' of not using available enforcement powers, and for their over-reliance on a 'collaborative approach to compliance and enforcement' that is 'too weak'.

Last year in a related post on the Royal Commission report into the banking sector I noted the same pattern:

"The lesson is that regulators -- even where they have these powers -- appear reluctant to use them, and so err on the side of 'engagement' where sometimes demonstrative penalty seems more appropriate..."

There are many merits (as I wrote in that 2019 piece) to a regulatory approach that is judicious about the use of enforcement powers, and that privileges cooperative approaches which guide and educate companies and harness their own resources in pursuit of the public policy goal. Moreover, the regulator's dilemma is always 'when to punish and when to persuade'.

But the credible threat of non-negligible punishment may be vital to any strategy of dialogue and engagement. Moreover, enforcement is itself a form of 'guidance'. The theorists who promoted dialogic and collaborative problem-solving engagement made clear that regulatory strategies to explain and foster compliance were defensible, but only where the regulated entities know the consequences of non-compliance or perfunctory compliance. A credible pattern of using punitive powers, and a reputation for fair but decisive use of enforcement powers, is in this theory inseparable from the other more 'cuddly' bits about cooperation. Australian regulators have embraced only the latter.

Parking inspectors and fines come to mind. I used to remind my eager 'business and human rights' students -- believers in regulatory capability -- that Oxford City Council employs more parking inspectors than there are staff at the UN headquarters office in New York coordinating the [voluntary] UN Global Compact with business (not an inspection or enforcement entity). The interim review of the EPBC Act shows that the total fines issued since 2010 for breaching environmental approvals amount to less than the annual traffic fines levied in a typical small local government area in Australia ...

From environmental impact to responsible banking to modern slavery in supply chains, public trust in the regulation of responsible business may require that 21st century regulatory models have some supposedly old-fashioned 'sticks', and use these to incentivise compliance and engagement. This doesn't require that EPBC-type regulators have the blunt 'revenue-raising' approach that parking inspectors do: there is more to regulation than this. 

Schemes like the EPBC Act have a wider purpose as part of efforts to shift behaviours towards socially responsible ones. But the judicious use of enforcement powers clearly has a place in such a scheme.


Here is the related post on regulatory culture.

Wednesday, 24 June 2020

Law and regulation in (and of) crisis

What lessons on the governance of corporate responsibility fall from states' varied COVID responses?

COVID has prompted various reflections on how law is used (and abused) during crises*.

This blog-site focuses on the regulation of responsible business conduct, but this post reflects on more general, higher-order questions about the nature of any regulatory undertaking. (I would like to think my 2015 book was doing the same!).

What strikes me most about the COVID-law-regulation nexus is not the familiar pattern of powerful state and corporate actors 'never wasting a crisis' to pursue all manner of agendas calculated to entrench, advance or indeed obscure that power. Many colleagues'* response to the COVID crisis is, in effect, to plead at this time for adherence to legal frameworks, e.g. for global cooperation. This is perhaps a plea for law's 'regulatory relevance' (Findlay 2017), yet it is too often insufficiently couched in analysis of how law is used to regulate crisis -- selectively, or in service of non-inclusive agendas.

This brings me to what strikes me most about law and regulation with respect to COVID.

This is the huge diversity in the regulatory postures or responses of national governments to what is, after all, a pan-global phenomenon: a pandemic of a virus that is essentially the same everywhere. (The extent to which those responses rely on law-based rather than other forms of regulation is a separate issue.)

Haines has written (2019) on how and why regulation does / does not change in the face of crisis. (She happened, incidentally, to be writing on responses to a factory fire tragedy -- a 'regulation of responsible business' issue).

Her concept of 'regulatory character' relates to what strikes me most about regulation and COVID: legitimate and effective regulation (and related institutions) is typically not simply about the right technical models, frameworks and standards. It is about underlying economic, social and political idiosyncrasies. These shape how regulation actually looks and works. Cultural context shapes regulatory design and response. Regulation is 'responsive' at least in that sense (although, as above, power dynamics shape regulation too, of course!).

Some states have regulated COVID social distancing fairly lightly (e.g. without deploying criminal penalties). Some of those governments have regulated lightly apparently confident that they can rely and draw upon something relatively intangible in the national 'character' -- voluntary compliance, cooperation, self-regulation, social cohesion and responsibility -- without needing sanctions and penalties.

If I am right, these societal characteristics provide what I might call a regulatory 'resource'. The regulator's toolbox (including in crisis) does not just comprise various models and approaches with various merits and trade-offs. It also potentially comprises the repository of societal compliance characteristics and inclinations. These must be decisive not only in whether any regulatory intervention gains traction, but also in how one designs the regulatory response (here, to crisis) in the first place.

Elsewhere (e.g. here) I have reflected -- in the context of regulating responsible business conduct -- that the existence and degree of a critical mass of ethically-minded consumers is a principal regulatory 'resource' for regulatory design. Indeed, without it, it may not matter how sophisticated the regulatory regime otherwise appears.

COVID suggests I was potentially onto something. That's all! Scholars of responsible business and its regulation ought perhaps pay more attention to regulatory 'character' and cultural context, including -- in strategy terms -- to better identify the nature and extent of the regulatory 'resource' that proposed governance models might seek to take advantage of.


* = see here (for example) some short essays by ANU Law colleagues on (international) law and the COVID crisis.

[This is the first post after a 6-month hiatus].

Wednesday, 27 November 2019

Responsible AI: governing market failure

If society seeks or needs responsible development and use of AI technologies, how is this best achieved?

This month the Australian government published its analysis of public submissions on its April 2019 proposed 'Ethical AI Principles', and published a revised set of principles: here. 

In my April submission (in this repository) I put, among other things, three points, which I summarise here as I believe they remain 'live':

1. A national conversation

The first point was about the processes, such as this public enquiry, of arriving at and promoting such lists of principles (whatever their content). Neither this process nor that of the Australian Human Rights Commission is a substitute for a genuine, scaled national conversation, indeed a global one. As I submitted, that conversation is not about 'what should our ethical AI principles look like' but (if AI is truly as transformative as we think) about the more fundamental question 'how should we live [and what role do we want, and not want, for technology in that attempt at flourishing]'.

2. The missing governance piece

The second point was to ask how the listed principles are intended to take or be given effect, which is a question not of ‘principles for ethical AI’ but of ‘the governance of principles for ethical AI’. Every major government and tech company has or is producing such lists. What are the mechanisms by which, in various contexts, we think they are best given effect? Since they are 'ethical' principles, I hesitate to say 'how are they complied with' and 'what are the consequences of non-compliance'. Which leads to my third point.

3. Ethics vs law / regulation

The third point was to suggest that the real question (in seeking submissions) ought not to be whether the 8 listed principles in the Australian framework are the ‘right’ or best or most complete ethical principles. Some ethical AI frameworks have more (e.g. Future of Life's 23), some have fewer (e.g. the OECD's 5, or Google's 7). The prior question ought to be whether responsible AI development and use is best approached as a question of ethics rather than as a question of law and regulation.

I reflected on this third issue in a previous post (here): there is a very live law and regulation aspect here (as useful as ethics-based approaches are, and complementary to law).

This month's revised approach notes:
  • "The framework may need to be supplemented with regulations, depending on the risks for different AI applications. New regulations should only be implemented if there are clear regulatory gaps and a failure of the market to address those gaps."

This is, on one view, a remarkable proposition, if not an outright abdication of governmental responsibility for promoting responsible AI. 

It is a proposition, unless I am mistaken, that in relation to AI -- which the Australian framework process explicitly states is so fast-evolving, so profoundly transformative, so pervasive -- posits that:

(a) law and regulation is only a 'supplement' to ethics-based approaches; and
(b) the market [whatever that means!] should be left to address 'compliance' with ethical principles, and the people's elected law-making bodies should only have a role where gaps [whatever that means!] are 'clear' .

For one thing, by the time we diagnose that there has been a market failure to encourage or enforce responsible AI development and use, it will be rather too late to start asking law-makers to get out their legislative drafting pens and address 'gaps'.

Lawyers and law-makers can stand down: we are not needed here, or now. Australia, that sophisticated regulatory state, has decided that the market -- which of course has proven soooo socially responsible hitherto -- can regulate this issue just fine.


Thursday, 25 July 2019

Modern Slavery reporting laws: a study

One way in which the 'business and human rights' agenda is manifesting in national-level laws is through legislation -- most recently in Australia -- to require larger firms to report periodically on risks of 'modern slavery' within their operations and supply chains.

We have produced a report on how Australian firms appear to be preparing to report under the 2018 Modern Slavery Act.

Is the Act at risk of becoming a mere tick-box exercise, or will it help drive a more fundamental transformation of approaches to human rights risks in supply chains?

"... Despite mixed levels of awareness, a common refrain in interviews was that the reporting requirement was a ‘conversation starter’ (including, importantly, within firms) even if not a ‘conversation changer’, although for some it had achieved the latter..."

Here is a link to the report (with M. Azizul Islam and Justine Nolan).


Ps: for some recent blogs on reporting laws on Modern Slavery, see this post (and links within).

Wednesday, 22 May 2019

'Ethical AI', business, and human rights

How and where does a human rights approach fit into current conversations about 'ethical Artificial Intelligence'?

I'm preparing my submission, due 31 May, to the Australian government's enquiry paper on ethical AI.

Naturally as a 'business and human rights' scholar I am among other things curious about the focus on ethical framings for these questions and issues, relative to legal and regulatory ones (including by reference to human rights concepts and law).

We're currently experiencing a cascade of words as various governmental, inter-governmental, corporate and professional bodies produce ethical frameworks. The Australian discussion paper suggests 8 core principles (fairness, accountability, explainability, etc); the recent European Commission one suggests 7 principles; Google advances 7, Microsoft 6, and so on -- all unobjectionable but inherently ambiguous, context-contingent terms / values / concepts. [See here for one recent inventory -- an attempt to list all these lists of ethical AI principles ... ]

This cascade of normative frameworks is accompanied by a tilt towards a greater focus on governmental action: a regulatory consciousness on ethical AI has been late coming, but is afoot (see here, for example: 'US to back international guidelines...'). Tech giants are calling for rather than necessarily resisting regulation.

The gist of my upcoming May submission is that this subject matter is about more than ethics, in the sense that there is a law and regulation piece here (as useful as ethics-based approaches are, and complementary to law).

Yet beyond our chagrin as lawyers at the belated recognition that our discipline matters here, there is something more. These issues may be 'bigger' than ethics, but they are also bigger than, and beyond, a conventional debate on law and governance. Certainly, human rights law is not necessarily an ideal vehicle for conducting and framing that debate.

Debates around responsible innovation really involve asking some fundamental questions about the future shape of human society. While necessary to this debate, law, and especially human rights law, is limited as a vernacular for having those debates.

In a seminar on May 8 I quoted Harari (2018) who rightly notes that we need a shared and coherent 'story' of what these technologies are for, how they do or do not advance a society of the sort that we want and recognise as 'good' and 'just':

".... We cannot continue this debate indefinitely … [v]ery soon someone will have to decide how to use this power [AI, etc] – based on some implicit or explicit story about the meaning of life … engineers are far less patient, and investors are the least patient of all. If you do not know what to do with the power [of these technologies, but also the power of how to govern them], market forces will not wait a thousand years for … an answer. The invisible hand of the market will force upon you its own blind reply..."


See previous posts on responsible innovation here.