Thursday, 14 March 2019

Business, human rights and responsible innovation

We are increasingly governed and influenced by algorithms and predictive analytics.

Governments' and businesses' use of artificial intelligence / machine learning (AI/ML) platforms can affect human rights in myriad ways.

We have moved from debating whether governments need to regulate AI's potential discriminatory (etc.) effects, to questions of how best to do so in a legitimate, effective and coherent way: enabling innovation while protecting fundamental values and interests.

The nexus of 'new tech' and 'human rights' is presented as an emerging issue. Yet the rate of change and the implications of AI (etc.) across so many aspects of life suggest that it is only a regulatory consciousness that is still 'emerging'. All else is well underway.

Yes, we are far from the shallows now (as Lady Gaga / Bradley Cooper sing in A Star is Born (2018)): we are well into the deep waters of how best to regulate for responsible innovation. And those deep waters move fast, far faster than most regulatory and legal systems have.

This post relates to my hasty and under-cooked submission last week responding to the Australian Human Rights Commission / WEF 'White Paper' on 'AI and Human Rights: Leadership and Governance', itself part of a wider consultation (2018, ongoing).

One point made in that submission was a reflection on big tech firms' approach to the regulatory question. (This post is confined to that reflection -- the responsible innovation regulatory agenda is a far bigger and more complex one.)

The Commission's reports detail how influential CEOs -- from Microsoft to Amazon to Facebook -- are all now calling for or conceding the need for governmental regulatory frameworks on ethical AI / social impact / human rights (and these are not all the same thing, as my submission notes!).

These CEOs thus recognise the shift to the 'how' question -- indeed, by calling for regulation, they are partly behind that shift. Salesforce's CEO said at Davos last year that the role of governments and regulators was to come in and "point to True North".

Now most commentators have welcomed this. Like the Commission, they add this CEO's call to the chorus ('at least they are not resisting regulation'; 'business is inviting government to lead and steer' -- a good thing).

Yet am I the only one who finds something hugely troubling about this statement?

It is this. Is big tech so lacking in moral substance that it needs government to point out 'True North' (a set of general principles to guide AI design and use)? 'True North' is by definition universal and fairly easy to establish: non-discrimination, user privacy, access to review and to reasons for adverse decisions. These were basic societal values, last time I looked, in western democracies. They do not require a governmental steer or compass reading for business. Get on with it, already.

Governments must lead the responsible innovation agenda, not least because their own use of AI is a key issue. Yet if the Salesforce CEO's statement is right -- if industry cannot arrive at these values of its own accord -- we truly are far from the shallows. As Lady Gaga sings, will we remember ourselves this way -- as we were before AI made life unrecognisable?

Jo

PS -- see an earlier blog post here on 'big data' and human rights, and this one from November last year putting some of these themes into a short poem...!?
