Businesses are facing an Orwellian nightmare that few even recognise exists. As they become increasingly dependent on artificial intelligence, these same technology systems are more and more often run without any human intervention.
By way of example, the governance of blockchain technology is set to leave organisations distinctly vulnerable: algorithmic blips in the system are likely to go undetected because the only oversight comes from a counter-algorithm.
There are many lessons to be learnt from the long-running Post Office scandal.
Between 2000 and 2014 the Post Office prosecuted 736 sub-postmasters and postmistresses, an average of one a week, based on data compiled by a computer system called “Horizon”, a Fujitsu development first installed in 1999.
According to the resulting High Court ruling the Horizon system contained “bugs, errors and defects”. It was identified as posing a material risk to some 2,400 postmasters and postmistresses through its handling of in-branch accounts.
Those victims unable or unwilling to pay these supposed shortfalls were prosecuted for theft, false accounting and in some cases fraud, based on the IT evidence alone and without proof of criminal intent.
The Post Office management did not accept responsibility for any supposed system error, even though one sub-postmaster reported concerns as early as 2000, as did a number of IT professionals external to the Post Office.
Error in an AI system
It took 20 years, numerous failed investigations and a civil class action brought by 550 sub-postmasters and postmistresses for the innocence of the claimants to be acknowledged. The BBC described the convictions as “the UK’s most widespread miscarriage of justice”.
What if similar malfunctions occur in the future and once again bypass all human intervention? Error in an AI system may be far more damaging and much harder to detect.
When making a decision, AI relies on inbuilt algorithms and a massive amount of data which it processes to arrive at certain “conclusions”. For an AI system to perform effectively, it is critical to create an environment which is as unambiguous and predictable as possible. This, of course, assumes that the algorithms invoked are unbiased from inception.
In real life, such an environment is a major challenge to recreate artificially. Humans have a safety net that enables us to operate in uncertain and ever-changing environments, coping with ambiguity and vagueness as we go. Machines and software don’t.
Governance of AI learning
For AI to deliver impeccable service in the human world, systems need to learn to think like a human, which immediately raises the vexed question of who or what controls the AI learning system.
The interaction between Facebook chatbots left to their own devices shows how systems can quickly develop a language all of their own that is incomprehensible to humans. This poses little danger in and of itself.
However, debugging AI learning systems through a process of reverse-engineering is long and arduous. Imagine if the Post Office scandal had occurred without any humans in the loop. How many more contractors would have been prosecuted, and when, if ever, would such a miscarriage of justice have come to light?
The governance of AI learning systems is both a board and government concern. The reason is fundamental—when the system is challenged, there is an instinctual reaction to defend the system.
Independent stewardship of our institutions is needed now more than ever. The emergent AI world in which we increasingly operate requires an independent “custodian of information” with the function of undertaking investigations into suspected AI-led injustice. This should include the freedom to dig deeply into the context of each individual circumstance and to report findings in a fair and even-handed manner.
In an environment full of system errors overseen by AI, the governance of the future requires resolute humans who can pursue matters through independent oversight. Unfortunately, our present-day “compliance mentality” is likely to stifle the creation of such a body, whose work would probably be reduced to completing yet another arduous but legally binding checklist.
Nada Kakabadse is professor of policy, governance and ethics, and Andrew Kakabadse is professor of governance and leadership, at Henley Business School.