Back in April a report from a think tank at New York University made headline news around the world. The report complained of a startling lack of diversity in artificial intelligence (AI) companies and claimed some AI systems were replicating “patterns of racial and gender bias”.
This was just the latest in a string of stories that have cast doubt on the reliability of the ethics embedded in AI. This week the UK’s Institute of Business Ethics (IBE) went part way towards addressing the issue by publishing a report that emphatically identifies AI as a governance matter firmly within the remit of company boards, not one to be left solely in the hands of tech heads.
The report’s message was as simple as it was challenging: AI does not require specialist knowledge to be governed properly; indeed, boards are probably better off without a tech specialist lording it over them. AI, says the report’s author, raises ethical and philosophical problems. It should be considered as part of a boardroom risk discussion, and urgently.
Peter Montagnon, author of the report and associate director of the IBE, told a gathering this week in London: “Artificial intelligence is here to stay, it won’t go away and like it or not, we have to get to grips with it.”
His underlying message was that board directors need to confront the issue, despite the intimidating reputation of the topic. Many directors, worries Montagnon, have consigned the issue to the “too difficult drawer”, reluctant to admit a weakness.
That’s unfortunate. According to some figures, around 80% of global businesses will adopt AI because they believe it will give them a competitive edge. A survey by research house Gartner of 3,000 chief information officers (CIOs) revealed that AI is the most talked-about technology, bumping data and analytics into second place. Almost half of CIOs—46%—said their companies were planning to deploy it. Global business value derived from AI was expected to reach $1.2trn in 2018.
The AI accountability gap
Despite the hype, AI experienced a torrid year in 2018 that did much to make suspicion of the technology go viral. Amazon scrapped an AI recruitment tool because it was allegedly biased against women; Facebook became embroiled in a scandal over the collection of user data by Cambridge Analytica and Google withdrew from a Pentagon drone project and was forced to pledge never to use its AI for weapons development. The company even went so far as to launch an AI ethics board, though this was shut down in May after doubts were expressed about certain members.
In its review of last year, AI Now, a New York University think tank, summed up one of the key issues: the accountability gap is growing. Indeed, scandals demonstrate that the gap “between those who develop and profit from AI—and those most likely to suffer the consequences of its negative effects—is growing larger, not smaller.”
The think tank listed the reasons for this as a lack of government regulation; a highly concentrated AI sector; insufficient governance structures within companies; power asymmetries between corporates and the people they serve; and a “stark cultural divide between the engineering cohort responsible for technical research, and the vastly diverse populations where AI systems are deployed”.
That’s a daunting catalogue of problems to set against the hoped-for advantages; AI may come with the promise of extraordinary developments but is clearly not without ethical gremlins. That clash presents the best possible argument for boards to get to grips with governance.
Aware of the problems, the OECD (Organisation for Economic Cooperation and Development) has this year published its own set of AI Principles. The document attempts to tackle the dual nature of AI, recognising both the opportunities and the risks. It calls on governments to encourage innovation in “trustworthy AI” and offers a list of principles for governing the use and development of the technology. The body says the principles “set standards for AI that are practical and flexible enough to stand the test of time in a rapidly evolving field”.
A philosophical approach
In his report, Montagnon issues boards with a series of challenges:
- Are the benefits of AI being shared?
- Who is accountable?
- Do AI algorithms produce biased results?
- Does AI add value to customers or extract value from them?
- Does it treat employees and contractors fairly?
- How are cyber attacks to be managed?
- Can codes of ethics help?
- Is the board still in control?
Taken together, the questions challenge businesses to use the technology in a values-driven way, prompting Montagnon to write: “Perhaps contrary to intuitive expectations, the skills needed to address these challenges require less of a technical mastery of the inner workings of AI than a philosophical and ethical approach to resolving the issues thrown up.”
AI, Montagnon says, cannot function without a human element. Questions are “less about the technology itself than how it is applied”. So, the board’s decisions about AI “fit naturally into their general view of risk appetite, risk management and oversight”.
There is broad agreement that the technology places skills from the social sciences at a premium when it comes to confronting AI conundrums. Though shut down by management, Google’s ethics board—known as the Advanced Technology External Advisory Council—featured not only tech specialists but also a philosopher, a public policy expert and a former diplomat and international affairs expert. Facebook has reportedly supported the creation of an independent ethics centre at the Technical University of Munich.
The argument against appointing tech experts to boards is that they may not have the experience to manage the full gamut of board responsibilities. It could also encourage other board members to defer to them on tech issues, leaving AI questions underexplored. There are practical problems too, such as a shortage of available tech experts with the right level of experience, though this should ease over time.
This technology is not going away, and its use will continue to raise a host of big questions. Pessimists worry it could play a part in the extinction of human life; optimists look to a day when it liberates us from mundane jobs to dedicate ourselves to more worthy pursuits. The threats and opportunities mean boards have to be alert, learn the key questions and look for the right answers. If they fail, the consequences don’t bear thinking about.