Chief executives are putting themselves “at risk” unless they appoint specialists to advise on the ethical use of artificial intelligence (AI), according to the UK’s leading business ethics organisation.
The warning comes in the same week that Demis Hassabis, chief executive of Google DeepMind, said the risk from AI should be taken as seriously as the risk from climate change.
It also comes just days before the UK hosts the world’s first global AI Safety Summit at Bletchley Park, the site where codebreaker and mathematician Alan Turing built the Bombe machine, paving the way for the first computer.
But the focus here is on the corporate use of AI. The Institute of Business Ethics (IBE) has published guidance urging companies to review their use of AI, train staff on its ethical implications and communicate the need for sound AI ethics throughout their supply chains. They must also put key ethics advisers in place.
According to Ian Peters, director of the IBE, the use of AI must be a “shared responsibility”, and CEOs who fail to appoint an individual or committee to “ensure ethical practice” are putting themselves and their companies at risk. At stake are breaches of privacy rules and damage to corporate reputations.
“AI can be a powerful tool for business, supporting planning, enabling productivity and profitability,” says Peters.
“But it comes with risks, and companies that fail to put steps in place to ensure they use artificial intelligence ethically will face difficulties in the longer term.”
This week, Demis Hassabis told The Guardian that a new international body was needed to supervise the AI industry. “We must take the risks of AI as seriously as other major global challenges, like climate change,” he said.
The World Economic Forum estimates that AI will “displace” 85m jobs by 2025, but adds that it will also create 97m jobs over the same period.
Many are sceptical of those figures, pointing out that variables such as the rate of take-up, levels of investment, progress in R&D and the ready availability of AI experts will all shape how the technology develops.
In an article for Board Agenda, Henley Business School professors Nada and Andrew Kakabadse warn boards to keep watch for “unproven tech”, as well as the risks of creating inequality and job “displacement”.
“Always keep in mind the ethical questions about how AI will impact the labour market and society as a whole,” they write.
Business leaders themselves appear to view AI with optimism. A survey of 1,200 chief executives by EY, a professional services firm, reveals 65% believe it is a “force for good”.
However, their confidence is tempered by wariness. The same survey found 67% believe business leaders need to “focus” on AI ethics. Another 64% worry about AI’s “unintended consequences”.
According to Andrea Guerzoni, EY global vice chair, CEOs reflect a “sometimes dystopian” view of AI portrayed in the media.
“They see a role for business to address these fears,” he says, “an opportunity to engage on the ethical implications of AI and how its use could impact key areas of our lives, such as privacy.”
There are still many unknowns about the capability and impact of AI. Business views it as a potential value-added tool but acknowledges at the same time that there are risks, including around ethics. The IBE’s latest guidance may help companies find a way through.