Chief executives should lead the governance of artificial intelligence inside their companies as the technology becomes increasingly influential in business strategy, according to a senior AI expert.
Maria Axente, head of AI public policy at PwC and a renowned author on the practical uses of the technology, told a breakfast briefing that no other corporate role has the authority to make the key decisions needed to manage the governance issues raised by the application of artificial intelligence.
In particular, Axente said it could not be the sole responsibility of general counsels—in-house lawyers—because AI poses questions that go beyond compliance.
“Ultimately, those issues need to be owned by the CEO,” said Axente, “no one lower than the CEO.
“Not the legal function, because Legal can very easily go down the compliance route.”
Axente said only the CEO could make the key calls on the “ethical” issues that come with AI.
She offered the example of an HR department in which several people received authorisation to create chatbots, raising the possibility that staff might end up talking to several different bots rather than a human HR staff member.
“This is not a question for Legal. It is not a question for Risk. It’s more a question of, ‘How do we work together as employees?’”
Axente’s remarks came as part of a special briefing, hosted by Board Agenda and Diligent, to discuss not only where AI is being integrated into business but also how it raises key governance concerns.
AI regulation
AI also raises key regulatory questions. The new UK government has pledged new rules for the development and use of AI, while the European Union has already introduced the AI Act, which imposes reporting and disclosure duties on AI developers.
The impact of ChatGPT since its launch in November 2022 has driven businesses and other organisations into a rush for generative AI products, as well as much soul-searching over the way the technology can be applied.
Axente’s comments came in response to Diligent’s Dale Waterman, a legal and compliance expert previously with Microsoft, who said general counsels were set to play a greater role in managing AI because of the new AI laws coming into force.
“They also play a role where they can help with ethical decision making, in the sense of they can give guidance to their CEO, if they have a good relationship.”
Fellow panellist Caroline Cartellieri, a non-executive director and consultant on digital transformation, said boards would have to come to terms with the cybersecurity and regulatory risks posed by genAI, as well as the fact that the technology currently “hallucinates”, meaning it invents answers.
“You need to think about the guardrails and the frameworks to put in place. They need to be relatively broadly defined, because you can’t possibly anticipate all the different things that could happen,” she said.
Panellists agreed that board members should engage their curiosity about AI and educate themselves about the technology.
John Davies, governance director with the consultancy Beyond Governance, told the briefing that existing board structures should be able to cope with AI governance.
“The last thing you want is a new committee, and to create a new set of meetings.
“We’ve got audit and risk committees, so maybe look at their terms of reference. But get this on the agenda.”
Read Diligent and Board Agenda’s report: Demystifying AI: What does it really mean for the boardroom?