Watchdogs and regulators should be addressing the use of artificial intelligence in cases where companies are effectively using it as a board director, according to a leading governance expert.
The warning comes after Abu Dhabi’s International Holding Company (IHC), the UAE’s most valuable company, announced it was introducing an AI entity, “Aiden Insight”, to its board to help with decision making.
US-based governance adviser Alissa Kole says there has been “surprisingly little” coverage of the decision, but that it demands attention, not least with regard to the legal ramifications for boardroom accountability.
So far, regulators have not been forced to confront questions about artificial intelligence playing a significant role in boardroom decision-making. But IHC’s recent move may be the catalyst.
“Neither the OECD governance principles nor national standards address the role of AI apart from the expectation that boards consider technology risks,” writes Kole for the Harvard Law School governance blog.
“In order for market regulators not to be caught by surprise—as their peers have been in the face of cryptocurrency or car-sharing innovations—the potential role of AI board members needs to be considered now.”
Rule makers should be looking at the legal responsibilities of AI board members and whether their fiduciary duties are clearly defined. Implications for board committee participation and boardroom diversity are “only now starting to surface,” Kole argues.
“The latest announcement from Abu Dhabi highlights that there is no time to waste.”
Aiden Insight is not the first AI to participate in boardroom decisions. In 2014, Hong Kong-based venture capital firm Deep Knowledge Ventures said it had opened up its board to an “algorithm” named “Vital”, though it was denied at the time that the technology amounted to an AI.
Kole’s concern is not with jurisdictions where laws prohibit anyone but “natural persons” from serving as board members, such as the US, UK and Australia, but with what happens if boards come to rely on an AI so heavily that it holds a de facto “veto” over decision making.
AI and hindsight
In 2017, Deep Knowledge Ventures’ Dmitriy Kaminskiy was quoted as saying: “As a board, we agreed that we would not make positive investment decisions without corroboration by Vital.”
Kole may be right about the urgency of AI regulation. Research in February by Goldman Sachs shows that, in the US, AI was mentioned on as many as 36% of S&P 500 earnings calls.
KPMG recently published research suggesting that more than a third of financial services CEOs were using generative AI, such as ChatGPT, at work, with many seeking financial advice from it. Another third of leaders said they did not trust AI.
Karim Haji, UK head of financial services at KPMG, says: “As leaders continue to get to grips with the technology and learn iteratively, this will not only help build proof-of-concepts around external use cases but instil a culture that generative AI becomes part of, from the top down.”
While there may be few signs of regulatory movement, others have been mulling the appearance of AI in boardrooms for some time.
Last year, Demis Hassabis, chief executive of Google’s UK-based AI unit DeepMind, warned chief executives that they were putting their companies “at risk” unless they had specialist ethics advisers at hand to provide guidance on using AI.
Elsewhere, research suggests companies would need to consider “two layers” of AI governance: one operational, the other ethical.
Generative AI is a tool still under development and its uses are still emerging. But it is evolving fast: some experts have referred to “double exponential” growth in its capabilities. Policymakers and watchdogs may have to move fast if they are to keep pace with its use in the boardroom.