With technology advancing at a dizzying pace, there has been much talk of introducing “robot directors”, or artificial intelligence, to the boardroom.
However, long-standing concerns that AI might short-circuit legal protections for human directors have so far kept the boardroom mostly off limits for advanced technologies. A new paper from legal experts claims those fears can be addressed.
Joseph Lee and Peter Underwood of the University of Exeter argue that protections for flesh-and-blood board members can be maintained with a change in the law, the introduction of specialist directors and new regulation to ensure AI technology meets agreed standards.
They also argue that AI can help boards counter “short termism” in decision-making and boost their ability to consider a broad range of stakeholders.
Lee and Underwood write that “not only can AI transform governance to be a more inclusive construct, but that is possible with minor reconsiderations to the current legislation. This in turn will generate a greater value for both shareholders and stakeholders in the longer term.”
‘Robot directors’ and legal duties
There has been much discussion of robot directors. In 2014 Deep Knowledge Futures, a Hong Kong venture capital firm, even appointed a computer algorithm to its board to demonstrate how it could work. But there have been concerns that reliance on AI for decision-making could prompt claims that directors have failed to fulfil their duties under the law.
Lee and Underwood identify two duties directors should be wary of: a duty to exercise “independent judgment” and to apply “reasonable care, skill and diligence”.
They say the risk of a breach of independent judgment could be dealt with by including a provision in the UK Companies Act that a director should act on AI-driven information “honestly”. “Without such legal support, it would be difficult for directors to utilise AI systems without the risk of being in breach of their duties,” they write.
They also argue the demand to act with reasonable care could be tackled by introducing a specialist board member to monitor AI. “An expert in technology will be responsible for overseeing the input of data and checking on the functionality of the AI to ensure it is running in accordance with the agreed coding for decision-making.”
But they also say that boardroom AI should be regulated: before it comes into use “it needs to be first subjected to control checks and issued with a compliance statement”. The statement should be issued by an independent body, they say.
AI in the boardroom
Other proposals have been made to ensure legal protection for directors using AI. Last year Australian lawyer Samar Ashour argued that the problem was the insistence in the law that directors be “natural persons”.
This could be tackled with a change to a company’s constitution, avoiding the need to change the law. “If the use of AI is confined to administrative aspects of corporate governance, such as meeting preparations, as opposed to work that encompasses operational and strategic decision-making, the view can be taken that directors are less likely, if not unlikely, to be held accountable by reason of reliance on AI,” Ashour writes.
Others have observed that AI could “augment” almost half of non-executive tasks. Writing in the Harvard Business Review, Ravin Jesuthasan and Shai Ganu of Willis Towers Watson say: “NEDs could likely spend more of their time as internal consultants and advisers to the CEO and management.”
The exploration of AI and how it could become an effective board member is still under way. There is clearly an interest in business circles. It may just need the political will now to bring about changes in the law and make it happen.