A leading expert from Harvard University says corporate governance is unable to “handle” the potentially catastrophic risks of artificial intelligence, and that governments will need to step up to what he describes as an “historic task”.
The claim comes from Roberto Tallarita, a professor who has written extensively on modern corporate governance, in an article for the Harvard Business Review.
Tallarita concludes that, in the event that AI becomes “uncontrollable”, it will be too late for corporate governance, or for a board concerned with safety, to “pull the plug”. Given the risks, only governments are in a position to regulate.
“Even the most creative corporate governance innovations cannot be a long-term substitute for the public governance of catastrophic risks.
“While good corporate governance can help in the transitional phase, the government should quickly recognise its inevitable role in AI safety and step up to the historic task.”
Tallarita’s warning about the limits of corporate governance in ensuring the safety of AI comes after the boardroom debacle at OpenAI, developer of the AI tool ChatGPT, which saw the sacking of chief executive Sam Altman, only for him to return a week later.
Altman was fired by a “not-for-profit” board, charged with enforcing the company charter of providing “benefits to all humanity”, after it concluded the CEO was “not consistently candid”.
The board came under pressure from “for-profit” shareholders, including Microsoft, resulting in Altman’s return and the appointment of new board members.
In his article, Tallarita presents five lessons to be drawn from the episode. The first is that if, like OpenAI in its original charter, a company is serious about its social purpose and stakeholder welfare, “it cannot rely on traditional corporate governance, but it must constrain the power of both investors and executives”.
Catch the drift
Secondly, he notes that a “creative” governance structure will “struggle” to contain the profit motive, something economists Oliver Hart and Luigi Zingales call “amoral drift”.
“Perhaps it is possible to design a waterproof solution to avoid the amoral drift,” writes Tallarita. “So far, however, no corporate planner has come up with one.”
The article also observes that it will not be enough for governance structures to focus simply on “independence” from executives and investors. They will also need mechanisms that “encourage directors to pursue the social goal”. This might require external scrutiny of board decisions and socially designed “incentives”.
Next, Tallarita focuses on how governance might “align” profit with safety. “An alternative route is to try to make AI safety profitable. The best hope for the private governance of AI safety (if such a thing is achievable at all) is to strike an alliance with the profit motive.”
Lastly, board composition should become a “top priority” at AI companies. The key element they need is “cognitive distance”: the difference between the way AI developers and outsiders see the risks inherent in AI. Counterintuitively, the board that fired Altman may have had too little distance for effective decision-making because it included many AI experts.
The new board has more mainstream business people and, likewise, may possess too little distance.
“These companies,” writes Tallarita, “should strive for greater cognitive distance than more conventional companies and their boardroom norms should aggressively reward time commitment and robust open-minded discussion.”
Events at OpenAI were a shock to the world, given the long-flagged risks associated with artificial intelligence. Tallarita’s article highlights, more clearly than most, that managing AI is as much a matter of governance as it is of ethics or sheer computing power.