OpenAI, the company behind the pioneering ChatGPT, could be on the brink of overhauling its governance structure, doing away with its nonprofit elements, as it seeks to raise vast sums in new investment capital.
The reports come only nine months after co-founder and chief executive Sam Altman was controversially ousted from the company he helped to build, then reinstated, all within the space of a few days.
Reuters reports that Altman has held talks about converting the company's nonprofit structure into a for-profit arrangement, as OpenAI also negotiates a financing round that could value the company at $150bn.
However, Reuters says the valuation may depend on changes to the corporate structure and governance.
OpenAI currently has a hybrid governance structure in which an umbrella nonprofit, OpenAI Inc, controls a for-profit subsidiary, OpenAI Limited Partnership (LP), through the general partner OpenAI GP LLC.
Profits of the LP are capped. The arrangement was designed to keep the company focused on its core mission: AI that "benefits all of humanity".
Humans’ rights
Business Insider writes that observers are worried Altman is “losing sight” of the company’s humanitarian mission. Reuters quotes a company source saying: “The nonprofit is core to our mission and will continue to exist.”
OpenAI caused a global sensation when it launched ChatGPT to the public in late 2022 and followed with new versions in 2023.
In November last year, however, the company was plunged into crisis when the board parted company with Altman, alleging he had been "not consistently candid" over issues such as OpenAI's pursuit of profit.
The affair left governance experts around the world debating the structures required to keep AI safe and focused on a beneficial mission.
Elon Musk, owner of X (formerly Twitter) and chief executive of Tesla and SpaceX, launched and then dropped a lawsuit claiming the company had abandoned its founding mission. At the time, Microsoft CEO Satya Nadella suggested AI governance had to change.
Public governance
Harvard academic Roberto Tallarita wrote in the Harvard Business Review that corporate governance was unable to handle the risks inherent in AI and that only governments were up to the task.
Tallarita wrote: “Even the most creative corporate governance innovations cannot be a long-term substitute for the public governance of catastrophic risks.
“While good corporate governance can help in the transitional phase, the government should quickly recognise its inevitable role in AI safety and step up to the historic task.”
One governance change that seems certain is Sam Altman’s departure from OpenAI’s safety and security committee. A revamped committee will now consist only of independent members.
The committee will be chaired by Zico Kolter, director of the Machine Learning Department at Carnegie Mellon University. Another appointment is Paul Nakasone, a retired US Army general and former director of the US National Security Agency.
OpenAI says: “The safety and security committee will be briefed by company leadership on safety evaluation for major model releases and will, along with the full board, exercise oversight over model launches, including the authority to delay a release until safety concerns are addressed.”