Companies need a new approach to managing the risks presented by machine learning (ML) and artificial intelligence, according to academics, one that includes sharing the governance role with stakeholders.
According to legal experts, despite eagerness among businesses to adopt machine learning (an application of artificial intelligence in which machines harvest data to work out optimal decisions on their own), “corporations seem to be slower to recognise the need to manage the risk of deploying ML systems”.
Iris Chiu, of UCL in London, and Ernest Lim, of the National University of Singapore, propose in a new paper that companies need to take a “corporate responsibility” approach to ML because of the unique risks that it presents. They call this a “thick and broad” approach.
“Such a framework should integrate corporations’ private interests and the public aspect of their power and citizenship, so that the use of ML is integrally located within business-society relations,” they write.
But in identifying the governance steps needed to keep ML under control, they write that stakeholders should play a central role.
“There should be proactive engagement on the part of companies rather than waiting for complaints to arise,” they write.
“Companies should be willing to treat their stakeholders, both internal and external, as potential gatekeepers to co-governing the development and use of innovation such as ML.”
ML risk and governance
It’s not hard to see why the pair would choose corporate responsibility and a stakeholder approach to ML governance. The use of machine learning to process data has the potential to produce decisions riven by bias. There have been concerns that ML decisions have been racist and sexist; some facial recognition software has performed noticeably better on white skin and male faces than on others.
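This kind of bias is often surfaced by comparing a model’s error rates across demographic groups. The sketch below is a minimal, hypothetical illustration of such an audit; the group names, data, and function names are invented for the example, not drawn from any system discussed in the article.

```python
# Minimal sketch of a disparity audit: compare error rates across
# demographic groups. All data and group labels here are hypothetical.

def error_rate(y_true, y_pred):
    """Fraction of predictions that are wrong."""
    wrong = sum(1 for t, p in zip(y_true, y_pred) if t != p)
    return wrong / len(y_true)

def disparity_report(results_by_group):
    """results_by_group maps group name -> (true labels, predicted labels).
    Returns per-group error rates and the largest gap between groups."""
    rates = {g: error_rate(t, p) for g, (t, p) in results_by_group.items()}
    gap = max(rates.values()) - min(rates.values())
    return rates, gap

# Hypothetical face-verification outcomes (1 = match, 0 = no match):
results = {
    "group_a": ([1, 1, 0, 1, 0, 1, 1, 0], [1, 1, 0, 1, 0, 1, 1, 0]),  # no errors
    "group_b": ([1, 0, 1, 1, 0, 1, 0, 1], [0, 0, 1, 0, 0, 1, 0, 1]),  # two errors
}
rates, gap = disparity_report(results)
print(rates, gap)
```

A large gap between the best- and worst-served group is exactly the kind of signal that, under the governance approach Chiu and Lim describe, would be escalated beyond a siloed technical team.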
San Francisco became the first US city to ban facial recognition technology in 2019, while recent reports suggest the European Union is working on new legislation to restrict the use of high-risk AI technology.
Chiu and Lim identify a number of risks including legal liability risks, the possibility of infringing regulation, or reputational risk when dealing with “stakeholders or communities in possibly disorientated or frayed relations”.
They suggest ML governance should become an issue for senior management and the boardroom, and argue that ML should be considered a “matter of culture”.
ML governance should also be “enterprise-wide” rather than leaving technology decisions in “siloed departments”. They also suggest the issue needs proactive management and more disclosure, at least on a voluntary basis, about the way ML is used inside companies.
But the inclusion of stakeholders in governance decisions remains the most attention-grabbing recommendation.
Ethics, governance and board skills
Chiu and Lim follow a host of others in warning about the ethics and governance of artificial intelligence. Other recent research recommended that AI requires two levels of governance: one to oversee operational issues, and the other to address ethics. Its authors, Maria Lillà Montagnani of Stanford University and Maria Lucia Passador at Harvard, conclude: “AI can indeed play a key role in corporate boards, but it also creates significant risks which can only be properly addressed if the corporate structure is designed to cope with a more extensive use of AI.”
Meanwhile, boardroom skills may be an issue. A survey by Board Agenda found that while companies are enthusiastic about using AI, boardroom know-how may be lacking. More than half the survey respondents (53%) claim their boards are not sufficiently skilled or knowledgeable about AI and its implications for business and industry.
Despite its presence almost everywhere, AI remains a relatively new technology. And while the tech may be seductive, the values and ethics around its use are still developing. Addressing the governance issues would be a step forward.