The AI Safety Summit at Bletchley Park may be focused on the developers of artificial intelligence, but the declaration issued by attendees warns all companies intending to use the technology that they have a role to play in managing its risks.
The Bletchley Declaration, signed by representatives from 28 countries and the European Union, says AI should be “used for good”, as well as being “inclusive” and “governable”.
But it also makes clear that all sections of society will play a part in the safe use of AI. “All actors,” the declaration says, “have a role to play in ensuring the safety of AI: nations, international fora and other initiatives, companies, civil society and academia will need to work together.”
The inclusion of “companies” in the statement is expected to prompt downstream corporate users to consider how their use of AI in business models will be supervised and applied without bias.
‘Landmark achievement’
UK prime minister Rishi Sunak described the declaration as a “landmark achievement”, while technology secretary Michelle Donelan said it was a “global effort” to ensure AI’s “safe development”.
The declaration comes just days after the Corporate Governance Institute (CGI) warned that boards failing to stay on top of the risks and opportunities from AI would either fall behind competitors or find themselves handling reputational damage.
According to Peter Swabey, policy and research director at CGI: “Boards will need to develop a governance framework for AI that sets out clear roles and responsibilities, as well as policies and procedures for managing AI risk and opportunities. This framework should be regularly reviewed and updated to reflect changes in the business and the AI landscape.”
Observers suggest there are ways for downstream users of AI to act on the Bletchley principles, with policies focused mainly on transparency about where AI is used and for what purpose, and on the integrity of the data underlying AI decision-making. Other considerations include accountability, processes for challenging AI decisions, and ensuring AI outputs are fair and unbiased.
Pass muster
Jamie Lyons, a technology expert with accountancy body ACCA, underlines transparency as a critical issue, alongside privacy and a legal framework covering liability for AI outcomes.
“To navigate this complex landscape, individuals and organisations must understand and proactively manage these risks.
“Transparency will only be achieved if the policies and strategies of the organisation are designed to ensure accountability and good governance.”
The UK’s Institute of Business Ethics has issued guidelines on AI ethics, but also warned that chief executives would be putting themselves “at risk” unless they appointed specialists to advise on the ethical use of AI.
Ian Peters, director of IBE, said: “AI can be a powerful tool for business, supporting planning, enabling productivity and profitability.
“But it comes with risks, and companies that fail to put steps in place to ensure they use artificial intelligence ethically will face difficulties in the longer term.”
Existential angle
Some critics were concerned by the narrow agenda of the summit: its focus on existential threats from so-called ‘frontier’ AI (technology still under development and potentially far in advance of systems currently available) meant other concerns, such as the impact on employment, were overlooked.
Academics have argued companies have a responsibility to consider the issue. Writing for Board Agenda, Henley Business School professors Andrew and Nada Kakabadse worry about AI’s impact on jobs that require lower levels of education or those in industries sensitive to AI implementation.
“Stakeholders need to be confidently informed and sure about the consistent quality and outcomes of AI decisions that can affect their everyday lives and wellbeing.
“The wide-ranging adoption of AI in business is still in its earlier stages and boards have to remain ahead of the next grand steps.”
AI is here to stay and there are risks to be managed. Boards and their CEOs will have to take them seriously. Given the pace at which the technology is now developing, there is no time to waste.