The must-have Christmas gift, at least among my savvy friends, was a virtual assistant. Whether it was Alexa, Siri or Cortana, the future world of accessible artificial intelligence (AI) is in our homes and becoming mainstream.
The rapid development and evolution of AI technologies, while unleashing opportunities for business and communities across the world, have prompted a number of important overarching questions that go beyond the walls of academia and high-tech research centres in Silicon Valley, and into our boardrooms.
Governments, business and the public alike are demanding more accountability in the way that AI technologies are used, and are seeking solutions to the legal and ethical issues that will emerge from the growing integration of AI into people's daily lives.
AI is by its very nature complex and technical, so how can boards be sure they have the right knowledge and capabilities to ask the right questions about AI developments in their business, and to ensure that AI does not pose a future risk?
Considering the ethical dimensions of AI can help mitigate those risks. To assist boards in their oversight, the Institute of Business Ethics (IBE) has developed an ethical framework to guide those discussions.
AI technologies are not ethical or unethical per se. The real issue concerns the use that business makes of AI, which should never undermine human ethical values.
As a non-executive director, how do you engage with your business if you don’t feel confident that you know enough about the technology in question?
The IBE has worked with organisations and technology experts to identify the founding values that form the cornerstone of an ethical framework for AI in business. These are: accuracy, respect for privacy, transparency and openness, interpretability, control, impact, accountability and learning.
Looking at the development of AI within their own businesses through the lens of this framework will assist boards in their decision-making, and help them ask the right questions of their business.
Accuracy

Companies need to ensure that the AI systems they employ can be relied upon to produce correct, precise and reliable results. It is critical that the machine-learning algorithms that drive AI decision-making are trained on diverse sets of data in order to prevent bias.
Because AI can learn from data gathered from humans, it might be that some human biases are reflected in the machine’s decision-making. This indicates how, even in the era of AI, influencing human behaviour to embed ethical values should remain at the forefront of every conversation about business ethics.
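To make this concrete, below is a minimal, hypothetical sketch (in Python; it is not part of the IBE framework) of the kind of check a data science team might run: comparing a model's selection rates across groups, using the "four-fifths rule" from US employment guidance as a rough heuristic. The data, group names and threshold are invented for illustration.

```python
# Hypothetical sketch: compare a model's selection rates across groups.
# All data and names here are invented for illustration.
from collections import defaultdict

# (prediction, group) pairs from an imaginary hiring model:
# 1 = recommended for interview, 0 = rejected.
predictions = [
    (1, "group_a"), (1, "group_a"), (0, "group_a"), (1, "group_a"),
    (0, "group_b"), (0, "group_b"), (1, "group_b"), (0, "group_b"),
]

totals, positives = defaultdict(int), defaultdict(int)
for outcome, group in predictions:
    totals[group] += 1
    positives[group] += outcome

# Selection rate per group; a large gap can signal bias inherited
# from historical training data.
rates = {g: positives[g] / totals[g] for g in totals}
print(rates)  # {'group_a': 0.75, 'group_b': 0.25}

# "Four-fifths rule": flag any group whose selection rate falls
# below 80% of the best-performing group's rate.
best = max(rates.values())
flagged = [g for g, r in rates.items() if r < 0.8 * best]
print("Groups below the four-fifths threshold:", flagged)
```

A check like this does not prove fairness, but it gives boards a concrete, reportable number to ask for.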
Respect for privacy
Machine-learning technologies have brought about new ethical issues related to the respect for privacy. The forthcoming GDPR (General Data Protection Regulation) enshrines the principle that everyone has the right to the protection of their personal data.
Transparency and openness
Open-source material in computer science helps the development community to better understand how AI works, and to explain it more accurately to the public and the media; this in turn helps to build trust. Recently, Microsoft, Google, Facebook and Amazon have all released much of their work to the public for free use, exploration, adaptation and improvement.
Interpretability

As AI algorithms increase in complexity, it becomes more difficult to make sense of how they work. The use of "black box" algorithms makes it difficult not only to identify when things go wrong, but also to determine who is responsible in the event of any damage or ethical lapse.
Interpretable and explainable AI will be essential for business and the public to understand, trust and effectively manage "intelligent" machines. Organisations that design and use algorithms need to take care to produce models that are as simple as possible, so that the workings of these complex machines can be explained.
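One practical expression of this principle is to prefer models whose logic can be rendered in human-readable form. As a minimal sketch (assuming the open-source scikit-learn library; the dataset and settings are illustrative, not a recommendation), a deliberately shallow decision tree can be printed as plain if/else rules that a non-specialist can read, in a way that a deep "black box" model cannot:

```python
# Minimal sketch using scikit-learn: an interpretable model whose
# decision logic can be printed as human-readable rules.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_iris()  # a small, public demonstration dataset

# Keep the model deliberately simple: a depth-2 tree trades some
# predictive power for rules that can be read aloud in a boardroom.
tree = DecisionTreeClassifier(max_depth=2, random_state=0)
tree.fit(data.data, data.target)

# export_text renders the fitted tree as plain if/else rules.
print(export_text(tree, feature_names=list(data.feature_names)))
```

The design choice here, accepting a simpler model in exchange for explainability, is exactly the trade-off the framework asks boards to weigh.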
Control

Much public scepticism over the future of AI is fuelled by the fear that humans might lose control of intelligent machines, which would then prevail—and possibly wipe out humanity altogether.
To have full control over AI systems, it is important that both companies and algorithm designers work only with technology that they fully understand. Being able to explain the functionality of a technology they claim to control is essential for building trust with employees, customers and all other stakeholders. It also minimises the risk of misuse, such as other parties exploiting the technology for personal gain.
Companies also need robust control of the system’s development process to ensure there is sufficient scrutiny and testing of algorithms for bias or misuse.
Impact

In an environment where new machine-learning technologies are created and developed at a fast pace, companies might be tempted to adopt them simply to stay ahead of the game and on top of the latest technological advances, rather than because they really need them or because they benefit the business.
Just because a company can use a certain AI technology, it doesn’t necessarily mean that it should. The Confederation of British Industry says that measuring the impact of AI is important to help companies avoid unnecessary costs and potential risks deriving from the use of inadequate or inappropriate technologies.
Measuring the potential impact of a new technology before adopting it can identify undesired side-effects and the consequent ethical risks.
Accountability

Accountability is a central tenet in corporate governance; there should always be a line of responsibility for business actions to establish who has to answer for the consequences.
AI systems introduce an additional strand of complexity: who is responsible for the outcome of the decision-making process of an artificial agent? This is compounded by the fact that companies largely outsource AI development rather than developing it in-house. Machines, as such, are not moral agents and therefore cannot be held responsible for their actions.
Who should be accountable, then, when an AI system violates ethical values? Should it be the designer of the algorithm, the company that adopts it or the final user? It is difficult to provide a univocal answer, and consequently, a rich debate has flourished on this topic.
Although the question of responsibility remains largely unanswered, a valuable approach would be for each of the parties involved to behave as though they were ultimately responsible.
Learning

To maximise the potential of AI, people need to learn how it works and discover the most efficient and effective ways to use it. Employees and other stakeholders need to be empowered to take personal responsibility for the consequences of their use of AI, and they need to be provided with the skills to do so—not only the technical skills to build it or use it, but also an understanding of the potential ethical implications that it can have. This includes boards.
It is important that companies improve their communications regarding AI, so that people feel they are part of its development and not just its passive recipients—or even its victims.
And businesses must engage with external stakeholders, including media reporters and the general public, to improve their understanding of the technologies in use and ensure that they can assess more accurately the impact of AI on their lives.
Questions for the board to ask
• What is the purpose of our business, and what AI do we need to achieve it?
• Do we understand how these systems work? Are we in control of this technology?
• Who benefits and who carries the risks related to the adoption of the new technology?
• Who bears the costs of it? Would it be considered fair if it became widely known?
• What are the ethical dimensions and what values are at stake?
• What might be the unexpected consequences?
• Do we have other options that are less risky?
• What is the governance process for introducing AI?
• Who is responsible for AI?
• How is the impact of AI to be monitored?
• Have the risks of its usage been considered?
Philippa Foster Back CBE is director of the Institute of Business Ethics.