Whether we like it or not, technological advances are reshaping the way companies do business. Artificial intelligence, which involves machines processing huge amounts of data, is becoming more and more commonplace, creating both opportunity on a grand scale and a daunting level of risk.
Because boardrooms are the place where opportunity and risk are assessed and judgements made, technology is now on the governance agenda. Directors who have shied away from tackling the subject in the past have less and less excuse for ignoring it, but the question is: where and how to begin?
At the Institute of Business Ethics we have been trying to work out some of the answers to the complex questions posed by the increasing use of AI. This article offers some tentative conclusions based on a set of core observations and principles. These are:
- Data belongs to the person to whom it relates. It has an economic value which may be increased, sometimes in unexpected ways, by amalgamation with the data of others. An important principle enshrined in Europe’s new General Data Protection Regulation is that when companies use data in this way, they must respect the rights of the original owner.
- Access to data confers an information advantage on the person or entity obtaining that access. This may create a conflict of interest, for example when it allows firms to exploit the vulnerability of their customers or their employees without their knowledge.
- Human accountability is vital. Machine learning may lead AI to deliver decisions with an accuracy and complexity that defy mere human skills, but there must always be human accountability at the end of the process and steps taken to manage conflicts in a way that is fair and generates trust.
- Boards are responsible for providing this accountability. They do not need to understand exactly how AI works, but they need to consider the implications of what it sets out to do and be prepared to address the risks. For this, most will need reliable technical advice from people they understand and trust.
- In many cases there is a need to draw a line. For example, it may be perfectly acceptable to price airline tickets according to overall demand at a specific time of day, but it is not acceptable to track a passenger’s booking habits in order to push up the price of tickets that one individual is likely to buy (the sketch after this list illustrates the distinction). Airlines that do this and are caught out face serious reputational risk.
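
To make that line concrete, here is a minimal sketch in Python. Everything in it is hypothetical, with names and figures invented for illustration: a demand-based price applies the same adjustment to every buyer, while an individually targeted price singles out one tracked customer.

```python
# Hypothetical illustration of where the line falls in dynamic pricing.
# All figures and rules are invented for the example.

BASE_FARE = 100.0

def demand_based_price(seats_sold: int, capacity: int) -> float:
    """Arguably acceptable: one price for everyone, driven by overall demand."""
    load_factor = seats_sold / capacity
    return round(BASE_FARE * (1 + load_factor), 2)  # fuller flight, higher fare

def individually_targeted_price(base_price: float, likely_to_buy: bool) -> float:
    """The practice the article warns against: a higher fare for one tracked
    individual whose booking history suggests they will pay."""
    return round(base_price * 1.25, 2) if likely_to_buy else base_price

# Every buyer sees the same demand-based fare...
print(demand_based_price(seats_sold=150, capacity=200))        # 175.0
# ...but the targeted fare penalises a single tracked customer.
print(individually_targeted_price(175.0, likely_to_buy=True))  # 218.75
```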
Risk appetite
These principles demand that boards make some important decisions; but while these decisions require understanding of the technology and the issues involved, they are normally ethical and philosophical in nature rather than technical.
The best context for them is as part of regular discussion of risk appetite and risk management. Take the question of driverless cars. It may well be true that the technology is already, or soon will be, sufficiently developed to deliver an overall improvement in road safety, even though there are still plenty of manoeuvres such cars are not equipped to perform. Automated cars are not driven by people who are drunk or, like most of us, subject to momentary lapses of attention. Yet the overall improvement in road safety could be obscured by the occasional disastrous accident attributable to a failure of the technology.
For businesses the critical question is how much risk they are prepared to take to secure the opportunity. In different ways, this question will be played out across sectors, and the key judgement relates to risk appetite.
Consider insurance. The insurance industry can now use information, often compiled by independent data brokers using mostly publicly available sources of information, to sharpen dramatically its estimate of risk. Social media postings related to drinking habits may be a good predictor of driving risk and therefore affect premiums. The ability to analyse and price risk with these techniques can give insurers a commercial advantage.
Yet there is also a reputational risk of being seen to be too intrusive and abusing personal privacy. Moreover, if individual risks can be measured in this way, the traditional insurance model of pooling risk no longer works. Why insure somebody’s life if AI tells you they will die at the age of 42 and six months? Why take out life insurance if AI predicts you will live to 103? For insurance companies these are existential questions, and there is little doubt that, over time, they will change the face of the industry.
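
A toy calculation makes the point. The figures below are invented for illustration: under pooled pricing everyone pays the average expected loss, but once each individual's risk can be predicted, low-risk customers can be quoted less than the pool price elsewhere and leave, unravelling the pool.

```python
# Toy illustration of why accurate individual prediction undermines risk
# pooling. All probabilities and sums are invented for the example.

payout = 100_000  # claim paid if the insured event occurs

# Under pooling, individual risks are unknown; everyone pays the average.
individual_risks = [0.01, 0.01, 0.02, 0.04, 0.12]  # true event probabilities
pooled_premium = payout * sum(individual_risks) / len(individual_risks)
print(f"Pooled premium for everyone: {pooled_premium:,.0f}")  # 4,000

# If AI reveals each person's risk, each can be quoted their own expected loss.
for risk in individual_risks:
    fair_premium = payout * risk
    verdict = "leaves the pool" if fair_premium < pooled_premium else "stays"
    print(f"risk {risk:.2f}: fair premium {fair_premium:>6,.0f} -> {verdict}")

# Low-risk customers leave, the pool's average risk rises, and the
# traditional model of sharing risk across the group unravels.
```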
Different perspectives
To steer their way through these ethical questions, boards need a great deal of wisdom and experience. Directors need to be alert to the issues but cannot simply rely on technology experts to give them the answer. A key skill is bringing together different perspectives to arrive at a viable answer.
For example, how should a company go about restoring its systems after they have been brought down by a hacker? At that stage the tech teams are likely to be extremely risk averse: they will want to know that the defences are now watertight. But the marketing teams will be eager to resume normal business; indeed, the whole company may be at risk if it cannot do so. The judgement for the board in this case is one of business risk, though it will be easier to make if the two teams can talk to each other in language both can understand.
The IBE's findings are that, while technology and the use of big data must be on the board agenda, the context in which they are considered should not be overspecialised; instead, they should be folded into the wider discussion of business and reputational risk.
It is worth emphasising reputational risk, particularly for companies with a strong consumer focus. Trust can be destroyed very quickly by a failure or abuse of technology. Systems that are overly intrusive or biased, or which set out to exploit vulnerability, are likely to result in reputational damage. In 2017 The Australian newspaper reported that Facebook had mined user data to reveal teenagers’ emotional states for advertisers. Despite Facebook’s denials, the article provoked public outrage against the company, which faces a constant global struggle to maintain its reputation.
True competitive advantage comes not from using AI to extract value from customers, but from delivering value to them. As Ginni Rometty, CEO of IBM, says, companies are judged not just by how they use data but by whether they are trusted stewards of other people’s data. Those that consider and respond to the ethical challenges are more likely to be trusted. And those that are trusted are more likely to survive and prosper in the long run.
Governance has a key role in building these reflections into business models. If boards succeed in doing so they will be creating a framework in which new technology can be used to everybody’s advantage.
Peter Montagnon is associate director at the Institute of Business Ethics.