Noise surrounding the corporate use of data for potentially unethical purposes has increased by dozens of decibels in recent weeks.
During the summer, the Competition and Markets Authority (CMA) hosted a symposium at which consumer vulnerability was discussed, highlighting concerns that older people, those with mental health issues, and workers on low incomes face disadvantages in dealing with product and service providers in an increasingly digital world.
These types of customers may stay with service providers longer and struggle to shop around, paying what has become known as the “loyalty penalty”.
While the use of third parties, especially comparison websites, to help customers find better deals has opened up the banking and energy sectors, CMA chief executive Andrea Coscelli recently suggested that principles-based regulation may have a "potential role" in managing the welter of customer information that businesses hold, with principles set against abusing that data.
Two days after Coscelli’s comments, the CMA announced it would investigate a “super complaint” by Citizens Advice about loyalty penalties. Another two days later, the Financial Times reported that business secretary Greg Clark wants the competition watchdog to look more broadly at whether businesses use data to unfairly take advantage of customers, leading to “exploitative and abusive outcomes”.
Key questions
These developments raise key questions. How do companies approach ethics when it comes to the treatment of their customer base? How much control will businesses give machines, through artificial intelligence (AI), to make decisions about customer treatment? And could they build in the nuance needed to stop a "robot" from squeezing every last penny from consumers, particularly vulnerable ones, without due care?
For Peter Montagnon, associate director at the Institute of Business Ethics, cybersecurity may be at the front of boards' minds, but on AI risk it seems they are "being quite slow" to catch up.
However, for Montagnon, the option of an “AI advocate” on a board is one that takes too narrow a view of the issue.
“On balance it does not seem right to appoint a single AI advocate or representative as boards could come to rely too heavily on the expertise of one person, when the issues thrown up by AI are ones for which the whole board should be responsible.”
He would rather AI were not seen as a tech issue, but kept within the sphere of governance, transparency and duty of care, with boards remaining cognisant of technological developments and the risks and opportunities those developments present.
“Boards do need a certain level of expertise and they also have to remember that they are entitled to seek outside advice. Increasingly, however, all directors should be sensitised to the risks and opportunities around AI.”
He adds: “You do not, by the way, necessarily need a high level of technical expertise to recognise and address the issues.
“Indeed, an ability to stand back and see the broader context can be helpful. One concern we have is that all of this ends up in the ‘too difficult’ box and nobody addresses it.”
The institute is encouraging companies to consider AI and "big data" in their internal codes of ethics and behaviour.
However, Montagnon is wary about “wholesale” regulation in an area that is still developing.
“Consistent application of ethical principles is likely to yield better outcomes for the time being,” he concludes.