The European Union’s new rules on artificial intelligence (AI) are due to come into force imminently after the bloc’s countries endorsed a political deal reached last December.
The EU’s AI Act is a watershed development that sets a potential global benchmark for a technology being adopted at an increasing pace by both consumers and businesses. It will govern the future use of AI, including governments’ use of it in biometric surveillance and the regulation of generative AI (genAI) systems such as ChatGPT.
It seeks to foster the development and uptake of safe and trustworthy AI systems across the EU’s single market, whilst also ensuring respect for the fundamental rights of EU citizens and stimulating further investment and innovation in new technologies. It’s quite a task, and one that highlights the rapid development of AI, the opportunities it presents, and the concerns that surround it.
It’s more comprehensive than current legislation in other markets. The US, for example, has so far adopted a light-touch voluntary compliance approach, although President Biden issued an executive order requiring AI developers to share safety results with the government. China, meanwhile, has introduced a patchwork of guidelines in recent years, aimed at maintaining social stability and state control.
It will now be interesting to see how other markets react to the ratification of the EU’s AI Act, and whether they will follow suit and roll out further legislation. But is this what firms want?
Mixed reactions
The new law will classify products according to risk and adjust scrutiny accordingly: the higher the perceived risk, the stricter the rules. Once the act comes into force, its provisions will take effect in stages, with EU member countries required to ban prohibited AI systems within six months.
These provisions include steps to tackle the risks posed by chatbots and other genAI tools that financial services firms are using to enhance user experience and boost customer service levels. Producers of the underlying AI systems will be required to be more transparent about the material used to train their models in order to comply with EU copyright law.
Firms operating throughout Europe will be carefully considering how they can best comply with the new legislation—and they have mixed feelings about it.
Broadridge’s latest Digital Transformation & Next-Gen Technology Study shows that AI regulation is dividing opinion among European financial services firms.
While nearly a third (31%) agree that AI should be tightly regulated because of the underlying risks, more than half (54%) of firms operating at the global level believe financial firms should be allowed to self-regulate their adoption of AI. More than two-fifths (43%) of European firms also believe technological innovation is moving too fast for regulators to keep pace.
While the act’s requirement to build mechanisms for greater human accountability and oversight into AI processes makes sense, some worry that its prescriptive, rather than adaptive, nature could restrain future innovation and the further evolution of AI technologies. This helps explain why many firms seemingly prefer self-regulation.
Another worry is that heavy regulations often favour established incumbents with the resources, such as extensive compliance teams, that are required to navigate complex legal landscapes. Could this hinder competition by creating new barriers to entry for fintechs and other startups?
Getting your house in order
Given that almost all the firms we surveyed in Europe (94%) are investing in AI to some extent, the AI Act is certainly going to have a significant impact when it’s rolled out.
Now is the time for boards to take a much closer look at their current governance frameworks and controls, and to identify any gaps that need addressing in order to become fully compliant with the new legislation.
Three in four (74%) European firms believe that AI will lead to a significant improvement in customer experience, but they also need to remember the importance of clear communication, especially at a time when new regulations are introducing new considerations.
Our recent CX & Communications Consumer Insights Report reveals that whilst 33% of consumers say AI has improved their overall experience, it’s also spurring some new concerns.
AI is an extremely powerful new technology whose capabilities will only accelerate, and its potential for disruption is massive. It’s therefore natural that many consumers have expressed some degree of trepidation and worry about the loss of human interaction. Nearly two-thirds (65%) note that AI lacks the sense of empathy they consider valuable in company communications, while 53% don’t yet trust AI in the communications they receive from companies.
Responsible implementation will help to alleviate such worries, as will using AI to augment—rather than replace—human talent.
More than eight in ten (82%) consumers want firms to be more transparent about their plans for user data. Showing that your company is committed to data security, and being transparent about relevant practices whilst complying with the new regulation, will also help to ease any consumer hesitation about the use of AI tools.
The EU’s AI Act has received mixed reviews from financial services firms. It offers some initial insight into how governments are beginning to view AI, but can these regulations really keep pace with the innovation that AI can deliver? That remains to be seen.
By working together with financial services firms and sharing collective insights and common frameworks, governments and regulatory bodies may be able to arrive at the right balance.
Tom Carey is president, global technology and operations, at fintech Broadridge International.