Artificial intelligence (AI) is an increasingly prevalent part of many companies’ day-to-day operations. While AI promises significant bounties, it brings with it a raft of potentially significant legal and ethical risks, such as liability and reputational harm arising from misuse or loss of data.
This article focuses on one specific aspect of AI: its frequent reliance upon the processing, at significant scale, of personal data. We will look at what senior management need to know about the personal data risks raised by AI and the increased focus of the Information Commissioner’s Office (ICO) on those risks.
AI, GDPR and data privacy risks
Since the advent of the General Data Protection Regulation (GDPR), risks arising from privacy issues have been brought into focus, whether these relate to data security breaches, regulatory actions and fines, or claims from data subjects.
In light of this, it is more important than ever for senior management to be up to speed with the latest guidance from the ICO.
The ICO’s guidance on AI and data protection helps organisations interpret how GDPR applies to the use of AI systems, and also applies to the use of statistical models that do not involve AI.
One of the key themes of the guidance is that directors and senior management should be accountable for understanding and addressing the complexities inherent in the use of AI in respect of data protection compliance, and should take the following steps:
Ensure diversity and resource: While increasing the skill base of senior management is always helpful, management teams should also ensure that they are advised by diverse and well-resourced teams. The resource and investment allocated to AI should be proportionate to and evolve with an organisation’s use of and reliance on AI.
Engage with DPIAs early on: One of the key accountability tools under GDPR is the use of Data Protection Impact Assessments (DPIAs). The AI guidance noted that using AI to process personal data is highly likely to require a DPIA under GDPR. Preparing a DPIA at early stages of project development is crucial to its effectiveness. DPIAs for AI should consider potential bias or inaccuracy in the algorithm or data sets and the potential detriment to individuals.
Know your responsibilities as controller/processor: Under GDPR, organisations which determine the means and purposes of personal data processing activities are controllers and will bear more responsibility. When engaging with new AI service providers, organisations should not assume that the service provider will always act as a processor. Instead, due diligence should be carried out to understand whether the service provider will use customer data to improve its own algorithms. Such processing is likely to mean that the service provider also acts as an independent controller, which requires different due diligence, contractual considerations and apportionment of liability.
The GDPR principles of fairness, lawfulness and transparency also continue to apply in the context of AI. Senior management will need to have policies, processes and people in place to address these issues.
Mitigating biased and discriminatory results: Human bias embedded in the training data, the algorithm design and the way AI is deployed can all create biased and discriminatory results. The ICO’s AI guidance requires senior management teams to have a sufficient understanding of the different approaches available to mitigate such bias and discrimination, including their limitations and advantages, and to sign off the chosen approach. Some of the mitigation steps are also likely to be required by the Equality Act 2010 as “reasonable adjustments”.
Explain your use of AI: The transparency principle of GDPR requires organisations to inform individuals how their personal data are being used. Explaining how AI systems process personal data is challenging not only because of the complexity of those systems, but also because of the potential security risks and risks to confidential information that could arise from disclosing certain information. Trade-offs between these competing interests should be considered as part of the due diligence process.
Automated decisions: Where an organisation is unable to explain the output of an AI system and, in practice, will only ever follow the system’s recommendation, the processing activity becomes “solely automated processing”. Where solely automated processing may have legal or similarly significant effects on individuals, GDPR requires organisations to allow individuals to request human intervention, which can increase the cost of operating the AI system.
In summary, the ICO’s AI guidance requires senior management teams to be aware that data protection risk posed by AI systems is most effectively managed if it can be assessed and mitigated at the design stage. While there is no requirement for all senior management members to be experts in AI, the guidance does require senior management teams to have sufficient conceptual knowledge of the different types of AI risks in order to have meaningful discussions within the organisation and sign off policies and decisions to mitigate such risks.
Personal liability of senior management
It is also worth noting that under the Data Protection Act 2018, while most of the liability under GDPR falls on the data controller, directors, officers and managers can be personally prosecuted if a data protection offence is committed with their “consent, connivance or neglect”.
These are terms of art, but in essence they mean personal liability will attach where the company has committed an offence and: (i) the directors knew that the company was committing the relevant offence and either approved it (consent) or took no steps to stop it (connivance); or (ii) the directors did not know an offence was being committed but had enough knowledge to put them on enquiry (i.e. to ask more questions) (neglect).
What amounts to neglect will be fact-sensitive to each case, but an important consideration is how closely connected the directors’ roles are to the offence that was committed. This aspect should be of the most concern to directors, particularly those charged with responsibility for data compliance, as it means that liability can arise even where there is no malicious intent but there has been a failure of management and supervision attributable to the director.
For example, suppose a company is found criminally liable under section 173 of the DPA 2018 after it is revealed that the consumer data team altered personal data in order to prevent it from being disclosed in response to an access request from the data subject. The director(s) responsible for oversight of the data team could face criminal investigation and prosecution if they had day-to-day involvement in the team but there were very few mechanisms or controls in place to identify and prevent such offences occurring.
Increasing attention from regulators
GDPR regulates only the processing of personal data by AI systems, and it is by no means the only relevant regulatory regime. As AI systems become more integrated into our lives, more software and AI is likely to be caught by product liability regimes, especially systems that could cause physical harm.
For example, in the regulated medicine and healthcare sectors, upcoming regulations are already categorising certain software as medical devices which will be subject to more stringent regulation. Senior management therefore also need to be aware of what their industry regulators, as well as the ICO, are doing with respect to AI.
In summary, directors should stay up to date on their business’s current use of, and plans for, AI, and should be able to answer the following questions (likely with assistance from the company CTO):
- Do you know where you use AI within your company, what it is used for and how it is used?
- Who do you rely on both within your company and externally to procure, manage and maintain the AI you use? Are their roles and responsibilities clear, distinct and regularly reviewed?
- How do you monitor the performance of the AI you are using? How, when and by whom is this reported to the board?
Matthew Walker is a partner, David Varney is a director and Tom Whittaker is an associate at independent UK law firm Burges Salmon.