In January, the chief constable of West Midlands Police, Craig Guildford, retired and was referred to the police watchdog. It had emerged that false information sourced through artificial intelligence (AI) played a part in the force's advice, which led to the decision to ban Maccabi Tel Aviv football fans from a Europa League match against Aston Villa.
This is one example in a slew of cases in which AI has been misused and exposed. Other forms of AI manipulation include fraud, distortion, misrepresentation through deepfake videos, cloning, alteration of voice or appearance, and the editing of statements. Leaders increasingly have to ensure their authentic reputation and the trust they command are protected in a digital, algorithm-driven, social-media-constructed world.
The use of AI has gone beyond that of a simple technological tool supporting human work and decision-making. It is changing the way leaders interact and exercise discretionary judgment. AI's self-learning capabilities now outpace humans in efficiently analysing large amounts of data: smart technologies can rapidly reveal predictable scenarios, sense behavioural patterns and propose likely outcomes. In doing so, AI is exposing cognitive and emotional dependencies in a leader's oversight of decision-making.
Ultimately, leaders are held accountable and are open to scrutiny and interrogation. Investing time and effort in finessing their ‘tech-intelligence’ must therefore become a priority—their personal and organisational legitimacy depends on it.
To help leaders protect their integrity, here are six principles to support responsible and transparent AI use in decisions:
1 Open collaboration
Leaders must be seen to be evolving their own awareness and understanding of AI. This cannot simply be delegated to the IT department or the risk function. Leaders need to be familiar with, and comfortable discussing, the implications and impacts of emerging AI. They should promote a 'no such thing as a dumb question' culture that facilitates shared conversations about AI in executive and board-level decision-making. Burying heads in the sand or succumbing to a fear of the unknown is unacceptable. While the decision remains the leader's, it should also involve seeking out the views of trusted colleagues.
2 Intelligence architect
Leaders must clearly position and communicate the acceptable behaviours and applications of the AI ecosystem within the organisation and across its stakeholders. This means investing proportionately, and regularly, to strengthen governance and competitive AI design, including its reach into data-sensitive assets. Leaders have to set the conditions and boundaries of their organisation. Just as leaders monitor and evaluate their teams, the AI footprint should be on their radar too.
3 Validation and corroboration
Leaders must monitor the quality of knowledge being presented to them and question the reliability of the sources. This includes understanding how AI and related technologies have been involved in collating and presenting datasets. A leader needs to be able to interpret evidence, and the attributes used in the formation of evidence.
Motives and agendas, manipulations, errors, biases and intellectual ownership: these are all factors that could distort information, regardless of AI involvement. Leaders should intervene and corroborate multiple sources of evidence in forming consensus. The role of a leader has always been to intuitively question assumptions, challenge rationale and evaluate contextual settings. This supports high quality decision-making by ensuring it is based on accurate knowledge.
4 Key keeper
Leaders must have a custodian mindset. Executive and board members are equally accountable for preserving and guarding material privacy, the flow of information, levels of access and ethical conduct in engaging with AI. Rigorous processes need to be put in place to monitor and enforce each of these.
Leaders should shape how AI is used, not be shaped by it. Security works best when everyone shares responsibility, so leaders need to build a culture in which people think about security first. Leaders have to respect one another and provide guidance where AI cannot: by saying when something is harmful, acknowledging what is invisible, and having the courage to say no. It is the leader who decides how and when AI is used.
5 Policies and protocols
Organisations must adhere to, and comply with, industry and regulatory standards. AI policies need to align across all levels and extend to suppliers, customers, third parties and AI providers. AI risks are a constantly shifting target, so it is vital to review policies regularly and stay up to date with AI developments.
The dark side of AI is becoming more sophisticated, as shown by recent cyber attacks. In this regard, all members of the organisation are leaders and must protect the organisation. Clear policies are essential. Teams must know how to respond to potential situations, what protocol to follow, and who to contact in an emergency. This is the heart of AI governance.
6 Self-awareness
Leaders must make time for self-reflection. The top team need to understand themselves and each other: self-awareness reveals personal strengths and limitations, biases, over-confidence, blind spots, power and politics, and emotional attachments. Building in reflection creates opportunities to zoom out and zoom back in. With AI, the line between the professional and the personal can blur. Leaders' personal online presence also needs to be assessed: publicly displayed profiles should be regularly refreshed and protected.
AI governance is all about ensuring intervention through human oversight: leaders need to act as sounding boards, moderators, challengers, supporters, testers and evaluators. Leadership is often described as lonely, but AI demands more collaboration and empathy from executives and boards. Human judgment is now more important than ever.
Nadeem Khan is programme director of the MA in Board Practice and Directorship at Henley Business School.