Generative artificial intelligence (AI) continues to pose enormous problems and opportunities for boards as we move into 2024.
AI tools can produce text, images and art in an instant, bolstering organisational creativity and productivity while, at the same time, threatening to unleash mass disinformation on internet users.
OpenAI’s ChatGPT, for example, can generate plausible news stories and even academic essays, but it can also mislead by fabricating data and commentary that are unwittingly accepted as fact.
In addition, leaving data in the care of OpenAI can prove problematic. In March 2023, a security breach resulted in some ChatGPT users seeing conversation headings in the tool’s sidebar that weren’t theirs.
For any tech company, accidentally exposing users’ chat histories is a serious breach, particularly given that ChatGPT had 100m monthly active users in January 2023. While the bug was patched, the incident led Italy to ban ChatGPT and demand that OpenAI stop processing Italian users’ data.
The inescapable conclusion is that AI’s introduction into the corporate sphere is increasing governance complexity, as directors and boards are forced to engage with digital technologies.
AI challenges for boards
Existing advice on AI being offered to directors is so general that it borders on the meaningless, raising the question: what must boards understand and act on when it comes to generative AI?
In the first instance, leaders need to define and adopt a clear and accountable action plan that incorporates AI into the organisation’s strategy and business model.
Directors should also recognise the positives AI can offer the organisation. These include faster decision-making, superior handling of multiple data inputs, lower exposure to process fatigue, reduced costs, efficiency gains and superior service delivery.
All of these operational areas can be improved by eliminating human errors induced by psychological or emotional factors.
AI-enhanced management is ultimately realised through information accuracy and superior predictive power. There is a lot to play for here: as AI progresses, the universe of addressable customers expands exponentially.
Board members should also accept the pitfalls of AI and establish sound controls to mitigate them. Even when ChatGPT is demonstrably wrong, many users still accept its output as correct. This makes it a perfect disinformation tool, one that can open organisations up to liability for any false information they distribute.
Service providers do not offer appropriate safeguards or tools, so information validation is left to users’ own detection skills.
Unfortunately for boards, anxiety can override rationally derived conclusions, particularly regarding how best to use technological advances.
Often, too much emphasis is placed on educating users about technology, while insufficient attention is given to the governance and strategic implementation of AI adoption.
This is a particular problem given that AI tools continue to communicate ever more fluently and authoritatively.
A four-step plan for AI adoption
The way forward for boards is to determine the value AI offers and the methods of realising it. This presumes that both management and the board have the insight and capacity to generate organisational advantage.
Technology adoption is only possible if a data-driven strategy shares the information sourced from algorithms across the organisation.
This data availability is crucial but, to date, too many directors lack an appreciation of the opportunities brought by AI, and also allow their disquiet to override learning possibilities. To address the core issue, focus on the following:
• Competitive advantage. In their oversight of the organisation, boards should scrutinise and deliver clarity on competitive advantage. How can technology offer clear advances, at what cost, and how will this differentiate the organisation from its competitors?
• Risk. Take into account the issue of risk. Which vulnerabilities could undermine the organisation, whether the technology is adopted or actively rejected?
• Reputation. To what extent can reputational gain or harm arise from implementing AI, or from failing to do so?
• Value. On balance, what value is gained from technology adoption, keeping in mind the analysis of competitive advantage, risk and reputation?
These are not new concepts. Critical, however, is pursuit of a disciplined, evidence-based approach to strategy generation and delivery. This results in shared and meaningful conclusions concerning competitive advantage, risk, reputation and value.
In pursuing strategic clarity, the board must consider where it can find trusted evidence. The C-suite is certainly one route, but the general management one level below should also be factored in.
For any strategy to work, general management requires a number or target to achieve. Their role is to realise targets and ensure operational cohesion, and their knowledge of the local markets and communities under their responsibility is key to achieving goals.
What general management can’t do is translate abstract concepts into operational tasks; asking them to do so results in strategic distortion.
The conclusions reached by the board and C-suite on competitive advantage, risk, reputation and value need to face the scrutiny of the general managers responsible for strategy implementation. They have the necessary insight into what works or doesn’t.
An effective organisational approach to AI essentially comes down to the 20/80 rule, with 20% of effort being put into strategy creation, and 80% into strategy delivery. To make strategy work, listen to the experience of the general management regarding the impact of any initiative on their markets.
In a punitive culture, general management learns not to offer insightful comments. They just provide what top management wants to hear because this makes everyone’s life easier.
Despite the many concerns accompanying the introduction of AI, the overall gross value added (GVA) to the UK economy from AI-specific businesses is currently estimated at £9.1bn by the CBI. This is equivalent to 0.5% of the UK’s total GVA.
In effect, generative AI technologies are advancing far faster than the tools that should be protecting against their illegitimate use. The global AI-related fraud prevention market was worth $30bn (£23.7bn) in 2021, and is forecast to reach $250bn by 2030.
It is further estimated that cybercrime is on track to be a $10.5tn business by 2025, making it equivalent to the world’s third-largest economy, behind only the United States and China.
Oversight is crucial
The prime responsibility of the board is to provide meaningful oversight of the assets under its care.
Board directors do not have to be experts in generative technologies, ESG or even sustainability. What boards do need is to be deeply conscious of the effect of innovation and new developments arising from within or outside the organisation.
Whether blockchain or environmental responsibility, expert input is needed to clarify the nature of the technologies or climate advances under discussion.
The relevance of those advances needs to be scrutinised and evidence-based, and viewed through the four disciplines of competitive advantage, risk, reputation and value to be gained.
The board needs to be engaged with the two levels of management. In this way, it can unearth relevant evidence, hold informed discourse, and enhance and protect the organisation.
By integrating expert advice with disciplined scrutiny of any innovation, the board drives thinking in a direction that can be applied uniformly to ChatGPT, blockchain, cyber, ESG, sustainability and more.
Where a board fails to scrutinise, the danger is that these functions drive the oversight process, leaving the entire organisation unduly vulnerable.
We do not need a new generation of board directors who are technology or environmental experts. We need board directors who clearly exercise their duties.
Andrew Kakabadse is professor of governance and leadership, and Nada Kakabadse is professor of policy, governance and ethics, both at Henley Business School.