Have boards come to terms with the rise of artificial intelligence? It seems not.
New research suggests large companies need two layers of AI governance if they are not only to supervise its use but also to exploit its advantages.
The researchers—Maria Lillà Montagnani of Stanford University and Maria Lucia Passador at Harvard—conclude from their study: “AI can indeed play a key role in corporate boards, but it also creates significant risks which can only be properly addressed if the corporate structure is designed to cope with a more extensive use of AI.”
Their research comes at a time when AI has become a key issue for boards. A recent Board Agenda AI survey, in association with professional services firm Mazars and business school INSEAD, found that 73% of respondents say their companies will implement an AI project in the next 12 months.
Meanwhile, reports out this week suggest the global artificial intelligence market will reach $390.9bn by 2025, up 49% on 2019. A report from tech analysts Gartner concludes the pandemic derailed few AI plans: just 16% of businesses globally say they have “suspended” AI investments, while just 7% say their intentions have been scaled back.
Tech committees
This latest research by Montagnani and Passador examines the use of tech committees at big corporates, in both the US and Europe, to mine information on their approach to AI. It uncovered some intriguing results.
Firstly, and against expectations, tech committee members seem to be older. The average age in North America is 74 and in Europe 62 compared with average ages on other committees of 63 and 60 respectively.
This, the researchers conclude, is an indicator of attitudes towards tech, because such committees “deal with strategic issues rather than with ‘new technologies’ as one would presume from the committees’ name”.
The investigators then examined committee charters to uncover their tech activities, finding they are involved in strategy, monitoring, risk management, security and innovation. A further analysis showed the committees were mainly “engaged in typical board duties” such as monitoring and strategy, with only a “minor role” in exploiting the potential of AI.
That leads the researchers to conclude “tech committees do not seem to be ‘interested’ in understanding the functioning of AI systems, nor exploring their capabilities with respect to specific corporate needs.”
The results therefore suggest that two layers of tech governance may be required: one operational, and one to monitor and consider ethical issues to ensure “virtuous AI usage”.
Board skills and training
Board Agenda’s research not only found widespread plans for implementation of AI but also confirmation that boards play an essential role: 72% greenlight AI decisions.
Somewhat alarming, though, is the discovery that just 25% of boards have examined the skills and training required by AI, while only 23% have assessed the implications for governance and compliance.
According to Asam Malik, head of technology consulting at Mazars, and Anish Venugopal, head of data and automation, there currently appears to be an AI disconnect. “Recognition of the value of AI at board level is significant but as we have found in our report, recognition, understanding and a plan of implementation are very different challenges,” they say.
Kamal Bechkoum, a professor and AI expert at the University of Gloucestershire, wrote in an article for Board Agenda last year: “To take full advantage of AI, businesses need to place it at the very centre of their operations. A key starting point is to ask which tasks within an organisation are most suited to benefit from AI.”
AI is on the rise. But if the latest research is correct, boards have yet to adjust to its demands with the right governance.