The market for AI governance—in the form of software, services and hardware—will triple to $36bn globally in the next ten years, according to new research.
The news comes as separate research reveals companies are accelerating toward AI integration into their business models and processes, despite many corporate leaders failing to properly understand the governance risks involved.
The consultancy Exactitude says the global AI governance market was worth $12bn in 2024 but will grow by $24bn over the coming decade, as businesses rush to adopt artificial intelligence into their systems and processes.
Exactitude says: “The growth of this market is being catalysed by technological progress in AI—especially in machine learning and natural language processing—as well as a growing global consensus that ethical considerations and transparency must be embedded in AI systems.
“AI governance has transitioned from a niche compliance topic to a central aspect of corporate digital transformation strategies.”
However, recent research by Big Four consultancy EY suggests managers are yet to fully understand the risks attached to AI use.
‘Concerning outlook’
EY says 51% of companies agree that it is “challenging” to develop governance for current AI technologies, while the “outlook” for emerging AI technologies is “even more concerning”.
Drilling down into specific technologies, only 58% of the executives polled could say they were “moderately or extremely familiar” with the risk associated with synthetic data generation, despite 88% saying they are currently using the tech or plan to do so.
Similarly, only 51% said they were familiar with the risks involved in “self-improving AI models”, while 72% are currently using the technology or plan to do so soon.
EY also finds that executives appear to be misaligned with consumers in relation to AI worries. When it comes to accountability for “negative” AI use, 23% of executives see it as a concern, while a much larger proportion of consumers, 58%, see it as a problem.
Likewise, 32% of C-suite leaders fret about security breaches in AI systems, while 61% of consumers are concerned.
Trust and confidence
Cathy Cobey, EY’s global responsible AI leader for assurance, says: “It’s not a ‘one-and-done’ exercise but a journey where your AI governance and controls need to keep pace with investments in AI functionality.
“Maintaining trust and confidence in AI will require continuous education of consumers and senior leadership, including the board, on the risks associated with AI technologies and how the organisation has responded with effective governance and controls.”
The research results echo those found elsewhere. In April this year, the Institute of Directors (IoD) published a report revealing that a lack of skills in UK boardrooms constitutes a barrier to full engagement with artificial intelligence.
Erin Young, head of innovation and technology policy at the IoD, said: “While UK business leaders in early AI adoption are enthusiastic about greater productivity and efficiencies, they face a complex set of barriers to top-down implementation and governance—from skills and expertise gaps at board level to a lack of trust and fundamental concerns about reliability, security and business value across AI capabilities, tools and applications.”
AI expert and academic Kamal Bechkoum recently wrote for Board Agenda that AI governance at the moment is “like playing Russian Roulette”.
Bechkoum added that the “real question isn’t where or how AI will change business operations—it’s whether leaders can harness AI responsibly to drive innovation and efficiency without eroding trust, amplifying bias, or exposing their companies to unforeseen risks”.
AI’s rate of development is without precedent. However, it is becoming clear that the evolution of governance for AI may be behind the curve. Executives will need to prioritise AI governance to avoid importing heightened risk, along with the new technology, into their business models.