Just a short while ago, most white-collar professionals barely noticed the presence of AI in their daily work. But everything has changed.
The arrival of generative AI—powerful tools that understand and create human-like language—has shifted the landscape dramatically since 2023. These tools aren’t simply enhancing productivity—they’re reshaping the roles of professionals and how work gets done.
Take accounting, for example. When ChatGPT launched in late 2022, running on GPT-3.5, it couldn’t pass major accounting exams. But by early 2023, the GPT-4 version was outperforming human candidates on tests such as the CPA and CMA.
In less than a year, it surpassed expectations and raised a profound question: what does professional excellence look like when machines can match—or exceed—human capabilities?
By some estimates, AI could automate up to 60% of the tasks performed by degree-holding professionals, and possibly up to 98% by 2030. For boards and leaders, this is a pressing call to prepare.
The human-AI gap
One emerging challenge is the sheer gap in processing power between humans and machines.
Human thought runs at about 10 bits per second. Meanwhile, our senses absorb around a billion bits per second, and a typical Wi-Fi connection handles 50 million.
AI systems analyse and act on vast amounts of data in parallel, something humans simply aren’t capable of. A chess master thinks through a handful of future moves. An AI engine evaluates millions at once.
So how do people keep up with, let alone collaborate with, something that ‘thinks’ that fast?
Preparing people for AI-enhanced work
Upskilling is a given, but it’s not just about learning how to use new tools; it’s about evolving how we work and think.
Professionals need to lean into a new set of core competencies: communication, critical thinking, leadership, adaptability and ethical decision-making.
These skills must be transferable and ready to apply across roles and industries as AI redraws the boundaries of work. Communication especially needs a rethink. The line between spoken and written language is fading, as AI now powers everything from speech-to-text through to presentations. Leaders must not only use these tools, but also understand how they transform interaction itself.
Contrary to what some may fear, writing isn’t becoming obsolete—it’s becoming more important. But the focus is shifting somewhat.
Effective writing in an AI world must be strategic, ethical and inspirational. Professionals will need to write with purpose in order to motivate, lead and clarify. Creativity will matter even more.
At the same time, AI can bring ethical pitfalls. It may distort the truth, introduce bias or mishandle private data. This is why writing today requires more than just skill—it demands integrity.
The board as ethical steward of AI
Boards now face a critical test of leadership: as AI becomes central to business operations, oversight cannot be treated as an afterthought. Ethics have to be front and centre—beginning with data privacy, algorithmic bias and an appreciation of the impact on jobs and workplace relationships.
Transparency is essential. Boards must foster open dialogue around sensitive issues. Employees need clarity and reassurance as roles evolve.
This demands a high bar for governance. Integrity, courage and strategic vision aren’t merely ideals—they’re non-negotiable. Boards must also apply ethical frameworks, from virtue ethics through to consequentialism, to guide decisions and foster a culture grounded in trust.
Leading with vision and empathy
In an AI-enhanced workplace, leadership needs to become more ‘human’ than ever. Inspiring teams, nurturing creativity and championing collaboration should be at the forefront of leadership thinking.
Boards need to model these traits and empower their senior leaders to do the same. A new approach also means taking the long view. Equity and inclusion must remain at the forefront of AI deployment. Boards will want to push for policies that ensure AI benefits everyone, not just the few.
By doing so, they not only protect the organisation’s values but help shape a more just and responsible digital future.
Mending and preventing workplace fractures
Relationships remain the glue of any organisation, but they can also be fragile.
Breakdowns in trust—between board and C-suite, or between leaders and teams—can derail even the best strategies. These relational ‘fractures’ often stem from unmet expectations or unclear obligations.
When trust erodes, so does collaboration. Frustration builds, people disengage and performance suffers. Research shows that the damage from broken psychological contracts—what people believe has been promised to them at work—can outweigh the benefits gained when those contracts are honoured.
AI introduces a new layer of complexity. It’s not just about human-to-human relationships anymore; AI is now part of the workplace dynamic.
While we don’t yet fully understand how human-AI relationships fracture, we know such breakdowns are coming. Boards will need to anticipate these pressure points and develop strategies to manage them before they damage culture and cohesion.
Are boards ready for what’s next?
Generative AI isn’t just another tech shift. It’s a game-changer. Boards must lead with both clarity and conscience. This means embracing innovation, while also ensuring ethics and human values are never sidelined.
This new era calls for transparency, transferable skills and a firm commitment to integrity. Boards must create a space where humans and machines can work together, and where people feel equipped, not replaced.
So here’s the question: Are today’s boards ready to lead in this new reality? The answer will shape not only the future of their organisations, but society at large.
Andrew Kakabadse is professor of governance and leadership, and Nada Kakabadse is professor of policy, governance and ethics, both at Henley Business School.



