We all have a lot of work to do. Whatever else is uncertain, we do know that new artificial intelligence (AI) and generative AI technologies are going to require much more of us, and very different ways of thinking. There are good reasons to be excited and to feel compelled to act, such as the implications for medical advances. And there are good reasons to be worried and to feel rushed.
Those with the most excitement, and the most knowledge, know that these tools carry genuine risk and need to be handled very carefully. Those who are most worried worry about misuse and, even more fundamentally, that we won't, or simply can't, execute carefully.
Policymakers continue to debate how to regulate, and the hype cycle is in full swing. Meanwhile, business leaders have work to do and decisions to make. They are struggling to balance priorities, allocate resources and fulfill their duties in a dynamic, complex and novel environment. Fundamentally, they have opportunities and risks to manage.
One thing is abundantly clear: 18th-century board practices are not up to the challenge of the next 18 months of technology and global disruption. This may warrant substantial adjustments to board work over time but, in the immediate term, there are plenty of places to start.
Immediate business challenges
Generative AI puts into everyone’s hands new—and incredibly powerful—ways to create and evaluate content, to bend cost curves and to redefine work, education and play in profound ways. In this emerging landscape, what counts as reasonable or managed risk? Is it riskier to move too quickly or too slowly? To pile on governance or to have too little? What is reasonably foreseeable? What is reasonable business judgment? The answers are neither ‘nothing goes’ nor ‘anything goes’, but something much more nuanced.
Here are five things to consider:
1. Content created by AI/generative AI isn’t always correct, appropriate, or even very useful. The accuracy issues will improve over time, but always warrant close inspection. Transparency, explainability, privacy and bias are all real issues that require focused attention, to earn and maintain customer and employee trust and to meet increasing compliance standards. Leadership needs the right expertise, tools and governance to assess inputs and outputs in order to align AI outcomes with organisational goals and values.
2. Generative AI will bring profound change to the workplace, but no one can yet describe those changes or their scale. These changes will be subtle and dramatic at the same time. Robots are not likely to be the first AI agents displacing jobs, but new tools and skill sets will transform what we do and how we do it; how we value expertise will change, as will how we evaluate expense and risk. Organisations need to be ready, building more flexibility and agility into strategy-setting, product development and training.
3. These tools and capabilities now operate outside well-established organisational controls, not unlike the late 2000s, when smartphones immediately put into everyone’s hands new ways to connect with one another and with the workplace.
Companies and governments (or heads of household for that matter) are no longer the gatekeepers for powerful technologies that will impact how most things get done or go down. Democratisation will enable innovation along with unintended consequences, so the entire workforce needs guidance and skills to understand the impacts of their creations.
4. The pace of change makes it impossible to properly keep up, let alone internalise significant technology and policy developments. Coverage of these developments often lacks critical nuance and specifics. Leadership needs dedicated resources and trusted sources pointed at their priorities.
5. Disinformation and deepfakes are now low cost and virtually undetectable. This is a profound challenge for every aspect of corporate life. Constant vigilance, training and adaptation will be required to defend against, let alone stay ahead of, the risk.
As noted, we may eventually need big changes in how boards function; today, even small changes can bring big benefits.
This is a moment to make more room in board meetings for meaningful questioning, opportunities to interrogate assumptions, and for building new models for priority setting and risk management. It is an opportunity for leaders to lead, setting standards on how generative AI (and whatever comes next) is used and how it is not.
More agile meeting agendas that are not packed with standing items set a year in advance are better suited to effective risk management in times of rapid change. The right internal and external experts should be at the ready to help the board make informed decisions and achieve necessary interdisciplinary perspectives.
Likewise, boards will benefit by having established, consistent and high-quality sources of knowledge about emerging technologies and how they work in context as well as their related governance and risks.
These are generalisations, of course, but boards will not be immune from the changes that are coming. The most informed AI developers and researchers, who are closest to the new tools, want all of us—including boards—to pay closer attention to the implications, to lean in, to ask questions, think and govern, in order to actively design the future. And they know that the questions are going to get harder as the technologies get better.
Karen E Silverman is CEO of AI and governance consultancy The Cantellus Group.