The World Economic Forum recently estimated that AI will displace 85m jobs by 2025, while creating 97m new jobs over the same period: a projected net gain of 12m.
However, as with most developments in technology and business, the devil is in the detail, and it is essential that boards and leadership understand exactly what is at stake.
First, the creation of these purported new jobs is not guaranteed. A great deal will depend on various socio-economic factors, such as the rate at which AI is adopted, levels of investment, research and development, and the availability of skilled workers.
As a result, the eventual balance between jobs lost and jobs created is likely to fall somewhere between the most pessimistic and the most optimistic projections.
Of course, AI has the potential to create—and is already creating—new opportunities in a wide array of industries, including healthcare, education, defence, and technology.
Almost all industries will be affected, and many new jobs requiring advanced degrees will appear, particularly roles such as machine learning engineers, robotics engineers and data scientists. At the same time, there will be high demand for less-skilled, but still specialised, work, such as the maintenance of AI systems.
Other jobs will be eliminated, and the losses are likely to fall disproportionately on certain demographic groups: for example, workers with lower levels of education or those in industries that are highly susceptible to automation.
Good for everyone?
It is essential to ensure that the benefits of AI are shared fairly and equitably and that workers are given the necessary training and support to adapt to changing job markets. This requires proactive measures, including:
• Investing in education and training programmes
• Implementing policies to support workers who are displaced by automation
• Promoting a more equitable distribution of the benefits from AI progress
• Involving all stakeholders in the deployment of AI, ranging from workers and businesses, through to policymakers and civil society organisations.
The impact of AI on the labour market and society will largely depend on how it is developed and deployed, along with the policies and practices put in place to govern its use.
It is also important that the negative impacts of AI are mitigated. For example, its use in the criminal justice system has already raised concerns about ethnic bias: in one case, systems proved 19% less accurate at recognising images of black men and women than images of white individuals.
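As a simple illustration of how such disparities can be surfaced, the sketch below computes recognition accuracy per demographic group from a system's predictions. The data and group labels are entirely hypothetical; the point is that a material accuracy gap between groups is exactly the kind of measurement boards should ask to see.

```python
# Minimal sketch with hypothetical data: measuring per-group accuracy
# to surface disparities like the one described above.

def group_accuracy(records):
    """Return accuracy per group from (group, truth, prediction) tuples."""
    totals, correct = {}, {}
    for group, truth, prediction in records:
        totals[group] = totals.get(group, 0) + 1
        if truth == prediction:
            correct[group] = correct.get(group, 0) + 1
    return {g: correct.get(g, 0) / totals[g] for g in totals}

# Illustrative predictions: the system errs far more often on group_b
records = [
    ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 1), ("group_a", 0, 0),
    ("group_b", 1, 0), ("group_b", 0, 0), ("group_b", 1, 1), ("group_b", 0, 1),
]
print(group_accuracy(records))  # {'group_a': 1.0, 'group_b': 0.5}
```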
Here are seven points to be acutely aware of as the introduction of AI continues apace:
1. Keep AI projects SMART
Going back to basics, it is always crucial to ensure projects are SMART: Specific, Measurable, Achievable, Relevant, and Time-bound. Boards need to understand AI initiatives and be able to clearly address how stakeholders will be affected by plans and outcomes. If it’s difficult to clearly explain how it works, there is a good chance it won’t align with the organisation’s strategy.
2. Beware unproven tech
Boards should be on the lookout for unproven or unethical technology. If an AI proposal forces the board to weigh potential damage to the organisation's reputation, or how the company's values will be tested, then it is unlikely the system will be a good strategic fit.
3. The impact on work and society
Always keep in mind important ethical questions about how AI will impact the labour market and society as a whole. Yes, it will clearly create new jobs, but it can also exacerbate income inequality, displacing workers—and customers—who lack the basic skills needed to compete in this rapidly changing market. How could this play out in the short and long term?
4. Balancing big data
AI will help businesses do more with Big Data. Machine learning and deep learning require huge amounts of standardised information, which organisations with suitably large databases can process and unlock quickly. Again, not every last scrap of data that can be scraped together is required: only the rich material, packed with insights and signals, that will prove directly useful to the strategy.
5. The future is malleable
To capitalise on AI’s growth and improvement, organisations need to think carefully about its deployment and ensure AI technologies are highly malleable. This means they should constantly evolve and transform as they interact with other technologies, users and their operating environment. For example, social media platforms such as Twitter (recently rebranded as ‘X’) are highly adjustable and constantly change in response to user behaviour and the evolving technological landscape.
6. Cybersecurity’s value will grow
As AI continues to advance, the likelihood of cyber attacks will grow. The more organisations come to rely on AI as a key structural component of their business, the more attackers will craft malware designed to disguise its malicious activity. The importance of prevention and protection cannot be overstated.
7. People still matter
AI tools and automation will streamline processes and increase productivity, but the strongest benefits could still prove to be indirect. Organisations will require new kinds of measurement systems to assess value and identify opportunities, such as enabling workers to ditch repetitive tasks and interact more directly with clients.
On this final point, and as users interact with AI and generate content themselves, the platforms’ algorithms learn and adapt to their preferences and behaviours, forming emergent structures featuring personalised news feeds, trending topics and recommended content.
This malleability enables the creation of innovative applications and services aimed at better meeting users’ needs, and which adapt to changing circumstances and new structures that organisations may not have anticipated or planned for.
Leaders should always appreciate that AI algorithms are trained on historical data, which may reflect existing biases, discrimination and assumptions.
For example, if the historical data is biased against certain groups, the algorithm will reproduce that bias in its decisions. In addition, the lack of transparency in these systems makes such bias difficult to detect and correct.
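To make that mechanism concrete, here is a deliberately simplified, hypothetical sketch: a ‘model’ that does nothing more than learn historical approval rates per group will faithfully carry any past discrimination forward into its future decisions.

```python
# Minimal sketch (hypothetical scenario): a naive model that learns
# historical approval rates per group reproduces any bias in that history.
from collections import defaultdict

def train(history):
    """Learn the historical approval rate for each group."""
    approved, total = defaultdict(int), defaultdict(int)
    for group, outcome in history:
        total[group] += 1
        approved[group] += outcome
    return {g: approved[g] / total[g] for g in total}

def predict(rates, group):
    """Approve an applicant if their group's historical rate exceeds 50%."""
    return rates[group] > 0.5

# Biased history: equally qualified groups, unequal past outcomes
history = [("group_a", 1)] * 8 + [("group_a", 0)] * 2 \
        + [("group_b", 1)] * 3 + [("group_b", 0)] * 7
rates = train(history)
print(predict(rates, "group_a"))  # True  -> past advantage carried forward
print(predict(rates, "group_b"))  # False -> past discrimination repeated
```

Nothing in this code is malicious; the bias enters purely through the training data, which is why it is so hard to spot from the outside.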
As a result, AI should not be seen as a substitute for human judgment. While it can provide valuable insights, information and deliver quick results, it should always be used as a support tool, rather than to replace human decision-making.
In 2019, a 28-year-old man was behind the wheel of a Tesla Model S operating on Autopilot when it ran a red light in Gardena, California, slammed into another vehicle and killed two people. He was later sentenced to two years’ probation after pleading no contest to vehicular manslaughter.
As AI continues to become more advanced and prevalent across society, our current liability laws will have to adapt to ensure that responsibility and accountability are appropriately judged for any harmful outcomes.
When it comes to assigning blame for AI-related harm, it could become increasingly difficult to determine who is ultimately responsible. Is it:
• The developer and creator of the algorithm?
• The user who deployed it?
• The AI system itself, which may have been taught to make a mistake through data inputs?
• A combination of all the above?
The most significant asset that goes hand in hand with the AI revolution is ‘trust’.
Stakeholders need to be well informed and confident about the consistent quality and outcomes of AI decisions that can affect their everyday lives and wellbeing. The wide-ranging adoption of AI in business is still in its earliest stages, and boards must stay ahead of the next big steps.
Andrew Kakabadse is professor of governance and leadership, and Nada Kakabadse is professor of policy, governance and ethics, both at Henley Business School.