Directors should be held to account in law if they fail to show they understand and document the role of AI in board decisions, according to leading academics.
The two Italian professors argue that the use of artificial intelligence in boardroom decision-making has significant governance implications, and that the EU’s business judgment rule (BJR), which protects directors from liability, must be adapted as a result.
Writing for the Oxford Business Law Blog, Maria Lucia Passador and Maria Lillà Montagnani argue: “The BJR 2.0 we propose preserves protection only for those who can demonstrate informed stewardship—directors who engage critically with algorithmic tools, demand traceability and document the rationale for relying on machine-generated insights.”
The BJR essentially says that directors are protected from legal liability—or receive “deference”—for decisions that are “informed”, made in good faith and in the belief they are in the best interests of the company.
Passador and Montagnani, both of Bocconi University in Milan, add: “Deference without understanding is deference to no one.”
As companies around the world race to deploy increasingly sophisticated AI tools, the question of regulation and governance has become a hot topic. Often the debate is about governance of AI in business operations. But there is increasing focus on the governance of AI’s role in boardroom deliberations.
A duty of AI due care?
Passador and Montagnani also argue that EU fiduciary duties must change to accommodate AI, too. First, the “duty of care” directors have must become a “duty of AI due care”. This would demand “cognitive adequacy: the capacity to question, understand and monitor the technological tools shaping corporate choices”.
They add: “Directors need not become coders, but they must know which questions to ask and how to interpret the answers.”
The blog also states that directors should face a “duty of AI loyalty oversight”. This addresses the fear that bias built into AI systems might not best serve a company’s needs. “Loyalty” must mean checking that AI systems “serve the company’s purpose rather than silently displacing it”. Or, to put it another way, AI “heightens the obligation to verify that delegated systems remain impartial and aligned”.
If all of this were built into EU legal assumptions, it would mean that boards “must establish substantive oversight architecture: dedicated AI governance committees, clear escalation channels for algorithmic anomalies and integration of AI risk into audit and ESG frameworks”.
This is not the first time AI’s contribution to boardroom decision-making has raised questions. Much of the discussion has focused on how AI might increase directors’ liabilities, or on whether board members have the skills to properly supervise the risks and opportunities presented by AI.
Recent conversation has centred on the potential for reliance on technology to undermine the ability of board members to hone their critical thinking skills.
All of these issues remain evolving concerns. The only thing known for sure is that the technology, and in some cases its deployment, is moving faster than skills development or governance structures. There is much more to come.