Accountability for audit quality will remain with "people" despite the increasing use of artificial intelligence in the audit process, regulators warned this week.
The notice came from the Financial Reporting Council as it issued guidance on the use of generative and agentic AI in auditing.
Mark Babington, the FRC's executive director of regulatory standards, says in a statement that AI adoption is "accelerating" and that the guidance is designed to help firms manage the risks of using the technology.
He adds: "It is important to be clear, however, that while technology changes, the fundamental principle of our regulatory framework does not: it is people – the firms and responsible individuals – who are accountable for audit quality.

"AI is a powerful tool, but the professional judgement and accountability of the auditor remains at the core of how we regulate."
The warning comes in the wake of a decision by the government to end further reform of the audit sector, based on a belief that audit quality has improved since the collapse of construction giant Carillion in 2018.
It also comes just a week after watchdogs announced they had changed the way audit firms would be monitored. FRC regulation will now rely more on firms' own quality management systems, with fewer formal inspections.
The FRC says the guidance on AI in audit is "not a response to identified quality deficiencies" but rather "codifies good practice, promotes audit quality, builds confidence in the use of these technologies".
The FRC says AI is now used in the audit process in a number of ways: summarising minutes; checking the accuracy of financial statements; translating language; categorising contracts; automating processes; and matching supporting documents with samples.
The guidance also notes key risks auditors face when using AI. These include misinterpreting or misunderstanding AI output and using the technology in a way that does not meet audit standards.
And though the document does not cite "hallucination" directly, it does detail "deficient output" as another key concern.
"The risk cannot be eliminated as LLMs have inherent limitations," the guidance says.
Last year, the FRC reviewed how six big audit firms monitor their use of automated technology. While the review concluded "most firms" had processes in place to certify the use of automated tools, it also found that none monitored their impact on audit quality.
"Firms are encouraged to establish policies or metrics to support the continuous and consistent evaluation of how ATTs [automated tools] impact audit quality," the report said.
