A senior partner at KPMG Australia has been fined for using artificial intelligence tools to cheat on an internal training course about AI — an episode that has raised eyebrows across the accounting and professional-services sectors. The penalty and broader context of AI misuse within the firm highlight ongoing challenges around governance, ethics and the role of AI in the workplace.
Partner Penalised for Cheating in AI Course
A partner at KPMG Australia was fined A$10,000 (about US$7,000) after using AI to complete a mandatory internal training course on artificial intelligence. The partner breached company policy by uploading training materials into an AI platform to generate answers to exam questions, prompting the firm to require a retake of the module.
KPMG Australia’s CEO, Andrew Yates, acknowledged the irony of a partner using the very tools the course was designed to teach about, and emphasised the difficulty organisations face in regulating AI use as these tools become ubiquitous in daily work.
A Broader Trend of AI Misuse Among Staff
This case is far from isolated. According to KPMG, more than two dozen employees have been caught since mid-2025 using AI tools to bypass internal training exams and assessments designed to test their knowledge of emerging technologies and their responsible use.
KPMG has implemented its own AI detection software to flag irregularities and is tightening monitoring and enforcement of its internal testing policies. The firm conducts thousands of internal exams annually, and the most recent monitoring efforts have revealed a spike in policy breaches as generative AI technologies have become more powerful and accessible.
Regulatory, Industry and Ethical Implications
The incident has resonated beyond KPMG’s internal walls, raising questions about professional standards, corporate governance and how organisations should balance the productive use of AI with integrity and accountability.
Critics have pointed to broader industry challenges. For instance, professional bodies such as the Association of Chartered Certified Accountants (ACCA) in the UK have moved away from remote exams due to concerns that safeguards can no longer keep pace with sophisticated AI-assisted cheating.
Senators and regulators in Australia have also expressed concern about enforcement mechanisms within professional services firms. During a recent Senate inquiry, Australian Greens Senator Barbara Pocock criticised what she labelled a “toothless system” for accountability and urged more robust reporting and disciplinary procedures.
From the regulatory perspective, the Australian Securities and Investments Commission (ASIC) confirmed the incident but noted that disciplinary action typically falls under the remit of professional trade bodies, meaning firms and partners have a responsibility to self-report misconduct.
Historical Context and Sector-Wide Patterns
Cheating scandals are not unprecedented in the accounting profession. In recent years, the Big Four accounting networks have faced scrutiny and fines related to internal misconduct. For example, in 2021, KPMG Australia was fined US$450,000 by the US audit regulator, the Public Company Accounting Oversight Board (PCAOB), over widespread cheating on online training tests involving more than 1,100 staff.
Other major firms, including Deloitte, PwC and EY, were also fined by regulators elsewhere for internal exam cheating, underscoring systemic challenges across the industry.
Steps Toward Stronger AI Governance
In response to the current situation, KPMG says it is enhancing its educational campaigns, redesigning internal assessments to be more resilient to misuse, and deploying additional technology to detect policy breaches. The firm has also indicated it will disclose AI-related misconduct figures in its annual report as part of efforts to bolster transparency.
Experts suggest that professional services organisations may need to rethink how they assess practical AI competency — for example, moving toward supervised, in-person testing environments or redesigning examinations in ways that emphasise judgment and application over rote answers that can be easily generated by AI.
Implications for the Future of Professional Training
The incident raises fundamental questions about how industries that champion expertise and ethics can adapt to rapid advances in AI. As firms embrace AI for productivity and innovation, they must simultaneously cultivate a culture of responsible use and build safeguards that ensure internal integrity is not compromised.
The KPMG case stands as a reminder that technology — no matter how transformative — requires governance frameworks capable of keeping pace with its adoption, and that ethical lapses, even among senior professionals, can carry reputational and regulatory consequences.