The Australian business landscape in 2026 is characterised by a fragmented adoption of artificial intelligence. While top-line figures suggest that nearly 60% of businesses now use AI in some form, granular data from the National AI Centre shows that actual operational integration among SMEs sits closer to 37%, compared with more than 84% for large enterprises.
This uneven acceleration has outpaced traditional risk management frameworks. AI tools are being embedded into workflows faster than governance structures are being built around them. The result is a growing “governance vacuum” — one that professional liability insurers are now filling with absolute artificial intelligence exclusions.
For professional services firms, this shift represents a structural change in risk transfer. AI is no longer treated as a productivity enhancement; it is now underwritten as a material operational exposure. Firms that cannot demonstrate evidence-based oversight are discovering that traditional insurance protection has quietly been withdrawn.

Section I: The Convergence Crisis
Between January 2025 and January 2026, the professional liability market experienced a structural break. What many firms assumed would be a gradual tightening of underwriting standards became a sudden bifurcation. Some organisations renewed with affirmative AI coverage — albeit heavily conditioned. Others encountered sweeping exclusions that effectively removed AI-related claims from their policies entirely.
This was not a coincidence. It was convergence. Three forces aligned simultaneously, each amplifying the others, to produce an underwriting environment unlike anything professional services firms had previously navigated.
1. Judicial Precedent Became Predictable Liability
Courts in the United States and Canada issued rulings that transformed AI misuse from an emerging ambiguity into an established professional risk. These decisions clarified a principle that now anchors the underwriting calculus: artificial intelligence is not an autonomous actor in the eyes of the law. It is a tool. Tools do not bear liability — organisations do.
Once a court establishes that a risk is foreseeable, the insurance market recalibrates. Underwriters cannot price what they cannot define. Judicial clarity does not reduce premiums; it creates a pricing floor and, in some lines, a ceiling above which coverage simply disappears.
2. Reinsurers Quantified Aggregation Risk
The structural driver behind the market shift is not the individual claim. It is the correlated claim. According to the Swiss Re Institute’s sigma 5/2023 study on the economics of digitalisation in insurance, the global value of intangible assets of listed companies — which increasingly includes AI-enabled data, algorithms, and digital processes — grew fivefold over twenty years to reach USD 76 trillion as of 2021. [Source: Swiss Re Institute, sigma 5/2023, ‘The economics of digitalisation in insurance’, October 2023. Available at: https://www.swissre.com/institute/research/sigma-research/]
For reinsurers, this concentration of AI-dependent value introduced a systemic aggregation scenario that could not be ignored. Consider the three failure modes that keep actuaries awake at night:
- A hallucination pattern embedded in a widely used generative model triggers simultaneous errors across thousands of professional engagements.
- A flawed training dataset deployed across multiple SaaS platforms produces structurally identical errors in accountancy, legal, and financial advisory outputs at scale.
- A single vendor’s silent AI integration — an automatically enabled feature the customer never knowingly activated — generates correlated professional liability claims across its entire subscriber base.
The message from the reinsurance layer was commercially unambiguous: unquantified generative AI risk would not be absorbed at existing premium structures. Primary insurers adjusted accordingly, beginning the process of moving from silent tolerance to explicit exclusion.
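To see why correlation, rather than claim frequency, is the underwriting problem, consider a toy comparison. The Python sketch below uses invented portfolio figures (the firm count, failure probability, and claim cost are illustrative assumptions, not market data) to contrast a portfolio where AI failures hit firms independently with one where every firm depends on the same model.

```python
# Toy comparison of independent versus correlated AI failures across an
# insured portfolio. All figures are illustrative assumptions.
import random

random.seed(7)

N_FIRMS = 10_000   # insured professional firms in the portfolio
P_FAIL = 0.01      # annual probability of an AI-driven claim
LOSS = 250_000     # illustrative cost per claim

# Scenario A: each firm's AI failure is independent of every other firm's.
independent_loss = sum(LOSS for _ in range(N_FIRMS) if random.random() < P_FAIL)

# Scenario B: every firm relies on the same generative model. The expected
# loss is unchanged, but when the shared model fails, all claims land at once.
correlated_tail_loss = N_FIRMS * LOSS

print(f"Typical independent year:   ${independent_loss:,}")
print(f"Expected loss (both cases): ${int(N_FIRMS * P_FAIL * LOSS):,}")
print(f"Correlated failure year:    ${correlated_tail_loss:,}")
```

The expected loss is identical in both scenarios; what changes is the tail. A shared-model failure turns a portfolio of small, uncorrelated claims into a single event a hundred times larger, and reinsurance capacity is priced against that tail, not the average year.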
3. The Coordinated Withdrawal of Silent Coverage
Before 2025, most professional liability policies were silent on AI. That silence created interpretive ambiguity — and in many jurisdictions, courts apply the doctrine of contra proferentem, construing ambiguous policy language against the insurer. For several years, this gave professional services firms a measure of inadvertent protection.
Beginning in 2025, carriers introduced targeted endorsements and exclusionary forms. By January 2026, silence had largely disappeared from renewal documentation. This is the AI Insurance Cliff. It is not a gradual slope or a predictable tightening. It is a market bifurcation with a clear hinge point.
The Market Bifurcation — At a Glance
January 2025: Transitional exclusions introduced; AI disclosure questionnaires become standard at renewal.
January 2026: Affirmative warranties required for coverage, or absolute exclusions applied. The choice is no longer the firm’s — it is the underwriter’s, made on the basis of documented governance evidence.
Firms that entered 2025 treating AI as a productivity issue discovered by renewal that it had become an underwriting issue. The firms that entered 2026 with documented governance frameworks discovered that evidence is the new currency of insurability.

Section II: From Silent AI to Affirmative Warranties
The Silent AI Era (2021–2024)
During this period, generative AI adoption in professional services accelerated faster than underwriting language could evolve. Law firms used AI for drafting and research. Consultants relied on AI for strategic analysis and report generation. Accounting practices deployed AI for data extraction and reconciliation. Marketing agencies generated entire campaign strategies with AI assistance.
Yet most Errors & Omissions and Professional Indemnity policies did not explicitly reference artificial intelligence. This created Silent AI coverage — a risk that was neither expressly included nor excluded. The doctrine of contra proferentem meant that coverage disputes were often resolved in the insured’s favour. Coverage was not guaranteed, but ambiguity was a useful shield.
That shield has been withdrawn.
The Transition Phase (2025)
In 2025, insurers introduced a wave of new underwriting requirements: broad AI-related exclusions in endorsement form, supplemental AI disclosure questionnaires at renewal, and underwriting questions seeking granular information about AI governance controls, vendor selection processes, and output verification protocols.
This phase was exploratory. Carriers sought data before standardising their response. The market was signalling discomfort — but had not yet converged on a uniform exclusion architecture.
The Affirmative AI Era (2026 and Beyond)
The decisive shift arrived with the introduction of two landmark instruments. First, Insurance Services Office Form CG 40 47 01 26, a Commercial General Liability endorsement with an effective date of January 2026, which excludes bodily injury, property damage, and personal and advertising injury arising out of generative artificial intelligence. It is important to note that CG 40 47 applies to the CGL coverage part — not directly to Errors & Omissions or Professional Indemnity policies.
However, it represents the broader industry architecture shift and signals the direction of travel across all liability lines. Second, W. R. Berkley Corporation introduced Form PC 51380, an ‘absolute’ AI exclusion specifically designed for D&O, E&O, and Fiduciary Liability products — the lines that directly protect professional services firms. These are not cosmetic changes to existing policy language. They restructure the coverage architecture.
In the affirmative AI era, coverage is no longer presumed. It is conditioned. Firms seeking coverage must typically warrant that:
- AI outputs are reviewed by qualified personnel before acting upon them professionally.
- Verification procedures are consistently followed and documented at the time of use.
- AI systems in use across the organisation are inventoried and individually risk-assessed.
- Documentation can be produced upon request — not retrospectively constructed, but contemporaneously created.
The shift is from ambiguity to affirmation. Coverage is no longer a default. It is a deliverable. And that deliverable is documentation.
Section III: Landmark Cases as Liability Bedrocks
Two decisions reshaped professional liability doctrine in the AI context and, in doing so, forced the underwriting market to price what courts had confirmed.
Mata v. Avianca (2023)
In 2023, attorneys filed a federal court brief containing fabricated case citations generated by an AI tool. The citations appeared authoritative. They were invented. The court sanctioned counsel under Rule 11 and issued a ruling emphasising that legal professionals carry a non-delegable duty to verify the accuracy of their submissions. While technically a federal sanctions decision rather than a broad liability precedent, Mata v. Avianca has been widely cited across professional disciplines as evidence that AI hallucination is a foreseeable and non-excusable failure mode — a status that matters as much to underwriters as to courts.
The significance of this decision extends far beyond legal practice. It establishes three principles that now underpin AI liability across professional services:
- AI hallucinations are foreseeable. A professional cannot claim ignorance of a known technological failure mode.
- Verification is mandatory. The existence of a review duty cannot be contracted out to software.
- Delegation to an AI system does not reduce professional responsibility. The output belongs to the professional, not to the tool.
For insurers, this ruling converted AI misuse from an unpredictable novelty into a known and foreseeable professional risk. Known risks are underwritten explicitly or excluded.
Moffatt v. Air Canada (2024)
In 2024, the British Columbia Civil Resolution Tribunal (BCCRT) — a provincial small claims administrative tribunal — addressed liability arising from inaccurate information provided by an airline chatbot. The organisation argued that the chatbot operated independently and should therefore bear responsibility for its own outputs. The tribunal rejected this argument in direct terms, describing it as ‘a remarkable submission.’
The BCCRT ruled that the organisation deploying an AI system is responsible for its outputs. AI agents are treated as extensions of the enterprise — not as independent legal entities capable of bearing their own liability. It is worth noting that BCCRT decisions carry no formal precedential weight under Canadian law, given the tribunal’s small claims mandate. However, the case generated international coverage and has been widely cited by legal commentators, insurers, and regulators as a clear statement of the organisational liability principle that underpins current underwriting thinking.
This ruling is particularly significant for professional services firms because it extends organisational liability to automated communications, AI-assisted client interactions, and any AI-generated output that influences a third party’s decisions.
The Combined Effect: The End of ‘We Didn’t Know’
Individually, these cases address verification obligations and organisational responsibility respectively. Together, they eliminate what had been a common defence: ignorance of AI’s limitations.
After these rulings, AI errors are legally foreseeable. Oversight duties are judicially established. Organisational liability for AI outputs is affirmed. In underwriting terms, foreseeability drives pricing and exclusion language. Once a risk becomes legally predictable, it becomes actuarially modelled — and therefore expressly addressed in policy language.
What underwriters now know that they didn’t know before 2024
Before these rulings, AI liability in professional contexts existed in legal ambiguity. After them, it exists in established doctrine. Mata v. Avianca confirmed in federal court that AI hallucination is foreseeable. Moffatt v. Air Canada confirmed through a widely cited provincial tribunal decision that organisations are responsible for their AI systems’ outputs. The ‘we couldn’t have known’ defence is no longer available — to the firm, or to the insurer trying to justify covering the loss. The result is not theoretical tightening. It is contractual exclusion language in your next renewal document.
Section IV: Decoding the Exclusion Language
Understanding the specific language in current exclusion forms is not a legal exercise. It is a risk management exercise. The words in these forms determine whether a professional liability claim arising from AI use will be covered or denied.
The Berkley PC 51380 Architecture: ‘Arising Out Of’
The Berkley PC 51380 form — introduced as the market’s most aggressive AI exclusion to date, targeting D&O, E&O, and Fiduciary Liability products — excludes claims arising from the actual or alleged use, deployment, or development of artificial intelligence by any person or entity. It is worth noting that as of early 2026, full commercial rollout of this form across all jurisdictions remains in progress, with regulatory approval still pending in some states. Industry analysts have debated the breadth of its definition. However, the form’s existence and its exclusion architecture have already shaped how other carriers are drafting their own AI provisions, and policyholders should treat its language as directionally representative of where the market is heading, if not yet universally applied.
Two components of this phrasing require careful attention.
‘Arising Out Of’ — A Broad Causal Standard
In insurance law, the phrase ‘arising out of’ does not require proximate cause. It requires only a connection. If AI contributed to the circumstances that produced a claim — even indirectly, even peripherally — the exclusion may trigger. This is a substantially lower threshold than many policyholders assume.
‘Any Person or Entity’ — The Vendor Liability Trap
This is the clause that most professional services firms have not yet processed. The exclusion applies not only to AI systems you deploy directly, but also to AI systems used by any third party whose outputs contributed to the claim.
Consider a scenario that is already occurring across professional services: a firm uses a SaaS accounting or practice management platform whose vendor has integrated generative AI for automated reporting, summary generation, or predictive analytics. The vendor activates the feature by default. The firm does not knowingly use it. An AI-generated misstatement reaches a client. The client suffers financial harm.
Even if the firm never deliberately activated the AI feature, the claim arises out of AI used by another entity. The broad exclusion language may shift the entire loss back to the firm.
The risk is no longer limited to the AI tools you chose. It extends to every tool your technology ecosystem chose for you.
Modern software vendors routinely embed AI capabilities as default features: predictive scoring, automated summaries, recommendation engines, and natural language assistance. These are frequently opt-out rather than opt-in. Firms that have not audited their vendor stack for embedded AI are carrying undisclosed exposure in every engagement.
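As a concrete illustration of what such an audit might capture, the Python sketch below models a minimal embedded-AI inventory. The vendor names, field names, and flags are all hypothetical; the point is the structure: every AI feature in the stack is recorded alongside whether the vendor enabled it by default, whether the firm knowingly opted in, and whether its output can reach a client.

```python
# Minimal sketch of an embedded-AI vendor audit. All vendor names and
# field choices are hypothetical illustrations, not a prescribed schema.
from dataclasses import dataclass

@dataclass
class EmbeddedAIFeature:
    vendor: str                # the SaaS platform supplying the feature
    feature: str               # the AI capability embedded in the product
    enabled_by_default: bool   # the vendor switched it on without asking
    knowingly_activated: bool  # the firm made a deliberate opt-in decision
    client_facing: bool        # outputs can reach a client or third party

# Hypothetical audit entries for illustration only.
inventory = [
    EmbeddedAIFeature("PracticeSuite", "automated report summaries", True, False, True),
    EmbeddedAIFeature("LedgerCloud", "predictive reconciliation", True, True, False),
    EmbeddedAIFeature("MailAssist", "drafting suggestions", False, False, False),
]

# Undisclosed exposure: AI that is on by default, was never knowingly
# adopted, and can put output in front of a client.
undisclosed = [
    f for f in inventory
    if f.enabled_by_default and not f.knowingly_activated and f.client_facing
]

for f in undisclosed:
    print(f"Review required: {f.vendor} / {f.feature}")
```

Even a simple record like this changes the renewal conversation: it converts ‘we don’t think we use AI’ into a documented answer to the underwriter’s questionnaire.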

Section V: The Human Oversight Warranty
In direct response to aggregation risk concerns from the reinsurance layer, primary insurers increasingly require a Human Oversight Warranty as a condition of coverage. This warranty is not aspirational language. In insurance law, warranties are conditions precedent to coverage — meaning that if a warranty is breached, the policy itself may be void.
What Underwriters Are Now Asking For
The structure of human oversight warranties varies by insurer and endorsement, but common requirements include the following:
- All AI-generated outputs must be reviewed by qualified personnel before use in professional engagements.
- Verification must be documented at the time of review — not reconstructed after a claim arises.
- Logs must be retained in a form that can be produced to insurers upon request.
- AI must not be used for specified high-stakes decisions without manual validation by a named responsible professional.
The Decisive Distinction: Intent vs. Evidence
Many professional services firms have some version of an internal AI policy. Some have training programmes. Some have compliance memos. These are expressions of intent. Underwriters are no longer asking about intent. They are asking for artifacts.
An artifact, in the governance sense, is a contemporaneously created, timestamped record that demonstrates that a specific human reviewed a specific AI output before it was acted upon professionally. The distinction between a policy statement and a governance artifact is the difference between saying ‘we take this seriously’ and proving it.
The Evidence Standard Has Changed
Pre-2026: An AI policy document and a training record were sufficient to satisfy most underwriter questionnaires.
Post-2026: Underwriters are requesting timestamped review logs, version control documentation, audit trails, and formal governance records that demonstrate contemporaneous compliance — not retrospective attestations.
The operational implication is significant. AI governance is no longer a cultural or values question. It is an evidentiary question. Professional firms must move from informal oversight — ‘we always check AI outputs’ — to structured artifact creation: documented, timestamped, auditable records that exist independent of anyone’s recollection.
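To make ‘documented, timestamped, auditable’ concrete, here is one possible shape for such an artifact: a hash-chained review log, sketched in Python. This is an illustration of the evidentiary standard, not an insurer-endorsed format, and every field name is an assumption.

```python
# Minimal sketch of a tamper-evident review artifact log. This illustrates
# the contemporaneous-evidence standard; it is not a prescribed format.
import hashlib
import json
from datetime import datetime, timezone

def create_artifact(reviewer: str, tool: str, output_sha256: str,
                    decision: str, prev_hash: str) -> dict:
    """Record that a named human reviewed a specific AI output at the
    time of review, chained to the previous record."""
    record = {
        "reviewer": reviewer,            # the accountable professional
        "tool": tool,                    # the AI system that produced the output
        "output_sha256": output_sha256,  # fingerprint of the exact output reviewed
        "decision": decision,            # e.g. "approved", "corrected", "rejected"
        "reviewed_at": datetime.now(timezone.utc).isoformat(),
        "prev_hash": prev_hash,          # links this record to the one before it
    }
    # The record's own hash becomes the chain link for the next artifact.
    record["hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
    return record

# Usage: fingerprint the AI output first, then log the human review of it.
draft = "AI-generated client advice draft..."
artifact = create_artifact(
    reviewer="J. Smith, Partner",
    tool="internal-llm-v2",
    output_sha256=hashlib.sha256(draft.encode()).hexdigest(),
    decision="corrected",
    prev_hash="0" * 64,  # genesis entry for a new log
)
print(artifact["hash"])
```

The chaining is the design choice that matters: because each record embeds the hash of the one before it, a log produced to an insurer can be checked for after-the-fact edits, which is exactly the line between contemporaneous evidence and retrospective attestation.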
Frequently Asked Questions
Q1: Who is liable when an AI tool produces a professional error?
The deploying organisation and the responsible professional remain liable. Courts across multiple jurisdictions treat AI as a tool, not an autonomous legal actor. The same liability doctrine that applies to other professional tools applies to AI.
Q2: What is Silent AI coverage, and does it still exist?
Silent AI coverage existed when professional liability policies neither explicitly included nor excluded AI-related risks. Underwriting ambiguity was often resolved in the insured’s favour under contra proferentem principles. This era has largely ended. By 2026, most major carriers have introduced explicit AI endorsements that eliminate the interpretive gap.
Q3: Can a firm argue that the AI made the mistake, not the professional?
No. This defence has been considered and rejected in multiple jurisdictions. Post-Mata v. Avianca, the argument that ‘the tool was responsible’ is not available. The professional who deployed the tool and acted on its output carries the liability.
Q4: What is the Vendor Liability Trap?
The risk that broad exclusion language — particularly the ‘any person or entity’ phrasing in forms like Berkley PC 51380 — captures AI use by third-party software vendors embedded in the firm’s technology stack, even where the firm did not knowingly activate the AI feature. The BCCRT’s reasoning in Moffatt v. Air Canada — that a company is responsible for all information on its platform regardless of which component generated it — reinforces the organisational accountability principle this trap exploits.
Q5: Are human oversight warranties negotiable?
Sometimes, but typically only in exchange for sublimits, higher deductibles, elevated premiums, or detailed governance documentation produced at underwriting. Negotiability generally requires demonstrated governance maturity — which, in practice, means producing artifacts.
Q6: Does disclosing AI use on the renewal questionnaire guarantee coverage?
No. Disclosure preserves underwriting eligibility — it prevents claims from being denied on misrepresentation grounds. It does not eliminate exclusions or override warranty conditions.
Q7: What is aggregation risk and why do reinsurers care about it?
Aggregation risk refers to the possibility that a single AI failure — a hallucination pattern in a widely used model, a flawed training dataset deployed at scale — generates correlated professional liability claims across thousands of insureds simultaneously. Reinsurers cannot absorb correlated losses across their entire portfolio without pricing them explicitly.
Q8: What steps should a firm take immediately?
Conduct a comprehensive AI inventory covering both direct tools and embedded vendor AI. Map third-party dependencies. Implement documented, contemporaneous verification protocols. Begin building a governance artifact record. Consult insurance advisors before renewal — not during it.
Conclusion: The End of Ambiguity and the Beginning of Evidence
The AI Insurance Cliff marks the end of an era. For several years, professional firms operated in a zone of policy silence and legal ambiguity that provided inadvertent protection. That protective fog has lifted.
Judicial precedent has clarified liability. Reinsurers have quantified systemic exposure. Primary carriers have rewritten coverage language. The market has bifurcated between firms with documented AI governance and firms without it.
The question is no longer whether AI introduces professional risk. That question was answered in 2024. The question now is whether your organisation can demonstrate — contemporaneously and evidentially — that it governs that risk.
In 2026, coverage is not assumed. It is engineered. And the dividing line between protection and exposure is not policy intent. It is documentation.
Documentation is not a bureaucratic burden. It is the asset that makes your firm insurable. The firms that understand this are building governance artifact systems. The firms that don’t are discovering the full meaning of absolute exclusion at their next renewal.
The practical framework for building that governance artifact system — the specific types of documentation that satisfy human oversight warranties, the Verifiable Human Contribution (VHC) protocol (patent-pending: AU 2025220863; PCT/IB2025/058808), and the Proof Before Scale™ methodology for determining which AI use cases deserve risk capacity — is the subject of Pillar 2 in this series.
For firms seeking a comprehensive, implementation-ready governance system, The Governance Artifact System (available on Amazon) provides the complete framework: the artifact templates, the six-week implementation sprint, and the audit trail architecture that converts informal AI oversight into insurable evidence.
About the Author & Disclosures
John Cosstick is Founder-Editor of TechLifeFuture.com and winner of the 2024 BOLD Award for Open Innovation in Digital Industries. He is a former banker, accountant, and certified financial planner, now a freelance journalist and author. John is a member of the Media, Entertainment & Arts Alliance (MEAA). You can visit his Amazon author page by clicking HERE.
Transparency and Disclosures
This article is part of a multi-pillar editorial series on AI governance and professional liability insurance for TechLifeFuture.com.
Affiliate disclosure: This article may contain affiliate links for Mindhive.ai, where John Cosstick is a Partner and minor shareholder. If you purchase a product or service through a linked affiliate, TechLifeFuture.com may receive a commission at no additional cost to you.
Intellectual property disclosure: The author developed the Verifiable Human Contribution (VHC) framework and Proof Before Scale™ methodology referenced in this article. These are patent-pending (AU 2025220863; PCT/IB2025/058808). While the VHC framework is designed to align with ISO/IEC 42001 and NIST AI RMF requirements, professionals should evaluate any governance framework against their specific regulatory obligations and risk profile.
All analyses are provided for informational and educational purposes only and do not constitute legal, financial, or professional advice. Readers should consult qualified professionals before acting on any information contained in this article.
References & Further Reading
(These sources underpin the legal, insurance, and governance analysis presented in this article.)
Swiss Re Institute (2023). The economics of digitalisation in insurance (sigma 5/2023). Swiss Re Institute Research, October 2023. https://www.swissre.com/institute/research/sigma-research/sigma-2023-05.html
Insurance Services Office (ISO) (2026). Form CG 40 47 01 26 – Exclusion – Generative Artificial Intelligence. Commercial General Liability endorsement, effective January 2026.
W. R. Berkley Corporation (2025–2026). Form PC 51380 – Artificial Intelligence Absolute Exclusion. Directors & Officers, Errors & Omissions, and Fiduciary Liability products filings.
Mata v. Avianca, Inc. (2023). United States District Court, Southern District of New York. Federal Rule 11 sanctions decision relating to AI-generated fabricated citations.
Moffatt v. Air Canada (2024). British Columbia Civil Resolution Tribunal (BCCRT). Decision establishing organisational liability for chatbot-generated misinformation.
Willis Towers Watson (WTW) (2025). Insuring the AI Age: Managing Emerging AI Liability Risks. https://www.wtwco.com/en-us/insights/2025/12/insuring-the-ai-age
Reuters Legal (2024–2025). Coverage of generative AI liability, professional responsibility, and insurer responses following Mata v. Avianca and related cases.
Australian Prudential Regulation Authority (APRA) (2025). CPS 230 Operational Risk Management standard, effective 1 July 2025. https://www.apra.gov.au/cps-230-operational-risk-management
AI Safety Institute Australia (2025). Guidance for AI Adoption (Version 2.0). Australian Government.