ISO Makes AI Excludable
GenAI liability is getting pushed out of CGL and into Cyber, Tech E&O, and standalone AI
ISO’s generative AI endorsements have now moved from “emerging risk chatter” to deployable default language. As we reported on Monday, January 26, carriers have a standardized, low-friction mechanism to carve GenAI-driven losses out of CGL across Coverage A, Coverage B, and Products/Completed Operations. The important point is not the existence of exclusions. It’s the way ISO accelerates adoption: once the language is clean, filed, and familiar, the market can shift in a single renewal cycle from “case-by-case ambiguity” to “default non-coverage.”
That shift forces a commercial reality: AI liability is being structurally unbundled from the baseline GL product and pushed into the specialty complex, whether carriers want it there or not. Coverage B-driven exposures like advertising injury, defamation, and IP-adjacent claims become harder to leave floating inside CGL. Products and completed ops become harder to underwrite when AI is embedded as a feature, not a vendor. The result is not simply tighter wording. It is premium migration toward Cyber, Tech E&O, media liability, and purpose-built AI forms, with more of the risk ending up in E&S where ambiguity can be priced and controlled.
The market’s early shape is already visible. One camp is reaching for blunt instruments: broad exclusions designed to eliminate “silent AI” exposure across multiple liability towers, even at the cost of broker friction and future insurability disputes. The other camp is moving to affirmative AI coverage, but only in controlled slices, with narrow grants, systemic protections, and underwriting that increasingly looks like risk engineering. Both directions are rational responses to the same underwriting problem: correlated failure modes, unclear attribution across vendor stacks, and weak claim forensics when logs, prompts, and versioning are not preserved.
The thesis is straightforward: the winners won’t be the carriers with the hardest exclusions or the loudest “AI strategy.” They’ll be the carriers that define tight AI boundaries brokers can sell, tie those boundaries to underwriting controls that actually move loss costs, and set an evidence standard that makes claims survivable instead of philosophical.
ISO Made “AI-Off” Default Language in CGL
ISO didn’t “change coverage.” It changed how fast the market can change coverage. With three January 2026 endorsements, carriers now have standardized, regulator-ready language they can drop into renewals without reinventing definitions, negotiating bespoke manuscripts, or relying on ambiguity to do the work later. The operational point is speed: once the language is clean and familiar, the market can move from “we’ll see how courts treat it” to “we’re not covering it” in a single renewal cycle.
What ISO shipped (and why the design matters)
ISO’s GenAI endorsements are deliberately modular. Carriers can exclude everything, exclude only the most litigable slice, or exclude products and completed ops where AI is embedded in what the insured delivers.
CG 40 47 removes Coverage A and Coverage B for claims arising out of generative AI
CG 40 48 removes Coverage B only for claims arising out of generative AI
CG 35 08 removes Products and Completed Operations bodily injury/property damage arising out of generative AI
That structure matters because it allows carriers to tune appetite without pretending GenAI is a single exposure. It also signals where ISO expects the market pain to show up first: Coverage B allegations and completed operations disputes tied to embedded AI.
The definition is broad on purpose
The endorsements define “generative AI” at the system level, not the vendor level. If a model trained on data can generate content or responses, including text, images, audio, video, or code, it falls inside the scope. The practical implication is that this is not limited to “chatbots.” It maps onto how GenAI is actually being used across commercial insureds: customer communications, marketing content, contract drafting, coding, and increasingly product features.
This is a content-at-scale risk, now inside liability.
Why Coverage B is the first pressure point
Coverage B is where GenAI creates immediate friction: defamation, disparagement, advertising injury, and IP-adjacent allegations are easy to plead, expensive to resolve, and hard to defend when provenance and human review are fuzzy. GenAI industrializes output while weakening the evidence trail. If a carrier wants to stop subsidizing that ambiguity quickly, stripping Coverage B is the cleanest lever.
Why Products and Completed Ops is the bigger signal
CG 35 08 is the part executives should read as the forward warning. Once GenAI is embedded in products or completed work, you’re no longer dealing with “a bad statement.” You’re dealing with downstream harm theories that look like defect, failure to warn, or reliance at scale. That is where accumulation risk starts to look real, especially when multiple insureds depend on the same upstream model behavior.
The market can live with “AI as a tool.” It cannot price “AI as a shared failure mode” inside baseline GL.
Why This Is a Market Structure Shift
The strategic signal here is not “ISO released endorsements.” It’s that GenAI liability is being pushed out of the industry’s default liability container and forced into specialty lines where it can be priced, constrained, and audited. That is a market structure shift, not a tightening cycle. CGL has always been the catch-all where ambiguous allegations go to die slowly and expensively. Generative AI is the exact kind of exposure that benefits from that ambiguity, which is why carriers are moving to end it.
This is the “silent AI” moment, and the market is explicitly pattern-matching to silent cyber. The point is not that the perils are identical. The point is that ambiguity at scale is uninsurable at scale. When a fast-growing exposure sits inside legacy forms without clear boundaries, it produces surprise coverage, unpredictable litigation, and reserve distortion. Carriers learned that lesson the hard way in cyber. This time, they are choosing to draw the line early, before the loss curve gets a chance to teach the lesson again.
Silent cyber was fixed after losses. Silent AI is being fixed before them.
Evidence the market is already moving
The endorsement rollout is arriving into a market that has already started splitting into defensive retreat and controlled affirmative coverage.
On the exclusion side, carriers are putting AI language into multiple liability towers, not just GL, because the loss theories don’t respect product lines. Evercore has flagged that insurers are introducing exclusions for claims arising out of GenAI to protect against silent coverage, explicitly paralleling the silent cyber cleanup. Cowen has framed AI as the next “new, complex risk” that will live in E&S first, before anything resembles standardization.
On the affirmative side, Lloyd’s-backed entrants are launching standalone AI liability products designed for the coverage gap that exclusions create. That’s the key tell: the market is not debating whether AI is insurable. It’s deciding where it belongs and under what constraints.
Why the risk is migrating to E&S, Tech E&O, and Cyber
AI liability is not being pushed out of CGL because carriers think it’s “scary.” It’s being pushed out because it breaks the core mechanics that make CGL workable at scale.
1) Correlation and accumulation are real, not theoretical
A single upstream model change can propagate across thousands of insureds simultaneously. That is not how GL is built to behave. It is how catastrophe behaves.
2) Attribution is structurally messy
AI stacks are multi-party by design: foundational model, fine-tuning layer, application wrapper, vendor integrations, internal controls, and human oversight. When a claim hits, the fight becomes “whose AI” and “whose failure” before it becomes “what damages.” That is a coverage dispute factory.
3) The claims evidence trail is weak across most insureds
Forensics are straightforward when you can recreate the event. AI-driven incidents are often not replayable unless prompts, outputs, model versions, and control logs are preserved. Without that, claim adjudication becomes philosophical and expensive. Insurers default to tightening language because the investigative surface is too fragile.
4) The loss types sprawl across towers
One incident can trigger multiple lines at once: EPLI for bias, Tech E&O for output errors, D&O for governance failures, Cyber for data exposure, GL for advertising injury theories. That cross-line coupling is how portfolios get surprised.
This is why the market is drifting toward E&S and specialty forms. Not because those lines magically “understand AI,” but because they are structurally built to impose conditions, price uncertainty, sublimit exposure, and negotiate bespoke boundaries without breaking the distribution machine.
The commercial consequence: premium migration, not just exclusions
Once GenAI is carved out of baseline GL, demand doesn’t disappear. It relocates. The premium fight moves to whoever can write AI liability with enough specificity to be sellable and enough control to be survivable. The first land grab is already happening in three places:
Tech E&O as the most natural home for “AI output caused economic harm” claims
Cyber as the adjacent home for privacy, data misuse, and AI-enabled security failures
Standalone AI products as the fastest way to sell clarity into a gap that exclusions created
ISO didn’t just make it easier to exclude AI. It made it easier for a new AI liability market to form.
The Market Is Splitting Into Extremes
The clearest sign this is real is not what ISO published. It’s what carriers are already doing with their own paper. The market is forming around two instincts: eliminate AI ambiguity everywhere, or write affirmative AI coverage in narrow, controlled slices. Both approaches are rational. Neither is stable at the extreme. The next winners will be the ones who take the discipline of the exclusion camp and combine it with the monetization logic of the affirmative camp.
Extreme 1: The “Absolute Exclusion” posture
WR Berkley’s “absolute” AI exclusion is the cleanest expression of what this camp is trying to accomplish: remove AI from the coverage conversation entirely by making the exclusion attach to almost any plausible AI connection. It is designed less like a narrow exclusion and more like a kill switch across management and professional liability lines.
The breadth is the point. The language doesn’t just target bad AI outputs. It targets:
AI-generated content and communications
Failure to detect or identify third-party AI content
“Inadequate” AI policies, practices, procedures, or training
Products or services incorporating AI
Chatbot or virtual agent statements and representations
AI-related disclosures and statements about AI capabilities
Violations of AI-related laws and regulatory demands to investigate AI risk
That scope is not subtle. It reflects a view that AI will increasingly be impossible to separate from normal operations, so the only safe move is to make “any AI involvement” a coverage off-ramp.
The catch is enforceability and distribution. The more “absolute” the exclusion becomes, the more it invites disputes over causation and remoteness. If AI becomes a background tool across the enterprise, absolute language stops functioning as a risk carve-out and starts functioning as a practical withdrawal from the class of insureds you still want to write.
Extreme 2: The “controlled affirmative” posture
At the same time exclusions are spreading, specialty markets are doing something more interesting: building insurance products that assume AI will be used, assume claims will happen, and try to price the narrow band of loss scenarios that can be bounded.
Testudo’s Lloyd’s-backed AI liability product is the cleanest example because it is designed around the actual first-wave allegation set, not around futuristic catastrophe narratives. The coverage is positioned as claims-made liability, with limits up to roughly the high single-digit millions, and it explicitly targets the exposures most likely to hit insureds that deploy GenAI in customer-facing workflows:
negligent AI errors and omissions causing third-party financial loss
IP infringement (copyright and trademark)
defamation
hallucinations, malfunction, and model drift
unauthorized data disclosure tied to AI use
Just as important is what gets excluded. The product is aimed at AI deployers and users, not foundational model developers or vendors. That segmentation choice is underwriting strategy. It avoids the deepest part of the accumulation problem and focuses on the broader, more scalable middle market of enterprises using vendor AI tools.
The broader market is moving similarly. Armilla is taking a comparable approach with affirmative AI coverage tied to hallucinations, degradation, and malfunctions. Munich Re has been in the space longer, but with a different design logic: performance and reliability coverage anchored in technical validation and measurable KPIs.
The hybrid approaches are the most revealing
The most informative moves are not the extremes, but the compromises.
Chubb’s reported posture is effectively “some yes, systemic no.” Cover certain AI incidents, but exclude widespread events that impact many clients at once. That is an explicit acknowledgement that accumulation is the underwriting core of the problem, and it also foreshadows where policy language will evolve next: exclusions and sublimits tied to correlated events, shared model failures, or mass-impact triggers.
QBE’s approach is another kind of hybrid: affirmative AI-specific coverage that looks like expansion, but uses tight sublimits to control exposure, including AI regulatory risk components. That’s not marketing fluff. It’s a signal that carriers believe some AI exposures are insurable when they can be defined as discrete, bounded loss types.
What these extremes tell you about the next market clearing price
The market is not converging toward “AI coverage.” It is converging toward AI coverage with constraints that are legible to underwriters, brokers, and claims.
The near-term equilibrium looks like this:
baseline GL and broad liability towers tighten and exclude to avoid silent exposure
E&S markets intermediate the uncertainty with bespoke language and pricing
standalone AI products grow by selling clarity into the gap
hybrid forms emerge that cover isolated incidents but carve out systemic accumulation
The executive takeaway is simple:
If you are not planning for where AI liability sits in your portfolio, you’re going to inherit it accidentally, either through silent coverage you didn’t price, or through adverse selection when everyone else tightens and the worst risks come looking for the last carrier saying yes.
The Winning Formula: Controlled Coverage + Guardrails + Claims-Proof Evidence
There’s an understandable urge to treat this as a solvable design problem: tighten definitions, add underwriting controls, and write affirmative coverage that monetizes the gap. That direction is probably right, but it’s not risk-free, and it’s not proven yet. GenAI liability is still under-modeled, litigation-driven, and operationally hard to investigate in real time. So the posture here shouldn’t be certainty. It should be disciplined experimentation: write coverage you can explain, underwrite it with real guardrails, and assume the first wave of claims will test every assumption you made about causation, evidence, and aggregation.
What “controlled coverage” means in practice
This is less about having the “best” AI policy and more about avoiding two bad outcomes: writing something so broad it becomes unpriceable, or writing something so restrictive it becomes commercially irrelevant. Controlled coverage sits in the middle, where you can plausibly bind risk with a defensible theory of what you are covering and why.
In practice, controlled AI coverage tends to share four traits:
clear insuring triggers tied to identifiable failure modes
definitions that align with how enterprises actually deploy AI
systemic risk protections that limit portfolio accumulation
underwriting conditions that measurably reduce loss likelihood
Even then, this is not a guarantee of profitability. It’s a way to avoid being surprised by silent coverage or adverse selection.
Underwriting guardrails: what underwriters will increasingly ask for
Underwriting is shifting from “do you use AI?” to “how do you control it?” The controls being discussed are increasingly concrete and auditable, but the uncomfortable reality is that many insureds will have policies on paper long before they have true technical enforcement in place.
Governance and accountability
documented AI governance framework (model ownership, escalation, approvals)
human-in-the-loop protocols for higher-stakes decisions
clarity on executive accountability for control failures in regulated contexts
model cards or equivalent documentation (intended use, limits, known risks)
Vendor and last-mile control
role-based restrictions on which tools and extensions can be used
controls that block uploading sensitive fields into third-party models
governed browsing sessions to isolate identity and credentials
third-party vendor documentation and governance artifacts
Agentic oversight
monitoring for autonomous agent failure modes and runtime threats
permissioning for agents that can execute actions, not just draft outputs
performance monitoring for drift and degradation over time
boundaries between assisted workflows and automated decisions
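The boundary between assisted workflows and automated decisions is ultimately a permissioning problem. A minimal sketch of an agent action gate, assuming an invented split between drafting actions an agent may take autonomously and executing actions that require a human in the loop (action names and lane labels are hypothetical, not from any real product):

```python
# Hypothetical permission gate separating "draft" actions (autonomous-OK)
# from "execute" actions (human approval required). Unknown actions are
# denied by default, which is the posture underwriters will want to see.
ALLOWED_AUTONOMOUS = {"draft_email", "summarize_document"}
REQUIRES_HUMAN = {"send_email", "issue_refund", "deploy_code"}

def gate(action: str, human_approved: bool = False) -> str:
    """Return the policy decision for a proposed agent action."""
    if action in ALLOWED_AUTONOMOUS:
        return "allow"
    if action in REQUIRES_HUMAN:
        return "allow" if human_approved else "escalate"
    return "block"  # default-deny anything not explicitly permissioned
```

The design choice that matters is the default-deny branch: an agent stack where unknown actions fall through to “allow” is exactly the environment where claims become impossible to analyze.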
These guardrails don’t “solve AI risk.” They give carriers a basis to differentiate risks and avoid writing into environments where claims will be impossible to analyze.
Claims-proof evidence: what makes coverage defensible after the fact
In practice, many AI disputes won’t turn on whether AI was used. They’ll turn on whether anyone can reconstruct what happened. The evidence layer is becoming just as important as the wording layer.
The emerging standard is structured telemetry: logs that allow reconstruction of the incident and causation path.
Minimum evidence stack that will matter in disputes
identity and device posture
application and URL accessed
data classification context
action taken (copy/paste, upload, publish, deploy)
policy decision triggered (allow/block/flag)
prompt text where permissible, plus output metadata
model/software versioning at time of incident
retention and trace logs for system behavior over time
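The stack above can be made concrete as a structured log record. A minimal sketch, assuming an invented schema (field names are illustrative, not an industry standard), that captures one adjudicable telemetry entry and hashes the prompt where raw text cannot be retained:

```python
from dataclasses import dataclass, asdict
import datetime
import hashlib
import json

@dataclass
class AIIncidentRecord:
    """One structured telemetry entry. Schema is illustrative only."""
    timestamp: str
    user_id: str
    device_posture: str        # e.g. "managed" / "unmanaged"
    application: str
    url: str
    data_classification: str   # e.g. "public" / "confidential"
    action: str                # e.g. "copy", "upload", "publish", "deploy"
    policy_decision: str       # "allow" / "block" / "flag"
    prompt_sha256: str         # hash instead of raw text where retention is restricted
    model_version: str         # model/software version at time of incident

def make_record(user_id, application, url, classification, action,
                decision, prompt_text, model_version, device_posture="managed"):
    return AIIncidentRecord(
        timestamp=datetime.datetime.now(datetime.timezone.utc).isoformat(),
        user_id=user_id,
        device_posture=device_posture,
        application=application,
        url=url,
        data_classification=classification,
        action=action,
        policy_decision=decision,
        prompt_sha256=hashlib.sha256(prompt_text.encode()).hexdigest(),
        model_version=model_version,
    )

record = make_record("u-123", "chat-assistant", "https://example.internal/chat",
                     "confidential", "upload", "flag",
                     "draft a customer refund email", "model-v2.3.1")
print(json.dumps(asdict(record), indent=2))
```

A record like this is what turns “whose AI, whose failure” into a reconstructable causation path rather than a discovery fight.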
When this evidence exists, claims become adjudicable. When it doesn’t, AI claims become litigation-shaped and inconsistency-prone. That’s where underwriting intent gets separated from actual portfolio outcomes.
Risk segmentation: the underwriting move that will matter most
The market is moving away from treating “AI exposure” as one bucket. The segmentation logic being used is imperfect, but it’s directionally aligned with severity mechanics and plaintiff behavior.
Internal assistance tools produce different allegation patterns than systems that publish, recommend, decide, or communicate externally. Customer-facing exposure creates faster third-party harm and cleaner plaintiff narratives.
Agentic systems that execute actions create higher-severity failure modes and less settled legal attribution. That increases pricing uncertainty and increases the odds of multi-line tower disputes.
Vertical severity also matters. Healthcare and other bodily-injury-adjacent use cases behave differently than legal, insurance, and back-office automation where harm is more often economic.
Policy structuring: necessary, not sufficient
Explicit language is becoming table stakes to end silent AI. But exclusions and endorsements are not a substitute for risk selection. The more durable approach pairs language with operational controls and evidence requirements that make the product behave consistently in claims.
In practice, that usually means combining:
explicit definitions that reduce ambiguity
targeted grants that match real exposure categories
systemic protections against widespread events
sublimits and conditions that reflect what can be validated
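To see how those four pieces interact, here is a minimal sketch of a policy-structure skeleton, assuming invented grant names, sublimit amounts, and a blunt systemic carve-out (none of these figures come from any filed form):

```python
# Hypothetical policy skeleton: targeted grants with sublimits,
# a systemic-event exclusion, and underwriting conditions.
POLICY = {
    "grants": {
        "ai_errors_omissions": {"sublimit": 5_000_000},
        "ai_ip_infringement": {"sublimit": 2_000_000},
        "ai_regulatory": {"sublimit": 1_000_000},
    },
    "exclusions": ["systemic_event"],  # widespread, multi-insured model failures
    "conditions": ["telemetry_retention", "human_in_loop_high_stakes"],
}

def payable(grant: str, loss: int, systemic: bool) -> int:
    """Return the payable amount under this sketch: zero if the loss is
    systemic or falls outside a defined grant, otherwise capped at sublimit."""
    if systemic or grant not in POLICY["grants"]:
        return 0
    return min(loss, POLICY["grants"][grant]["sublimit"])
```

The point of the sketch is the shape, not the numbers: every grant is bounded, the systemic trigger zeroes out accumulation, and the conditions list is what the guardrails section above feeds into.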
The takeaway isn’t “this is the answer.” It’s that this is the most credible path the market is converging on, because it gives carriers a way to participate without pretending AI is either fully insurable or fully excludable.
What Executives Should Do Now
ISO’s January 2026 GenAI endorsements didn’t “change liability.” They industrialized the market’s ability to reassign it. Baseline GL tightens, AI exposure migrates into specialty towers, and the market splits between blunt exclusions and narrow affirmative coverage that still hasn’t been fully loss-tested.
Three Important Moves
1) Set your default posture on GL, not your opinion on AI
Decide what you will do at renewal on Coverage B, completed ops, and any “arising out of GenAI” wording. If you don’t set the default centrally, it will get set for you account by account.
2) Build a broker-usable AI appetite map
Keep it simple and operational:
internal vs customer-facing AI
assisted workflows vs agentic systems
economic loss vs BI-adjacent severity
That’s enough to drive pricing, referrals, and consistency without pretending you can perfectly classify every deployment.
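The three axes above can be encoded as a simple lookup so brokers and underwriters get the same answer every time. A minimal sketch, assuming invented lane names and weights (the scoring is illustrative, not a calibrated model):

```python
# Illustrative appetite map: three binary axes -> underwriting lane.
# BI-adjacent severity is weighted heavier; weights and lanes are invented.
def appetite(customer_facing: bool, agentic: bool, bi_adjacent: bool) -> str:
    score = int(customer_facing) + int(agentic) + 2 * int(bi_adjacent)
    if score == 0:
        return "standard"       # internal, assisted, economic-loss-only
    if score <= 2:
        return "referral"       # priced, but needs underwriter sign-off
    return "decline_or_es"      # push to E&S / bespoke terms
```

A lookup this crude is still enough to drive the consistency the section describes: the same deployment profile always lands in the same lane, which is what referral discipline actually requires.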
3) Require a minimum proof standard to make claims survivable
If you can’t reconstruct what happened, you don’t have “AI coverage,” you have litigation exposure. Make logging, versioning, and governance artifacts part of the underwriting file.
Bottom line
The carriers that win won’t be the ones that exclude the most or say yes the fastest. They’ll be the ones that define boundaries brokers can sell, control accumulation, and can prove causation when it matters.
About The Intelligence Council
The Intelligence Council publishes sharp, judgment-forward intelligence for decision-makers in complex industries. We serve founders, operators, strategists, and investors who need clarity. Our weekly briefs, deep dives, and sentiment indexes are built to help you make money, manage risk, and outthink competitors. No puff pieces. No b.s. Just the clearest signal in a noisy, complex world.
Our content for P&C Insurance spans the overall space, personal lines, commercial, and cyber. From market sensing to go-to-market clarity, we deliver the strategic signals leaders need to move first and act confidently.

