It’s not just humans vs. humans in cyber anymore – it’s AI vs. AI, as companies deploy defensive AI and criminals counter with offensive AI. In this evolving threat landscape, cyber insurance must also evolve. Call it Cyber Insurance 2.0: policies that explicitly address AI-on-AI threats and ensure coverage for incidents that involve advanced automation.
AI-Powered Attacks: The New Normal in Cyber Threats
Phishing & Social Engineering: Phishing remains the number one way bad actors breach organizations. But forget the clumsy, typo-filled scams of yesterday. Now, large language models can generate phishing emails that are highly personalized and grammatically perfect. They can mimic the tone of your boss or the vocabulary of your vendor. Attackers feed an AI information scraped from LinkedIn or past communications to craft an email that sounds just right. And they don’t do this one at a time – AI allows them to automate and scale. Studies have shown a massive spike (hundreds of percent increase) in phishing volume attributed to AI tools churning out credible bait.
Relatedly, deepfakes have emerged as a potent tool. We’ve seen cases where criminals cloned a CEO’s voice to call a subordinate, instructing them to wire money. With video deepfakes, an attacker could even simulate a live video call with a familiar face, perhaps to get a confidential briefing or authorize an unusual transaction. These are essentially supercharged social engineering attacks leveraging AI to exploit human trust in senses and relationships.
Malware & Exploitation: AI is also enhancing malware. For instance, malicious programs can use AI to decide in real-time how to avoid detection, by understanding the environment they are in (like a virus that morphs its behavior if it detects certain antivirus software). There’s talk of AI-driven bots that can scan networks and find vulnerabilities faster and more intelligently than any human hacker could. Also, automated exploit kits might adapt to the target – if one method fails, AI could help pick an alternative path with higher likelihood of success, all without manual control.
Adversarial AI and Data Poisoning: This is a newer angle where attackers target the AI models of organizations. For example, feeding bad data to an AI system (poisoning) so that it makes faulty decisions, or tricking a machine vision AI with manipulated inputs. Attackers might use their AI to find blind spots in your AI. This is truly AI-on-AI battle: think of an attacker using generative AI to come up with inputs that break a victim’s AI system (like prompt injection attacks on chatbots to get them to spill info).
Gaps in Traditional Cyber Coverage for AI-related Incidents
Traditional cyber insurance has come a long way in covering various risks: data breaches, ransomware, business interruption, cyber extortion, etc. However, AI-driven incidents can blur lines and introduce ambiguities:
- Social Engineering Fraud Coverage: Many cyber policies either exclude or sub-limit coverage for fraud induced by tricking employees (often this falls under crime insurance). If an employee is duped by an AI-crafted deepfake into sending money, is that a cyber incident or just human error? Some policies might only pay a limited amount (or not at all) unless the company bought a social engineering fraud rider. As deepfakes become more common, companies may not realize a standard cyber policy might not cover that scam fully. Insurers in 2.0 policies should ideally include broader cover for such scenarios, since they’re so prevalent.
- “Gray Area” Between Cyber and Crime: Deepfake or impersonation attacks often straddle cyber (there’s a tech component) and crime (fraud, theft) coverage. Claim handling can get messy if the cyber insurer says “this was voluntary transfer of funds, talk to your crime policy” and the crime insurer says “this was a cybersecurity failure, talk to cyber insurer.” Cyber 2.0 might mean integrating these or at least clarifying in one policy that both hacking and deception losses are covered, AI-assisted or not.
- Coverage of AI System Failures: Suppose a hospital’s chatbot gets hit by a prompt injection attack and leaks patient data. Traditional cyber covers data breaches, yes. But will an insurer consider a prompt injection (where no firewall was broken, but the AI was tricked via normal inputs) as a “covered breach”? Insurers must update terminology to ensure that any unauthorized data disclosure, even through an AI’s logic flaw, triggers coverage. Also, if an AI malfunction (perhaps due to malicious input) causes downtime or loss, that should trigger business interruption coverage. If policy language is old, it might require “malicious code” or “security failure” as a trigger. Prompt injection isn’t classic malware; it’s abusing a feature. Insurers are indeed looking at this and some are adding language to affirm coverage for AI-related incidents like this.
- Exclusions for AI Content: We’ve seen some insurers consider excluding liability for content created by AI (worrying about defamation, IP infringement, etc.). Cyber policies often cover media liability or wrongful acts like privacy violations. If an AI does something like publish slander or expose private info, insurers need to decide to cover it or exclude it. A forward-looking policy likely covers it, understanding that AI is part of operations now, not some exotic external force.
- State-Sponsored AI Attacks: A lot of advanced attacks (with AI or not) may involve nation-state actors or APT groups. Some insurers exclude “acts of war” or nation-state cyberattacks explicitly. The tricky part is if AI just amplifies everyday crime, that’s fine, but if a nation-state uses AI to attack infrastructure, insurers might invoke war exclusions. Cyber 2.0 policies might refine these clauses, perhaps covering more state-linked incidents except full-scale cyberwar, or providing specific endorsements for critical sectors concerned about that.
How Cyber Insurance is Adapting
Insurance providers are not sitting idle:
- Explicit Deepfake Coverage: Some are adding wording to cover impersonation and deepfake scams under the main cyber policy rather than treating them as uncovered fraud. For example, covering the financial loss if an employee is tricked by a fake audio or video, up to certain limits, possibly after some verification procedure (insurers might still require that the client have some verification controls or training in place).
- AI Incident Definition: Policies are updating definitions of a covered security event to include things like “unauthorized access, use, or manipulation of an Insured’s computer system, including any artificial intelligence system or algorithm, that results in…” etc. This ensures that if your AI gets manipulated, it’s considered a security incident.
- Coverage for Model and Data Integrity: We might see new coverage grants for things like “loss or corruption of training data” or “malicious alteration of algorithms.” For instance, if an attacker somehow tampered with your AI model (which could be a new kind of sabotage), it might cause lots of downstream loss. Insurers could cover the cost to restore the model and any business losses during the period it was faulty.
- First-party and Third-party Mix: Many AI-related attacks cause both first-party costs (investigation, recovery, business interruption) and third-party liability (privacy breach, harm to customers). Cyber policies are already structured to handle both, but the key is ensuring no gap. For example, if a prompt attack causes a privacy breach, the policy should pay for forensic investigation, customer notification, credit monitoring (first-party costs), and also defend and indemnify you if customers sue for the breach (third-party). Cyber 2.0 policies explicitly list these scenarios.
- Incident Response Specialization: Insurers are partnering with incident response firms who have AI expertise. When a client has a deepfake or AI phishing incident, having experts who know how to trace it, or law enforcement contacts for such scams, is valuable. Some insurers offer a 24/7 breach hotline – now they’ll have to field calls not just about ransomware, but “we think our CFO was deepfaked.” Having playbooks for that is new territory. For example, in a fund transfer via deepfake scenario, the response might involve quickly contacting banks and law enforcement to claw back funds (some insurers have had success recovering money if notified swiftly).
- Security Services and Tools: The best cyber insurers try to reduce claims by offering or requiring security measures. In era of AI threats, that could include:
- Requiring or incentivizing multi-factor authentication for financial transactions (so even if someone is deepfaked, a second factor might stop a transfer).
- Providing deepfake detection tools or training: insurers might supply clients with software that can help detect synthetic audio/video or encourage them to have verification protocols for unusual requests (like a safe phrase or secondary channel confirmation).
- Recommending or mandating keeping AI systems updated and tested (for example, if you deploy an AI chatbot, insurers might expect you to follow OWASP’s AI security guidelines).
- Pushing the concept of “trust but verify” for AI outputs in sensitive processes. If an AI is in a security role (like monitoring logs), maybe require a periodic human review to ensure the AI itself isn’t compromised.
AI-on-AI Combat: Defense Meets Offense
We should also consider how insurers will view the interplay of defensive AI. Many organizations deploy AI for cybersecurity (user behavior analytics, anomaly detection, automated incident response). But what if an attacker’s AI outsmarts or corrupts the defender AI? For instance:
- Adversaries might intentionally feed misinformation to a company’s threat detection AI so it either triggers false alarms (causing costly disruptions) or misses real attacks.
- Or, if an insurer gives a discount because a company uses an AI-driven security platform, what if that platform fails spectacularly against an AI attack? Insurers might then calibrate their models: “AI-based defenses reduce some risks but introduce others.”
Cyber Insurance 2.0 might include failure of security AI as a peril. Typically, if a security control fails, it’s not excluded per se, unless it was gross negligence or misrepresentation by the insured (e.g., lying about having it). But insurers could tighten underwriting: e.g., if a client relies heavily on AI defenses, they might ask, “What if that fails? Do you have human oversight? A backup system?” If a loss occurred because the AI defense glitched, the claim would still be paid, but it’s a learning point for insurers to refine future underwriting.
Regulatory Environment and Insurance
Regulators (like the SEC, EU bodies, etc.) are starting to take note of AI as well. For instance, if deepfake scams become rampant, we might see regulatory guidance that companies need internal controls to verify identities, etc. If companies fail and get penalized, insurance might cover those compliance failures if the policy is broad enough (some cyber policies cover certain fines if allowed). Cyber Insurance 2.0 could explicitly mention coverage for regulatory investigations or fines arising from AI-related incidents (e.g., if data was exposed via an AI, or if the company is found to not have proper safeguards against AI fraud as required by a future law).
One tangible example: The SEC in the US has new cyber disclosure requirements for public companies. If an AI-driven incident happens, companies must disclose it. If they handle it poorly and face shareholder suits or SEC inquiries, insurers will cover the defense under D&O or cyber depending on what it is. Some insurance offerings now highlight they cover the costs of navigating those disclosures and potential liabilities around them.
The Need for Speed and Adaptability
AI attacks can happen at machine speed – like automated spear phishing waves or instantaneous deepfake deployment. Insurers usually handle incidents case-by-case with human adjusters and panels of experts. In the future, possibly insurers might also use AI on their end: for example, to triage incidents, predict which might escalate, or even to guide clients in real-time (imagine an insurer-provided AI assistant that helps a client through the first moments of a deepfake fraud discovery: “Step 1: isolate the communication, Step 2: notify bank…”).
But one of the biggest challenges is data: these AI-related incidents are relatively new, so actuarial data is thin. Insurers in the 2.0 phase are basically making educated guesses and closely watching claim trends. Pricing might be volatile until there’s more clarity. If deepfake fraud losses mount, premiums for that coverage will go up or terms might tighten.
Summing Up: The Policy of Tomorrow
A truly forward-looking cyber insurance policy crafted for AI threats might include an “AI Threat Endorsement” listing:
- Coverage for losses from deepfake or AI-augmented social engineering, including direct financial loss and costs to respond.
- Affirmation that any incident involving AI manipulation of systems is treated as a covered cyber event (not excluded as “internal error” or something).
- Perhaps a carve-back of exclusions: e.g., if there’s an exclusion for wire transfer fraud generally, the endorsement might give back coverage when the fraud used deepfake tech to circumvent controls.
- Additional sublimit for reputational damage mitigation: In case a deepfake is used to smear a company (like a fake video of a CEO saying something terrible goes viral), some policies might cover PR and crisis management costs. This crosses into reputational insurance, but cyber policies have started to cover some PR costs after breaches – this could extend to deepfake scandals.
- Cooperation clauses about AI data: Insurers might require access to AI logs or systems post-incident to better analyze what happened (this helps them learn and also verify claim details).
Cyber Insurance 2.0 is not a complete reinvention – it’s an adaptation. The core remains the same: transferring risk of unexpected tech-driven losses. But the details and emphases are shifting. Insurers are learning that an email can be a weapon of deception like never before, and a “secure system” can be unwound by a cleverly constructed sentence fed to an AI. Therefore, policies and services around them are becoming both more explicit and more holistic:
- Explicit in naming perils (no more assuming “phishing” is covered, they’ll say it).
- Holistic in addressing prevention, incident response, and recovery as a package (since fighting AI with AI may be necessary).
In conclusion, businesses facing AI-driven attacks should review their insurance coverage with these new threats in mind. Does your policy mention social engineering, or does it exclude “voluntary transfer”? Do you have coverage if your chatbot misbehaves due to a hack? If not, talk to your insurer about enhancements. Meanwhile, adopt strong security practices: employee training to spot deepfakes, verification steps for sensitive actions, robust monitoring of AI systems, etc. Insurance works best as part of a layered defense, not the only defense.
Cybercriminals may have shiny new AI arrows in their quiver, but with Cyber Insurance 2.0 and solid cyber hygiene, businesses can stay resilient. It’s an arms race, yes, but not a hopeless one. Every attack method AI enables, AI can also help defend – and insurance is there to cover the residual risk that gets through. The policy of the future is being written now, in real time, as we encounter these novel threats and figure out how to insure against digital foes that think and learn at machine speed.