11/6/25

By: Jason G. Weiss
A new report from Anthropic, a leading artificial intelligence (AI) company, highlights a troubling development in the cybersecurity landscape: threat actors are now using AI to automate nearly every step of a cyber extortion scheme. According to Anthropic, at least 17 companies were targeted in what it calls the most comprehensive AI-driven cybercrime operation to date.
In this case, the attacker allegedly used Claude Code, Anthropic’s coding assistant, to:
• Identify vulnerable companies.
• Develop malicious software to steal data.
• Organize and analyze stolen files to pinpoint sensitive material.
• Review financial records to calculate realistic ransom demands.
• Draft extortion emails requesting payment in bitcoin.
This attack represents the emergence of “Agentic AI”: autonomous AI systems capable of planning and executing complex tasks as part of a cyberattack without constant human oversight. It also marks the first publicly documented instance of a leading AI platform being exploited to conduct a full-scale cyber extortion campaign.
According to the BBC, the threat actors used Claude to “make both tactical and strategic decisions, such as deciding which data to exfiltrate and how to craft psychologically targeted extortion demands.” The agentic AI even recommended appropriate ransom amounts for each victim.
Anthropic also reported detecting “vibe hacking,” the use of AI to coordinate and execute simultaneous, multi-faceted cyberattacks, with the AI autonomously writing code to infiltrate organizations. In this case, 17 different targets were hit, including government agencies. Wired magazine has described vibe hacking as the “next AI nightmare.”
Cyber extortion threats are not new, but AI makes them faster, more efficient and exponentially harder to detect. This evolution raises significant concerns for legal, compliance and security professionals alike:
• Heightened Risk: AI enables hackers to scale attacks in ways previously impossible.
• Reputational Damage: Leaked trade secrets or customer data can severely harm brand trust.
• Compliance Pressure: Regulators are increasingly scrutinizing how companies prepare for and respond to AI-enabled threats.
To stay ahead of these risks, organizations should take proactive measures.
AI presents extraordinary opportunities for innovation and business growth — but it is also being weaponized by cybercriminals. Organizations that adapt their security and compliance frameworks to the realities of AI-driven threats will be best positioned to safeguard their data, reputation and bottom line.
For more information, please contact Jason G. Weiss at jason.weiss@fmglaw.com or your local FMG attorney.
Information conveyed herein should not be construed as legal advice or represent any specific or binding policy or procedure of any organization. Information provided is for educational purposes only. These materials are written in a general format and not intended to be advice applicable to any specific circumstance. Legal opinions may vary when based on subtle factual distinctions. All rights reserved. No part of this presentation may be reproduced, published or posted without the written permission of Freeman Mathis & Gary, LLP.