Tony Kim
Feb 23, 2026 18:32
Anthropic reveals DeepSeek, Moonshot, and MiniMax ran industrial-scale distillation attacks using 24,000 fake accounts to steal Claude AI capabilities.
Anthropic dropped a bombshell Tuesday, publicly naming three Chinese AI labs (DeepSeek, Moonshot, and MiniMax) as perpetrators of coordinated campaigns to steal Claude's capabilities through over 16 million fraudulent API exchanges.
The attacks used roughly 24,000 fake accounts to circumvent Anthropic's regional access restrictions and terms of service. One proxy network alone managed more than 20,000 simultaneous fraudulent accounts, mixing distillation traffic with legitimate requests to evade detection.
The Numbers Tell the Story
MiniMax led the assault with 13 million exchanges targeting agentic coding and tool orchestration. Moonshot followed with 3.4 million exchanges focused on computer-use agent development and reasoning capabilities. DeepSeek's campaign, while smaller at 150,000 exchanges, employed notably sophisticated techniques, including prompts designed to make Claude articulate its internal reasoning step by step, essentially generating chain-of-thought training data on demand.
Anthropic traced several DeepSeek accounts directly to specific researchers at the lab through request metadata analysis.
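The mechanics of that last technique can be illustrated with a minimal, purely hypothetical sketch: wrap ordinary questions in a template that asks the model to externalize its reasoning, then store the prompt/response pairs in a supervised fine-tuning format. Nothing here reflects the labs' actual tooling; `query_model` is a stand-in for an LLM API call, and the template wording is invented.

```python
# Hypothetical illustration of chain-of-thought distillation harvesting.
# No real API is contacted; query_model() returns a canned response.

COT_TEMPLATE = (
    "Think step by step and explain your full reasoning "
    "before giving the final answer.\n\nQuestion: {question}"
)

def query_model(prompt: str) -> str:
    """Placeholder for an LLM API call."""
    return f"Step 1: ... Step 2: ... Final answer to: {prompt[:40]}"

def harvest_cot_pair(question: str) -> dict:
    """Elicit step-by-step reasoning and record the pair as training data."""
    prompt = COT_TEMPLATE.format(question=question)
    return {"prompt": prompt, "completion": query_model(prompt)}

dataset = [harvest_cot_pair(q) for q in ["What is 2+2?", "Why is the sky blue?"]]
```

At scale, every such exchange becomes a labeled training example, which is why the 150,000-exchange figure, though small next to MiniMax's volume, could still yield disproportionately valuable reasoning data.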
Why This Matters Beyond Corporate Espionage
The timing here isn't coincidental. OpenAI publicly accused DeepSeek of distilling ChatGPT just three days earlier, on February 21. Google's Threat Intelligence Group flagged elevated distillation activity on February 16, including a campaign using over 100,000 prompts to replicate Gemini's reasoning abilities.
What makes this particularly concerning? Anthropic argues these attacks undermine U.S. export controls on advanced chips. Foreign labs can effectively bypass innovation requirements by extracting capabilities from American models, and they need those restricted chips to run distillation at scale anyway.
"Illicitly distilled models lack critical safeguards," Anthropic warned, noting that stripped-out protections could enable "offensive cyber operations, disinformation campaigns, and mass surveillance" by authoritarian governments.
The Hydra Problem
Anthropic described the infrastructure enabling these attacks as "hydra cluster" architectures: sprawling networks with no single point of failure. Ban one account, and another spawns instantly. The proxy services reselling Claude access made detection exponentially harder by distributing traffic across Anthropic's API and third-party cloud platforms simultaneously.
When Anthropic released a new Claude model during MiniMax's active campaign, the lab pivoted within 24 hours, redirecting nearly half its traffic to capture the latest capabilities. That kind of operational agility suggests these aren't opportunistic attacks but sustained, well-resourced operations.
Anthropic’s Countermeasures
The company outlined several defensive measures: behavioral fingerprinting systems to detect distillation patterns, strengthened verification for educational and startup accounts (the most commonly exploited pathways), and model-level safeguards designed to degrade output quality for illicit extraction without affecting legitimate users.
Anthropic is sharing technical indicators with other AI labs, cloud providers, and government authorities. The message is clear: this requires industry-wide coordination.
For investors tracking AI infrastructure plays, this escalation adds another variable to the competitive landscape. Labs that can't defend their models risk watching their R&D investments walk out the door, 16 million queries at a time.
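To make "behavioral fingerprinting" concrete, here is a minimal sketch of the general idea, not Anthropic's actual system: combine per-account signals (request volume, prompt templating, reasoning-elicitation frequency) into a suspicion score. The feature names, weights, and threshold are all illustrative assumptions.

```python
# Illustrative sketch of distillation fingerprinting; the signals,
# weights, and threshold below are invented for demonstration.
from dataclasses import dataclass

@dataclass
class AccountStats:
    requests_per_hour: float
    template_reuse: float        # 0..1: fraction of near-duplicate prompts
    reasoning_elicitation: float # 0..1: fraction of "explain step-by-step" prompts

def distillation_score(s: AccountStats) -> float:
    """Blend the signals into a 0..1 suspicion score."""
    volume = min(s.requests_per_hour / 1000.0, 1.0)  # saturate at 1000 req/h
    return 0.4 * volume + 0.3 * s.template_reuse + 0.3 * s.reasoning_elicitation

def flag_account(s: AccountStats, threshold: float = 0.7) -> bool:
    return distillation_score(s) >= threshold

suspicious = AccountStats(requests_per_hour=2500, template_reuse=0.9,
                          reasoning_elicitation=0.8)
normal = AccountStats(requests_per_hour=12, template_reuse=0.1,
                      reasoning_elicitation=0.05)
```

A real system would face the adversarial dynamics described above: attackers who mix distillation traffic with legitimate requests are deliberately pushing these signals back toward the normal range, which is why Anthropic pairs detection with model-level safeguards.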
Image source: Shutterstock
