Anthropic is accusing three Chinese AI companies of setting up more than 24,000 fake accounts with its Claude AI model to improve their own models.
The labs (DeepSeek, Moonshot AI, and MiniMax) allegedly generated more than 16 million exchanges with Claude through these accounts using a technique known as “distillation.” Anthropic said the labs “targeted Claude’s most differentiated capabilities: agentic reasoning, tool use, and coding.”
The accusations come amid debate over how strictly to enforce export controls on advanced AI chips, a policy aimed at curbing China’s AI development.
Distillation is a common training method that AI labs use on their own models to create smaller, cheaper versions, but competitors can use it to essentially copy other labs’ homework. OpenAI sent a memo to House lawmakers earlier this month accusing DeepSeek of using distillation to mimic its products.
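In its classic form, distillation trains a smaller “student” model to match the output distribution of a larger “teacher” model rather than hard labels. A minimal sketch of the core loss term, purely illustrative and not any lab’s actual pipeline:

```python
import math

def softmax(logits, temperature=1.0):
    """Convert raw logits to probabilities, optionally softened by a temperature."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL divergence between the teacher's softened outputs and the student's.

    Minimizing this pushes the student to reproduce the teacher's behavior,
    which is why large volumes of teacher outputs are so valuable.
    """
    p = softmax(teacher_logits, temperature)  # teacher's "soft labels"
    q = softmax(student_logits, temperature)  # student's predictions
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# A student whose logits already match the teacher's incurs zero loss;
# a mismatched student incurs a positive loss it can train against.
teacher = [2.0, 1.0, 0.1]
matched = distillation_loss(teacher, [2.0, 1.0, 0.1])
mismatched = distillation_loss(teacher, [0.1, 1.0, 2.0])
```

The same idea is why harvesting millions of a frontier model’s responses works as an attack: the responses stand in for the teacher’s outputs, even without access to its weights.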
DeepSeek first made waves a year ago when it released its open-source R1 reasoning model, which nearly matched American frontier labs in performance at a fraction of the cost. DeepSeek is expected to soon release DeepSeek V4, its latest model, which reportedly outperforms Anthropic’s Claude and OpenAI’s ChatGPT in coding.
Each attack differed in scope. Anthropic tracked more than 150,000 exchanges from DeepSeek that appeared aimed at improving foundational logic and alignment, especially around censorship-safe alternatives to policy-sensitive queries.
Moonshot AI had more than 3.4 million exchanges targeting agentic reasoning and tool use, coding and data analysis, computer-use agent development, and computer vision. Last month, the firm released a new open-source model, Kimi K2.5, and a coding agent.
MiniMax’s 13 million exchanges targeted agentic coding along with tool use and orchestration. Anthropic said it was able to observe MiniMax in action as it redirected nearly half its traffic to siphon capabilities from the latest Claude model when it was released.
Anthropic says it will continue to invest in defenses that make distillation attacks harder to execute and easier to identify, but it is calling for “a coordinated response across the AI industry, cloud providers, and policymakers.”
The distillation attacks come at a time when American chip exports to China remain hotly debated. Last month, the Trump administration formally allowed U.S. companies like Nvidia to export advanced AI chips (like the H200) to China. Critics have argued that this loosening of export controls increases China’s AI computing capacity at a critical time in the global race for AI dominance.
Anthropic says that the scale of extraction DeepSeek, MiniMax, and Moonshot carried out “requires access to advanced chips.”
“Distillation attacks therefore reinforce the rationale for export controls: restricted chip access limits both direct model training and the scale of illicit distillation,” per Anthropic’s blog.
Dmitri Alperovitch, chairman of the Silverado Policy Accelerator think tank and co-founder of CrowdStrike, told TechCrunch he’s not surprised to see these attacks.
“It’s been clear for a while now that part of the reason for the rapid progress of Chinese AI models has been theft via distillation of US frontier models. Now we know this for a fact,” Alperovitch said. “This should give us even more compelling reasons to refuse to sell any AI chips to any of these [companies], which would only benefit them further.”
Anthropic also said distillation doesn’t just threaten to undercut American AI dominance; it could also create national security risks.
“Anthropic and other U.S. companies build systems that prevent state and non-state actors from using AI to, for example, develop bioweapons or carry out malicious cyber activities,” reads Anthropic’s blog post. “Models built by illicit distillation are unlikely to retain these safeguards, meaning that dangerous capabilities can proliferate with many protections stripped out entirely.”
Anthropic pointed to authoritarian governments deploying frontier AI for things like “offensive cyber operations, disinformation campaigns, and mass surveillance,” a risk that is multiplied if those models are open-sourced.
TechCrunch has reached out to DeepSeek, MiniMax, and Moonshot for comment.

