Anthropic alleges Chinese AI firms scraped Claude amid US chip export debate
Anthropic claims Chinese AI labs mined data from Claude as US officials weigh tighter AI chip export controls, escalating tensions in the global AI race.
Anthropic says three Chinese AI labs built a large network of fake accounts to access its Claude model and use the outputs to strengthen their own systems, bringing renewed attention to the issue of “distillation” at a moment when U.S. policy over advanced chip exports is under intense scrutiny.
According to Anthropic, the companies — DeepSeek, Moonshot AI, and MiniMax — created more than 24,000 fraudulent Claude accounts. Through those accounts, Anthropic claims the firms produced over 16 million back-and-forth interactions with Claude as part of a distillation effort. The company said these activities focused on what it considers Claude’s most distinctive strengths, including agentic reasoning, tool use, and coding.
The allegations arrive as U.S. officials, lawmakers, and industry leaders continue debating how tightly the government should enforce controls on exports of advanced AI chips — rules intended to limit China’s ability to scale frontier AI development.
Distillation itself is a standard technique in AI development. Labs often use it internally to build smaller, cheaper models that retain much of the performance of a larger model. But when a competitor uses distillation against another company’s model, it can function as a shortcut — effectively leveraging another lab’s outputs as training signals. Earlier this month, OpenAI sent a memo to House lawmakers alleging that DeepSeek used distillation methods to imitate OpenAI’s products.
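Anthropic's post does not describe the labs' methods, but the classic form of the technique trains a smaller "student" model to match a larger "teacher" model's softened output distribution. The sketch below (all names illustrative, using a temperature-scaled KL-divergence loss) shows that core idea:

```python
import numpy as np

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax over the last axis."""
    z = logits / temperature
    z = z - z.max(axis=-1, keepdims=True)  # for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """KL divergence between teacher and student soft distributions.

    Training the student to minimize this loss pushes its predictions
    toward the teacher's; a higher temperature exposes more of the
    teacher's relative preferences among wrong answers.
    """
    p_teacher = softmax(teacher_logits, temperature)
    p_student = softmax(student_logits, temperature)
    kl = np.sum(p_teacher * (np.log(p_teacher) - np.log(p_student)), axis=-1)
    return float(np.mean(kl) * temperature ** 2)  # conventional T^2 scaling

# A student whose logits already match the teacher incurs zero loss;
# a mismatched student incurs a positive loss.
teacher = np.array([[2.0, 0.5, -1.0]])
print(distillation_loss(teacher.copy(), teacher))          # 0.0
print(distillation_loss(np.zeros_like(teacher), teacher))  # > 0
```

In practice, distilling from a competitor's API works differently: the teacher's internal logits are not exposed, so a lab would fine-tune on collected prompt-and-response text pairs instead. The logits-based loss above is the textbook version of the technique.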
DeepSeek became widely known about a year ago when it released its open-source R1 reasoning model, which performed close to U.S. frontier systems while costing far less to run. DeepSeek is also expected to release DeepSeek V4 soon, a model reported to outperform Anthropic's Claude and OpenAI's ChatGPT on coding tasks.
Anthropic says the three alleged efforts differed in both size and what capabilities they appeared designed to extract.
The company tracked more than 150,000 exchanges linked to DeepSeek, which Anthropic believes were aimed at strengthening foundational logic and alignment, including work around developing censorship-safe responses to policy-sensitive prompts.
Anthropic says Moonshot AI generated more than 3.4 million exchanges focused on agentic reasoning and tool use, as well as coding, data analysis, agent development for computer use, and computer vision. Last month, Moonshot released a new open-source model, Kimi K2.5, along with a coding agent.
In Anthropic's account, MiniMax produced the largest volume — around 13 million exchanges — aimed at agentic coding, tool use, and orchestration. Anthropic also claims it observed MiniMax shifting behavior when Claude's newest model launched, saying the company redirected nearly half of its traffic to extract capabilities from the updated system as soon as it became available.
Anthropic says it will keep investing in safeguards that make distillation attempts more difficult to carry out and easier to detect. At the same time, the company is urging a “coordinated response across the AI industry, cloud providers, and policymakers.”
These claims come amid a heated dispute over U.S. chip exports to China. Last month, the Trump administration formally allowed U.S. firms such as Nvidia to export advanced AI chips — including models like the H200 — to China. Critics of the move say easing restrictions expands China's AI computing capacity at a crucial stage in the global competition for AI leadership.
Anthropic argues that the alleged distillation at this scale would not be possible without significant computing. The company says the amount of extraction attributed to DeepSeek, Moonshot, and MiniMax “requires access to advanced chips.”
“Distillation attacks therefore reinforce the rationale for export controls: restricted chip access limits both direct model training and the scale of illicit distillation,” Anthropic wrote in its blog post.
Dmitri Alperovitch, chairman of the Silverado Policy Accelerator think tank and the co-founder and former CTO of CrowdStrike, said that he is not surprised by the accusations.
“It’s been clear for a while now that part of the reason for the rapid progress of Chinese AI models has been theft via distillation of U.S. frontier models. Now we know this for a fact,” Alperovitch said. “This should give us even more compelling reasons to refuse to sell any AI chips to any of these [companies], which would only advantage them further.”
Anthropic also argues the issue goes beyond competitive concerns, warning that distillation could create broader security risks. The company says it, along with other U.S. AI developers, builds systems designed to prevent both state and non-state actors from using models for activities such as developing biological weapons or conducting malicious cyber operations. In Anthropic's view, models produced through illicit distillation may not retain those safety guardrails, meaning dangerous capabilities could spread as protections are weakened or removed.
“Anthropic and other U.S. companies build systems that prevent state and non-state actors from using AI to, for example, develop bioweapons or carry out malicious cyber activities,” Anthropic’s blog post says. “Models built through illicit distillation are unlikely to retain those safeguards, meaning that dangerous capabilities can proliferate with many protections stripped out entirely.”
The company also pointed to the risk that authoritarian regimes could use frontier AI for purposes such as “offensive cyber operations, disinformation campaigns, and mass surveillance.” It said those concerns could intensify if powerful models are open-sourced and widely distributed.