Anthropic reveals details of Chinese AI labs' attacks on its model
3 hours ago • Евгения Слив

Anthropic has accused three Chinese startups — DeepSeek, Moonshot, and MiniMax — of organizing a massive campaign to extract data from the Claude model. Using around 24,000 fraudulent accounts, they generated more than 16 million chat conversations to improve their own neural networks through distillation. The company emphasized that this practice violates its terms of service and regional restrictions, and allows competitors to replicate advanced capabilities without significant development costs.
According to Anthropic, illegally distilled models do not inherit the necessary safety mechanisms, which poses a direct threat to national security. Foreign laboratories can integrate the unprotected AI into systems used for cyberattacks, disinformation, and mass surveillance. The company has called on industry and regulators to take swift, coordinated action to counter such attacks.
To combat distillation, Anthropic is improving its pattern-detection systems, sharing technical indicators with partners, and tightening verification of academic and research accounts. The company also supports export restrictions on advanced chips, noting that the rapid progress of Chinese laboratories depends in large part on extracting capabilities from US models. Similar suspicions previously arose around DeepSeek after the release of its R1 model in January 2025.
