Anthropic says DeepSeek, Moonshot, and MiniMax used 24,000 fake accounts to rip off Claude

Anthropic Accuses Chinese AI Labs of Coordinated Campaigns to Siphon Intellectual Property

Anthropic has publicly accused three prominent Chinese AI laboratories—DeepSeek, Moonshot AI, and MiniMax—of orchestrating coordinated campaigns to siphon capabilities from its Claude models. The San Francisco-based company said the labs generated more than 16 million exchanges with Claude through roughly 24,000 fraudulent accounts, in violation of Anthropic’s terms of service. The campaigns center on the controversial practice of distillation, in which a competitor extracts knowledge from an advanced AI model to leapfrog years of research and investment.

The escalating tensions between American and Chinese AI developers have raised concerns about national security implications. Anthropic’s CEO, Dario Amodei, has been advocating for restricting chip sales to China, linking Monday’s revelations to this policy debate. The disclosure sheds light on how distillation, once an academic technique, has become a geopolitical flashpoint in the global AI race.

Understanding AI Distillation and Its Implications

Distillation involves transferring knowledge from a larger AI model into a smaller, more efficient one. While distillation is a legitimate and widely used training method, it can also be exploited by competitors to capture proprietary capabilities from a model they do not own. The disclosure of these coordinated distillation campaigns has sparked debate over export controls and the national security risks posed by illicitly distilled models.
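To make the technique concrete, below is a minimal NumPy sketch of the classic distillation objective: the student model is trained to minimize the KL divergence between its temperature-softened output distribution and the teacher's. This is an illustrative textbook formulation, not a description of the methods Anthropic alleges; API-level "distillation" typically means training on a model's generated text, since a third party has no access to the teacher's internal logits. All names below are illustrative.

```python
import numpy as np

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax; higher temperature softens the distribution."""
    z = np.asarray(logits, dtype=float) / temperature
    z -= z.max()  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL divergence between the teacher's and student's softened outputs,
    the quantity a student minimizes in classic knowledge distillation."""
    p = softmax(teacher_logits, temperature)  # teacher "soft targets"
    q = softmax(student_logits, temperature)  # student predictions
    return float(np.sum(p * (np.log(p) - np.log(q))))

# A student whose logits match the teacher's incurs zero loss;
# a mismatched student incurs a positive penalty that training reduces.
teacher = [4.0, 1.0, 0.5]
aligned = distillation_loss(teacher, [4.0, 1.0, 0.5])   # 0.0
mismatch = distillation_loss(teacher, [0.5, 1.0, 4.0])  # > 0
```

In practice the student minimizes this loss, usually blended with an ordinary supervised loss, over many teacher outputs; the soft targets carry more information per example than hard labels, which is why distillation can transfer capability so efficiently.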

Anthropic’s detailed account of the campaigns by DeepSeek, Moonshot AI, and MiniMax highlights how difficult intellectual property rights are to enforce in the AI landscape, where a model’s capabilities can be copied through nothing more than API access. The company’s national security framing underscores its call for coordinated action against such threats.

Implications for the AI Industry and Policy Landscape

Anthropic’s call for a multipronged defensive response reflects the urgency of addressing distillation attacks. The disclosure is expected to impact ongoing policy debates on chip export controls and national security measures. The AI industry must prioritize API security to safeguard against similar attacks and ensure the integrity of intellectual property.

As the industry navigates the unsettled legal landscape surrounding AI distillation, the emphasis on national security may shape both future policy decisions and industry practice. Anthropic’s disclosure serves as a wake-up call for AI labs to harden their security measures and to collaborate on mitigating intellectual property theft through distillation.