

On 23 February 2026, the artificial intelligence industry was rocked by a major public controversy initiated by the US-based AI firm Anthropic. The company formally accused three prominent Chinese AI laboratories (DeepSeek, Moonshot AI, and MiniMax) of orchestrating what it termed an “industrial-scale distillation attack” against its flagship large language model, Claude. The allegations have immediately intensified global tensions surrounding intellectual property (IP), security, and competitive strategy in a relentlessly competitive and rapidly advancing AI landscape. The incident not only highlights a new frontier in cyber-economic warfare but also raises critical questions about ethical practice and the national security risks associated with the development of frontier AI models.
The Scale of the Alleged Attack
Anthropic’s detailed report lays out a systematic, coordinated effort by the three accused companies to illicitly exploit Claude’s outputs with the explicit goal of rapidly training and advancing their own competing models. The scale of the alleged operation is staggering: Anthropic estimates that the rival labs collectively generated more than 16 million interactions with Claude, allegedly executed through a network of approximately 24,000 fraudulent accounts (roughly 670 interactions per account on average). This level of activity represents a clear and significant violation of Anthropic’s terms of service, as well as of the regional access restrictions put in place to protect its technology.
Technical Core: Model Distillation and Targeted IP Theft
The technical core of the accusation centres on the practice of model distillation. In principle, distillation is a legitimate technique in which a smaller, more efficient “student” model is trained to mimic the complex, high-quality outputs of a larger, more powerful “teacher” model (in this case, Claude). However, Anthropic asserts that the scale, the unauthorised nature, and the methodology of the accused firms’ campaign transgressed both legal boundaries and accepted ethical standards. The firm specifically claims that the attackers focused their efforts on Claude’s most advanced and commercially valuable capabilities: sophisticated agentic reasoning, cutting-edge coding proficiency, and complex tool use. These features represent hundreds of millions of pounds in research investment by Anthropic.
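For readers unfamiliar with the mechanics, the standard (legitimate) form of distillation trains the student on the teacher’s temperature-softened output distribution rather than on hard labels. The following is a minimal illustrative sketch of that loss, with made-up logits; it is not code from Anthropic or any of the accused labs.

```python
import math

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax: a higher temperature yields softer targets,
    exposing more of the teacher's relative preferences between tokens."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """KL divergence between the teacher's soft targets and the
    student's predicted distribution, the classic distillation objective."""
    p = softmax(teacher_logits, temperature)  # teacher's soft targets
    q = softmax(student_logits, temperature)  # student's predictions
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Hypothetical per-token logits for a three-token vocabulary.
teacher = [4.0, 1.0, 0.2]
student = [2.5, 1.5, 0.5]
loss = distillation_loss(student, teacher)  # positive; shrinks as the student converges
```

In a real training loop this loss would be minimised over millions of teacher outputs, which is precisely why large-scale, unauthorised harvesting of a frontier model’s responses is commercially valuable.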
Evidence and Sophistication of Attackers
The report provides specific, granular evidence of the alleged illicit activity. DeepSeek, for example, is accused of performing more than 150,000 interactions narrowly focused on eliciting complex, multi-step reasoning from Claude, suggesting a targeted effort to harvest “chain-of-thought” training data, which is essential for developing highly capable AI systems. The attackers also demonstrated a high degree of technical sophistication and organisation in their evasion tactics: they allegedly adapted rapidly to new versions of the Claude model and used proxy services and load-balanced networks to mask their origins and evade Anthropic’s internal security measures, underscoring the premeditated, professional nature of the operation.
Global Ramifications and Safety Concerns
The public disclosure of Anthropic's claims has sent profound shockwaves throughout the global AI community, immediately intensifying already fractious debates over intellectual property protection and the intersection of AI with national security. Anthropic issued a strong warning that models developed through such large-scale, unauthorised distillation may lack crucial safety guardrails. Without the proper ethical and safety fine-tuning inherent in legitimate model development, these distilled systems, according to Anthropic, would be more prone to misuse, potentially facilitating the rapid generation of highly potent malware, advanced phishing campaigns, or widespread, sophisticated disinformation.
Bypassing Export Controls
Moreover, the incident highlights a critical and potentially dangerous loophole in current US-led export controls on advanced AI chips. Those restrictions focus primarily on preventing Chinese entities from acquiring the specialised hardware (such as NVIDIA GPUs) needed to train a frontier model from scratch. A distillation attack, however, offers a way to access the intelligence of a frontier model by exploiting its outputs, effectively bypassing the hardware restriction. The situation is further complicated by the overarching geopolitical technology competition that pits Western AI sectors against their Chinese counterparts. While the accused firms have not yet issued formal public responses to Anthropic’s specific allegations, previous statements from DeepSeek have included denials of using synthetic outputs from other frontier models for internal training.
Call for Enhanced Governance and International Norms
Leading AI policy experts and governance scholars argue that this high-stakes episode underscores the urgent need for enhanced AI model governance and robust security protocols. As the commercial value and strategic importance of large language models continue to soar, the implementation of sophisticated protective mechanisms, such as real-time behavioural fingerprinting, multi-factor authentication, and advanced anomaly detection systems, is no longer optional but essential for the survival of proprietary AI firms.
There is a growing international call for the establishment of norms and cooperative frameworks specifically designed to regulate the acceptable use of AI model outputs and the ethical sourcing of training data. Without the rapid development and adoption of such agreements, the global AI ecosystem faces a significant risk of fragmentation, as companies will increasingly be forced to retreat behind tighter proprietary defences and closed-off systems.
Anthropic's Response and Future Policy
In direct response to the attack, Anthropic has already implemented a series of aggressive countermeasures, including the aforementioned behavioural fingerprinting techniques and much stricter account verification protocols. Concurrently, the company has announced its intention to advocate for stronger policy interventions at the national and international levels. The incident serves as a stark illustration of the high stakes in modern AI development and of the interplay between fierce commercial rivalry, escalating national security concerns, and the fundamental need for robust ethical governance. The “industrial-scale distillation attack” represents a new and challenging chapter in the global race for artificial intelligence supremacy.
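The “behavioural fingerprinting” mentioned above can take many forms; one common building block is comparing the overlap between accounts’ prompt content to surface coordinated networks. The sketch below uses character shingles and Jaccard similarity, with invented account names, prompts, and threshold, purely to illustrate the concept rather than Anthropic’s actual method.

```python
def shingles(text, k=5):
    """Character k-grams: a cheap, order-sensitive fingerprint of a prompt."""
    text = text.lower()
    return {text[i:i + k] for i in range(len(text) - k + 1)}

def jaccard(a, b):
    """Set overlap in [0, 1]; 1.0 means identical fingerprints."""
    return len(a & b) / len(a | b) if a | b else 0.0

def linked_accounts(prompts_by_account, threshold=0.6):
    """Pair up accounts whose pooled prompt fingerprints overlap heavily,
    a simple proxy for spotting a coordinated account network."""
    fingerprints = {acct: set().union(*(shingles(p) for p in prompts))
                    for acct, prompts in prompts_by_account.items()}
    accounts = sorted(fingerprints)
    return [(a, b)
            for i, a in enumerate(accounts) for b in accounts[i + 1:]
            if jaccard(fingerprints[a], fingerprints[b]) >= threshold]

# Hypothetical logs: two accounts reuse a near-identical prompt template.
logs = {
    "a1": ["explain step by step how to refactor this python function"],
    "a2": ["explain step by step how to refactor this python function please"],
    "b1": ["write a haiku about the sea"],
}
pairs = linked_accounts(logs)  # links a1 and a2, leaves b1 unlinked
```

Templated, near-duplicate prompting is precisely the signature one would expect from an automated distillation campaign run across thousands of accounts, which is why this class of defence is effective against it.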