Anthropic complained to the US government about the CEO of a Chinese company who took the stage at Nvidia's annual conference; now the US State Department has ordered …

The US State Department has ordered a global push to bring attention to what it says are widespread efforts by Chinese companies, including AI startup DeepSeek, to steal intellectual property from US artificial intelligence labs. The cable, dated Friday and sent to diplomatic and consular posts around the world, instructs diplomatic staff to speak to their foreign counterparts about “concerns over adversaries’ extraction and distillation of US AI models.” For those unaware, distillation is the process of training smaller AI models on the output of larger, more expensive ones, as a way to lower the cost of training a powerful new AI tool.

The US government’s order follows a complaint in which Anthropic accused three prominent Chinese AI companies of using its Claude chatbot on a massive scale to secretly train rival models. In a blog post, San Francisco–based Anthropic alleged that Chinese labs DeepSeek, Moonshot AI, and MiniMax violated its terms of service by interacting with Claude, its market-reshaping vibe-coding tool. Notably, among the companies that Anthropic and the US government have accused is Moonshot AI, whose CEO and founder Zhilin Yang was a speaker at Nvidia’s GTC 2026, the chipmaker’s biggest annual event of the year.
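To make the concept concrete: distillation typically trains a smaller "student" model to match the softened output probabilities of a larger "teacher" model. The sketch below, in Python with NumPy, shows the core loss computation; the function names and example numbers are illustrative only and do not come from any lab's actual pipeline.

```python
import numpy as np

def softmax(logits, temperature=1.0):
    """Convert raw logits to probabilities, optionally softened by a temperature."""
    z = logits / temperature
    z = z - z.max()            # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL divergence between the teacher's softened distribution and the student's.

    Minimizing this loss pulls the student's outputs toward the teacher's,
    which is the core mechanism of knowledge distillation.
    """
    p = softmax(teacher_logits, temperature)   # teacher's "soft labels"
    q = softmax(student_logits, temperature)
    return float(np.sum(p * (np.log(p) - np.log(q))))

# A student whose logits match the teacher's incurs near-zero loss;
# a mismatched student incurs a clearly positive loss.
teacher = np.array([4.0, 1.0, 0.5])
matched = distillation_loss(teacher, np.array([4.0, 1.0, 0.5]))
mismatched = distillation_loss(teacher, np.array([0.5, 1.0, 4.0]))
```

In practice the teacher's outputs would be collected by querying the larger model at scale, which is why API terms of service often restrict using responses to train competing models.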

What Anthropic’s complaint said about Moonshot AI

In its public blog post, Anthropic said, “We have identified industrial-scale campaigns by three AI laboratories — DeepSeek, Moonshot, and MiniMax — to illicitly extract Claude’s capabilities to improve their own models. These labs generated over 16 million exchanges with Claude through approximately 24,000 fraudulent accounts, in violation of our terms of service and regional access restrictions.”

The blog post also went on to describe the technique Moonshot AI used. “The three distillation campaigns detailed below followed a similar playbook, using fraudulent accounts and proxy services to access Claude at scale while evading detection. The volume, structure, and focus of the prompts were distinct from normal usage patterns, reflecting deliberate capability extraction rather than legitimate use,” it said.

For Moonshot AI, Anthropic put the scale at over 3.4 million exchanges. The operation targeted:

Agentic reasoning and tool use
Coding and data analysis
Computer-use agent development
Computer vision

According to Anthropic, Moonshot (maker of the Kimi models) employed hundreds of fraudulent accounts spanning multiple access pathways, and the varied account types made the campaign harder to detect as a coordinated operation. Anthropic said it attributed the campaign through request metadata, which matched the public profiles of senior Moonshot staff. In a later phase, Moonshot used a more targeted approach, attempting to extract and reconstruct Claude’s reasoning traces.
