Anthropic leaks secret Claude AI code in human-error mishap
Amazon-backed Anthropic leaked 500,000 lines of internal Claude Code due to human error, exposing unreleased features and performance data in a major security lapse.
AI giant Anthropic, backed by Amazon, has mistakenly published highly confidential internal code, triggering a viral wave of copies on GitHub and potentially inflicting catastrophic commercial damage. The developer of the Claude chatbot described the incident as a release issue "caused by human error, not a security breach," according to US technology news website VentureBeat on Tuesday.
The leak involved over 500,000 lines of code linked to Claude Code, Anthropic's AI coding assistant, which helps users write and manage software through natural-language commands, according to Axios and The Verge. The material included unreleased features, performance data, and developer notes. The code spread rapidly online, with copies posted to the code-sharing platform GitHub and replicated thousands of times within hours, according to Ars Technica and The Verge.
Anthropic moved to remove the material and issued takedown notices, but it had already been widely copied and circulated, the reports said. According to VentureBeat, by exposing the "blueprints" of Claude Code, the leak may have given "bad actors" a "road map" for bypassing security checks or tricking the tool into running hidden commands or accessing data without the user's knowledge.
Anthropic was designated a "risk to national security" by US Defense Secretary Pete Hegseth in February, following disagreements with the Pentagon over the use of its artificial intelligence systems. A separate data leak reported in February exposed internal materials detailing Anthropic's unreleased model, known as Claude Mythos, after thousands of draft documents were left accessible in a public data cache.