Hackers are circulating files that purportedly contain leaked source code from Anthropic's Claude AI system, with malware bundled inside the packages. Curiosity about a high-profile leak has thus become a direct security threat for anyone who downloads the files, according to reporting by Wired.
The tactic is a well-established one in cybercriminal circles: exploit public interest in a sensitive leak to distribute malicious payloads. What makes this instance notable is the target. Claude is one of the most commercially significant large language models in deployment, and its underlying code — or even convincingly labelled imitations of it — carries enough perceived value to lure developers, researchers, and competitors into taking the bait.
How the Malware Distribution Campaign Works
Rather than relying solely on phishing emails or compromised websites, the attackers are packaging the alleged Claude code with malicious files and posting them to platforms where technical users congregate. The approach exploits the credibility gap: someone downloading what they believe to be proprietary AI source code is unlikely to treat it with the same suspicion they might apply to an unsolicited email attachment.
Turning a high-profile AI leak into a malware delivery mechanism is a precise exploitation of developer curiosity — and it requires almost no technical sophistication from the attacker.
Anthropic has not publicly confirmed the authenticity or scope of any source code leak at the time of publication. Whether the files contain genuine Claude code, fabricated data, or a mixture of both is less relevant to the immediate threat than the fact that the malware payload is real regardless of what surrounds it.
The FBI's Wiretap Breach Adds a Separate Layer of Risk
The Claude malware campaign sits alongside two other significant incidents reported in the same Wired security roundup. The FBI has warned that a recent hack of its wiretap tools — systems used to conduct lawful surveillance under court order — poses a national security risk. The breach raises the possibility that foreign adversaries or criminal organisations now have visibility into who American law enforcement is monitoring, and potentially how.
Wiretap infrastructure is among the most sensitive data environments in any jurisdiction. A compromise at this level does not just expose individual investigations; it can burn sources, compromise ongoing operations, and reveal the technical methods used to intercept communications.
Cisco Source Code Stolen in Supply Chain Campaign
Separately, attackers stole source code belonging to Cisco as part of what Wired describes as an ongoing supply chain hacking spree. Supply chain attacks have become a defining threat vector of the current era — rather than breaching a single target directly, attackers compromise the software, tools, or code repositories that many organisations depend on, multiplying their potential reach dramatically.
Cisco's networking and security products are embedded in enterprise and government infrastructure globally. Stolen source code can enable attackers to identify undisclosed vulnerabilities, craft more convincing counterfeit software, or develop exploits that bypass security controls built around assumptions about how the code behaves.
The company has not disclosed the full scope of the theft, according to available reporting.
A Pattern of High-Value Code Theft
Viewed together, these three incidents — the Claude malware campaign, the FBI wiretap breach, and the Cisco source code theft — reflect a strategic shift in how sophisticated threat actors operate. Rather than targeting end-users directly, attackers are increasingly going after the infrastructure, tools, and intellectual property that underpin digital systems at scale.
For the AI sector specifically, the weaponisation of the Claude leak illustrates a threat that will grow as AI systems become more commercially valuable. Developers and researchers are, by professional disposition, inclined to examine leaked code. Attackers understand this. The human impact is direct: any individual who downloaded the malicious packages may have compromised their own machine, their employer's network, or sensitive project data.
Security researchers have long documented how social engineering attacks are most effective when they align with a target's existing motivations. A developer curious about a competitor's model architecture, or a researcher hoping to audit AI safety properties, represents the kind of motivated downloader that makes this campaign effective without requiring elaborate deception.
What This Means
Anyone in the AI development or research community who has sought out files related to a Claude source code leak should treat their device as potentially compromised and run a full security audit immediately — the malware risk is real regardless of whether the underlying code is authentic.
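One practical first step in such an audit, once security vendors publish indicator-of-compromise (IoC) hash lists for the campaign, is comparing downloaded files against those known-bad hashes. A minimal sketch of that check follows; the hash set here is purely hypothetical and stands in for a real threat-intelligence feed, which the original reporting does not name.

```python
import hashlib
from pathlib import Path


def sha256_of(path: Path, chunk_size: int = 65536) -> str:
    """Compute the SHA-256 digest of a file, reading it in chunks
    so large archives do not need to fit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()


def flag_known_bad(paths, known_bad_hashes):
    """Return the subset of paths whose SHA-256 digest appears in a
    set of published IoC hashes (hypothetical in this sketch)."""
    return [p for p in paths if sha256_of(p) in known_bad_hashes]
```

A hash match confirms a file is malicious, but the absence of a match proves nothing: attackers routinely repack payloads to change their hashes, so this check supplements, rather than replaces, a full antivirus scan and the device-level audit described above.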
