On November 13, 2025, Anthropic, the developer of the artificial intelligence (“AI”) model known as Claude, announced that it had detected and helped disrupt what it believes to be the first cyber espionage campaign orchestrated primarily by autonomous AI agents. Anthropic stated that it had “high confidence” the campaign was carried out by a state-sponsored group, and described it as a “significant escalation” in the evolution of cybersecurity threats. Like the artificial intelligence in William Gibson’s Neuromancer, AI technology can now automate and assist complex attacks at scale, lowering the barrier to sophisticated hacking of computer systems. The incident is a reminder of the risks facing both the developers of these technologies and the businesses and individuals whose data may be targeted through malicious use of AI.