On November 13, 2025, Anthropic, the developer of the artificial intelligence (“AI”) model known as Claude, announced that it had detected and helped disrupt what it believes to be the first cyber espionage campaign conducted primarily by autonomous AI agents. Anthropic stated that it had “high confidence” that the campaign was orchestrated by a state-sponsored group, and described it as a “significant escalation” in the evolution of cybersecurity threats. Like the artificial intelligence in William Gibson’s Neuromancer, AI technology can now automate and assist complex attacks at scale, lowering the barrier to sophisticated hacking of computer systems. The incident is a reminder of the risks both to the developers of these technologies and to the businesses and individuals whose data may be at risk from malicious use of AI.
The material contained in this communication is informational, general in nature and does not constitute legal advice. The material contained in this communication should not be relied upon or used without consulting a lawyer to consider your specific circumstances. This communication was published on the date specified and may not include any changes in the topics, laws, rules or regulations covered. Receipt of this communication does not establish an attorney-client relationship. In some jurisdictions, this communication may be considered attorney advertising.