Anthropic Is Scaling Like Crazy - But Its Own AI Might Be the Biggest Threat It Has Created

Anthropic triples revenue to $30B and signs multi-gigawatt compute deal with Google, while Claude Mythos leaks and raises cybersecurity alarms across the industry.

Anthropic had one of the most consequential weeks in AI history. Here's what it means.

Last week was supposed to be a victory lap for Anthropic. The company announced a massive compute partnership with Google and Broadcom, revealed that its revenue run rate has tripled to $30 billion, and confirmed over 1,000 enterprise customers now each spend more than $1 million annually on Claude.

Instead, the conversation was hijacked by Claude Mythos - Anthropic's most powerful AI model, which was accidentally leaked online and immediately raised alarms across the cybersecurity community.

The Compute Deal That Changes Everything

Let's start with the headline numbers, because they're staggering. Anthropic signed an agreement with Google and Broadcom for multiple gigawatts of next-generation TPU capacity, expected to come online starting in 2027. It is the single largest compute commitment in Anthropic's history.

According to the official announcement, Anthropic's revenue run rate has surged from approximately $9 billion at the end of 2025 to over $30 billion today. The number of enterprise customers spending seven figures annually has doubled from 500 to over 1,000 in less than two months. CFO Krishna Rao described this as "unprecedented growth" requiring the company's "most significant compute commitment to date."

What's particularly interesting is Anthropic's hardware-agnostic strategy. The company trains and runs Claude across AWS Trainium, Google TPUs, and NVIDIA GPUs, deliberately avoiding single-vendor lock-in. This diversification is becoming a competitive moat - while competitors like OpenAI remain tightly coupled to Microsoft's infrastructure, Anthropic can route workloads to whichever platform offers the best price-performance ratio at any given moment.

The Mythos Problem Nobody Wanted

Then came Claude Mythos. According to a CNN report, Anthropic's latest model - designed to identify security vulnerabilities in software - leaked through an internal content management error before its controlled release.

Anthropic's own internal assessment, which was inadvertently published, described Mythos as "far surpassing other AI models in cyber capabilities" and capable of exploiting vulnerabilities "at unprecedented speed." The company had reportedly been quietly warning U.S. government officials about the potential for large-scale AI-powered cyberattacks.

The cybersecurity community's reaction was swift and divided. Some experts called it a "watershed moment" - a single AI agent that can scan for and exploit vulnerabilities faster than hundreds of human hackers. Others pointed out that the same capabilities that make Mythos dangerous in the wrong hands are exactly what enterprises need to defend their own systems.

Anthropic's response was characteristically measured. The company limited Mythos's rollout, implemented additional safety guardrails, and urged organizations to "put up defenses first" before the model becomes more widely available. It's the kind of nuanced position that has become Anthropic's trademark - acknowledging the reality of dual-use AI capabilities while trying to manage the timeline responsibly.

The Bigger Picture

What connects Anthropic's $30 billion revenue milestone, its multi-gigawatt compute deal, and the Mythos controversy is a single thread: AI capabilities are scaling faster than our ability to govern them.

The company is simultaneously building the infrastructure to train increasingly powerful models while acknowledging - in its own research and public statements - that each generation of AI brings new risks that existing frameworks aren't designed to handle. Anthropic's acquisition of Coefficient Bio for $400 million, as reported by The Information, further extends Claude's reach into healthcare and biotechnology, adding another domain where AI safety isn't just a technical problem but a life-or-death one.

For enterprises watching the Anthropic story unfold, the takeaway is clear: AI safety work isn't slowing the technology down. It's being built alongside it, sometimes messily, often controversially, but undeniably at scale. The question isn't whether models like Claude Mythos will reshape industries - it's whether the guardrails being built around them will prove adequate.

Anthropic is betting $30 billion in revenue and gigawatts of compute that the answer is yes. The next few months will tell us if that bet is wise or dangerously optimistic.


Sources: Anthropic Official, CNN, The Information, CNBC