Anthropic has quietly unveiled Claude Mythos Preview, a vulnerability-seeking artificial intelligence model the company says is too dangerous for public release. The move has sharpened concerns that crypto infrastructure faces a near-term, machine-speed security threat capable of inflicting sudden, irreversible losses measured in the hundreds of millions or even billions of dollars.

Technology Overview

The risk profile for Web3 is shifting from hypothetical future cryptographic failure to immediate software-layer exposure. Security executives warn that an autonomous system able to identify deep logic flaws across smart contracts, wallets, and cross-chain bridges could outpace human auditors and conventional tooling. Deddy David, chief executive of blockchain security firm Cyvers, frames the scale starkly: if AI pinpoints vulnerabilities across core internet infrastructure, crypto is among the first markets likely to feel the impact—both because of the value at stake and because transactions settle instantly and irreversibly.

According to the company, Mythos Preview has already surfaced weaknesses spanning every major web browser and operating system, including a 27-year-old bug in a critical piece of security infrastructure and multiple flaws within the Linux kernel. The UK government’s AI Security Institute (AISI) has echoed the concern, noting that its evaluation indicates the model—or successors—could be directed to autonomously compromise small, weakly defended systems if granted network access. Together, these findings underscore the core danger: AI compresses the window between vulnerability discovery and exploit development from months of specialized human work to machine timescales.

How It Works

Mythos Preview represents an offensive capability tailored for vulnerability discovery rather than superficial social engineering. Rather than generating marginally better phishing lures or isolated code snippets, the model is designed to probe for emergent weaknesses—interactions and edge cases that arise when complex components meet. In the crypto stack, this means scrutinizing upgradeable contracts for subtle state inconsistencies, tracing cross-module interactions that only fail under rare conditions, and stress-testing bridge logic that coordinates across chains.
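The "subtle state inconsistencies" in upgradeable contracts mentioned above can be made concrete with a toy sketch. The Python model below is hypothetical and drastically simplified (real upgradeable contracts on Ethereum keep state in the proxy's storage slots, while logic contracts only define where each variable lives), but it shows how a storage-layout change between versions silently corrupts meaning without any syntax error:

```python
# Hypothetical, simplified model of a proxy-upgrade storage-layout bug.
# All names here are illustrative, not a real contract framework.

class Proxy:
    """Holds raw storage; delegates logic to whichever version is installed."""
    def __init__(self, logic):
        self.storage = {}          # slot index -> value, survives upgrades
        self.logic = logic

    def call(self, fn, *args):
        return getattr(self.logic, fn)(self.storage, *args)

class LogicV1:
    # V1 layout: slot 0 = owner, slot 1 = paused flag
    @staticmethod
    def init(storage, owner):
        storage[0] = owner
        storage[1] = False         # not paused

    @staticmethod
    def owner(storage):
        return storage[0]

class LogicV2:
    # V2 inserted a new variable at slot 0 and shifted the rest down,
    # breaking the mapping between old data and new meaning.
    # V2 layout: slot 0 = fee_rate, slot 1 = owner, slot 2 = paused
    @staticmethod
    def owner(storage):
        return storage.get(1)      # reads the OLD paused flag as "owner"

proxy = Proxy(LogicV1)
proxy.call("init", "alice")
assert proxy.call("owner") == "alice"

proxy.logic = LogicV2              # upgrade without migrating storage
print(proxy.call("owner"))         # prints False, not "alice": state desync
```

No single version is wrong in isolation; the flaw only emerges from the interaction between old state and new code, which is exactly the class of bug that escapes line-by-line review.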

That approach aligns uncomfortably well with how decentralized systems are built. Crypto infrastructure is largely public by design: open-source codebases, browser-based wallets, remote procedure call layers, and smart contracts that anyone can inspect. This transparency aids innovation and community review, but it also creates a vast, continuously available attack surface for autonomous models to analyze. Compounding the exposure is a structural mismatch—lean protocol teams maintain aging codebases that secure hundreds of millions in total value locked, expanding the blast radius of any overlooked flaw.

Nansen chief executive Alex Svanevik captures the difference: Mythos is already finding issues in the broader infrastructure that eluded humans and automated tools for decades. In crypto, where rapid value transfer and immutable settlement are features, not bugs, the time from identification to exploitation can be measured in seconds. As Cyvers’ David puts it, pairing AI-accelerated discovery with instant, irreversible transactions shortens the path from bug to breach to loss, turning time-to-exploit into the defining risk variable.

Industry Impact

Traditional financial networks are insulated by closed, proprietary systems, centralized controls, and circuit breakers that can halt or unwind problematic activity. By contrast, decentralized finance relies on code-governed mechanisms that execute as written, in public, with no built-in rescues once funds move. That difference matters when autonomous models reduce discovery and weaponization cycles: pre-incident prevention and continuous hardening become the only reliable defenses, while post-incident response is limited to damage containment.

Within this context, the most exposed layers are precisely those where complexity clusters: sophisticated smart contracts and the bridges that knit chains together. These components fail not because of obvious syntax errors but due to emergent behaviors—state desynchronization between upgradeable contracts or intricate inter-module interactions that standard reviews do not catch. Large-scale AI simulations can traverse these paths at a breadth and speed humans cannot match, creating an asymmetry in favor of first movers with capable models.

Despite the implications, crypto markets appear to be underpricing the risk posed by frontier, offense-capable AI. Investors have spent years refining a vocabulary for “Q-Day” and the possibility of broken cryptography; the quieter prospect of audit oversights, compromised wallet dependencies, and multi-step exploit chains commands less attention, even though those risks are materializing in today’s software layer.

The Defensive Race

Recognizing the urgency, Anthropic has launched Project Glasswing with partners including Amazon Web Services, Google, Microsoft, and JPMorgan Chase to deploy Mythos Preview against critical systems defensively. The company is allocating up to $100 million in usage credits to patch weaknesses before malicious actors develop similar capabilities. The concern has reached policymakers as well; Federal Reserve Chairman Jerome Powell and Treasury Secretary Scott Bessent recently convened a surprise meeting with leading US bank chief executives to examine the systemic risks posed by models like Mythos.

Crypto platforms are moving to secure a seat inside this defensive perimeter. Major exchanges such as Coinbase and Binance are in close contact with Anthropic to seek early access to the model. Decentralized players are signaling similar urgency: Uniswap founder Hayden Adams has publicly requested an opportunity to test Mythos Preview against the protocol, which secures more than $3 billion in assets. Proponents argue that, in the right hands, the toolset could become the most effective auditing capability yet developed for smart contracts, shifting security from periodic, manual reviews to continuous, AI-assisted analysis.
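What a shift from periodic manual review to continuous, machine-driven analysis might look like structurally can be sketched as a loop that fuzzes random operation sequences against a contract model and checks an invariant after every step. The example below is a deliberately tiny illustration with hypothetical names; real AI-assisted auditors search vastly larger state spaces and richer invariants:

```python
import random

# Illustrative sketch of continuous invariant checking: fuzz random
# operation sequences against a simplified pool model and verify a
# conservation property after every step.

class TokenPool:
    def __init__(self):
        self.balances = {}
        self.total_supply = 0

    def deposit(self, user, amount):
        self.balances[user] = self.balances.get(user, 0) + amount
        self.total_supply += amount

    def withdraw(self, user, amount):
        held = self.balances.get(user, 0)
        take = min(amount, held)       # never withdraw more than held
        self.balances[user] = held - take
        self.total_supply -= take

def invariant_holds(pool):
    # Conservation: the supply counter must equal the sum of balances.
    return pool.total_supply == sum(pool.balances.values())

def fuzz(seed, steps=200):
    rng = random.Random(seed)
    pool = TokenPool()
    users = ["a", "b", "c"]
    for i in range(steps):
        op = rng.choice([pool.deposit, pool.withdraw])
        op(rng.choice(users), rng.randint(1, 100))
        if not invariant_holds(pool):
            return f"invariant violated at step {i}"
    return "ok"

print(fuzz(seed=42))  # prints "ok": this toy pool conserves supply
```

The point of the structure is that checking runs constantly rather than at audit time, so a regression introduced by an upgrade surfaces on the next fuzzing pass instead of after an exploit.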

In parallel, OpenAI has expanded availability of a more cyber-permissive model, GPT-5.4-Cyber, through its Trusted Access for Cyber program, enabling vetted security vendors to stress-test their own systems. Together, these moves suggest the industry is entering an era where defense must operate at the same machine speed as offense.

Future Implications

Even as AI elevates near-term software risk, quantum computing remains the longer-horizon threat to cryptographic foundations. Google Research has warned that future machines may break the elliptic-curve cryptography used across crypto with fewer resources than previously estimated, and that a sufficiently powerful cryptanalytically relevant quantum computer could derive private keys from public keys within minutes. With Bitcoin trading near $70,000, the asset class represents a multi-trillion-dollar bounty, and estimates indicate that as much as 37% of circulating Bitcoin could be exposed to an attacker able to derive a private key before the network confirms a transaction.
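The one-way relationship such an attack would break can be sketched on a deliberately tiny, insecure toy curve (the parameters below are illustrative, nothing like the 256-bit curves crypto actually uses): deriving a public key from a private key is fast, while recovering the private key classically requires exhaustive search over the group, which is exactly the discrete-log problem a quantum computer running Shor's algorithm would solve efficiently.

```python
# Toy elliptic curve y^2 = x^3 + 2x + 3 over GF(97) -- insecure,
# illustrative parameters only; real curves use ~256-bit group orders,
# making the brute-force search below infeasible classically.
P, A, B = 97, 2, 3
G = (1, 43)                 # a point on this toy curve

def add(p1, p2):
    """Elliptic-curve point addition (None is the point at infinity)."""
    if p1 is None: return p2
    if p2 is None: return p1
    (x1, y1), (x2, y2) = p1, p2
    if x1 == x2 and (y1 + y2) % P == 0:
        return None
    if p1 == p2:
        m = (3 * x1 * x1 + A) * pow(2 * y1, -1, P) % P
    else:
        m = (y2 - y1) * pow(x2 - x1, -1, P) % P
    x3 = (m * m - x1 - x2) % P
    return (x3, (m * (x1 - x3) - y1) % P)

def scalar_mul(k, point):
    """Double-and-add: the 'easy' direction, O(log k) point additions."""
    result = None
    while k:
        if k & 1:
            result = add(result, point)
        point = add(point, point)
        k >>= 1
    return result

def brute_force_dlog(pub):
    """The 'hard' direction: try every key until one matches."""
    acc, k = None, 0
    while True:
        k += 1
        acc = add(acc, G)
        if acc == pub:
            return k

priv = 5
pub = scalar_mul(priv, G)              # deriving the public key: fast
assert brute_force_dlog(pub) == priv   # recovering the key: exhaustive search
```

On this 97-element toy the search takes microseconds; on a real curve the same search would take longer than the age of the universe, which is the asymmetry quantum cryptanalysis would erase.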

Google’s public guidance, however, emphasizes preparation and migration, with a stated 2029 target for a full transition to post-quantum cryptography. That timeline highlights the industry’s two-track dilemma. Anthropic’s model demonstrates that software exploits are a “now” problem, unfolding at machine speed across the open infrastructure that underpins Web3. Quantum remains the more existential threat to the math itself, but its impact hinges on whether the ecosystem completes timely migration to post-quantum standards.

As Chris Smith of Quantus notes, AI models excel at finding and localizing software bugs, while quantum computing threatens the algorithms on which digital assets depend. If the mathematics breaks, perfect software offers no protection. Until then, the most immediate challenge is operational: hardening public, composable systems against a class of AI that can mine them for exploitable interactions faster than humans can respond. Whether defenders gain and retain access to these capabilities—and adopt them proactively rather than reactively—may determine how much value the industry can keep out of harm’s way.