TL;DR:
- a16z warns that AI agents can already reproduce exploits in DeFi protocols, with success rates near 70% in simple attacks.
- The firm argues that the traditional model of point-in-time audits is insufficient and proposes security based on formal specifications and invariants.
- Composability between protocols amplifies the problem: an exploit detected by AI in a single contract can trigger systemic failures across the entire network.
a16z crypto published a research paper that exposes a security problem in DeFi: artificial intelligence agents no longer merely assist in defending protocols; they are capable of autonomously identifying and reproducing price manipulation vulnerabilities.
Preliminary results indicate success rates near 70% when agents had access to known exploit paths and structured data, though they still show limitations in complex multi-step attacks.

The Audit Model Is No Longer Enough
For years, security in DeFi followed a predictable pattern: protocols launched code, commissioned audits, patched detected issues, and trusted that the review was sufficient. That model already looked fragile when human attackers outpaced audit cycles. AI agents have widened that gap considerably.
A system capable of continuously testing exploit paths doesn't wait for the next scheduled review. It keeps searching. That is why a16z argues that the DeFi ecosystem must abandon the "code is law" logic and move toward security based on formal specifications: proving what a protocol is allowed to do, rather than reacting only after an attack has already occurred.
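To make the idea of specification-driven security concrete, here is a minimal, hypothetical sketch of invariant checking for a toy lending pool. The pool fields and the two invariants are illustrative assumptions for this article, not taken from the a16z paper; real protocols state such properties formally and verify them against the contract code.

```python
# Hypothetical sketch: invariant-based safety checks for a toy lending pool.
# All names and numbers below are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class LendingPool:
    total_deposits: float    # sum of user deposits in the pool
    total_borrows: float     # outstanding borrowed amount
    collateral_ratio: float  # required collateral per unit borrowed

    def invariant_solvent(self) -> bool:
        # Invariant: outstanding borrows can never exceed deposits.
        return self.total_borrows <= self.total_deposits

    def invariant_collateralized(self, posted_collateral: float) -> bool:
        # Invariant: posted collateral covers borrows at the required ratio.
        return posted_collateral >= self.total_borrows * self.collateral_ratio

def check_invariants(pool: LendingPool, posted_collateral: float) -> bool:
    # Every state transition would be required to preserve both properties.
    return pool.invariant_solvent() and pool.invariant_collateralized(posted_collateral)

pool = LendingPool(total_deposits=1_000.0, total_borrows=400.0, collateral_ratio=1.5)
print(check_invariants(pool, posted_collateral=700.0))  # True: 400 <= 1000 and 700 >= 600
```

The point of the approach is that safety becomes a property proven over all states rather than a conclusion drawn from one audit snapshot.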


a16z: The Asymmetry Favors the Attacker
What makes AI particularly dangerous is its scale. An agent doesn't need creativity in the human sense: it needs repetition and enough reasoning capacity to test assumptions faster than defenders can respond. If it can simulate thousands of exploit paths across lending pools, oracles, bridge logic, and liquidation mechanics, the attacker only needs one to work. The defender must protect all of them.
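The asymmetry can be sketched as an any-versus-all condition. The action set and the "simulator" below are toy assumptions invented for illustration; the structure is what matters: an automated agent wins if any enumerated path breaks a system assumption, while the defender is safe only if none do.

```python
# Hypothetical sketch of the attacker/defender asymmetry over exploit paths.
# The action set and breaks_assumption stand-in are illustrative assumptions.
from itertools import product

ACTIONS = ["deposit", "borrow", "swap", "liquidate"]

def breaks_assumption(path: tuple) -> bool:
    # Toy stand-in for a protocol simulator: flag one sequence as exploitable.
    return path == ("deposit", "borrow", "swap")

# Enumerate every 3-step action sequence (4^3 = 64 candidate paths).
paths = list(product(ACTIONS, repeat=3))

attacker_wins = any(breaks_assumption(p) for p in paths)       # one path suffices
defender_safe = all(not breaks_assumption(p) for p in paths)   # every path must hold
print(attacker_wins, defender_safe)  # True False
```

A real agent explores a vastly larger state space, but the logic is the same: the search only has to succeed once.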
According to a16z, composability also worsens the outlook. A vulnerability in an isolated contract is dangerous. In a bridge or a cross-chain collateral structure, it can become systemic. AI agents don't distinguish between "core" and "peripheral" failures: they evaluate whether the system's assumptions break down, and they do so at machine speed.
The a16z research also notes that, historically, the attack arrives before the defense. Attackers experiment without needing governance approval or internal consensus. They only need one opening. According to preliminary reports, AI agents show greater effectiveness at exploiting vulnerabilities than at safely remediating them. Detection is simpler than safe remediation. That should unsettle every DeFi protocol operating today.
