Google: AI-generated zero-day bypassed 2FA
Google’s Threat Intelligence Group confirmed an AI-generated Python exploit bypassed two-factor authentication by exploiting a hardcoded trust flaw in a common web admin tool.
On May 11, 2026, Google’s Threat Intelligence Group confirmed what it describes as the first known case of a zero-day exploit produced with artificial intelligence. The exploit was a Python script that bypassed two-factor authentication by taking advantage of a hardcoded trust flaw in a widely used open-source web administration tool.
GTIG’s report states the script targeted a logic error in how the tool decided which authentication requests to trust, allowing it to circumvent 2FA protections. Researchers identified multiple markers they associate with AI-generated code, including clean ANSI color classes, organized instructional prompts, a fabricated CVSS severity score and detailed help menus.
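GTIG did not publish the vulnerable code, but a "hardcoded trust flaw" of this kind typically means an authentication path that skips verification for requests matching a baked-in condition. The following is a minimal hypothetical sketch, not the actual tool's code; every name and address in it is invented for illustration:

```python
# Hypothetical illustration of a hardcoded trust flaw -- NOT the actual
# vulnerable tool. All names and addresses are invented.

TRUSTED_SOURCES = {"127.0.0.1", "10.0.0.5"}  # baked-in "trusted" addresses

def verify_totp(user: str, code: str) -> bool:
    """Stand-in for a real one-time-code check; always fails in this demo."""
    return False

def login(user: str, password_ok: bool, totp_code: str, source_ip: str) -> bool:
    if not password_ok:
        return False
    # FLAW: requests from hardcoded "trusted" sources skip the 2FA check
    # entirely, so an attacker who can spoof or reach one of these
    # addresses authenticates without a valid one-time code.
    if source_ip in TRUSTED_SOURCES:
        return True
    return verify_totp(user, totp_code)

# A request matching the hardcoded condition bypasses 2FA:
print(login("admin", True, "000000", "10.0.0.5"))    # True  -> 2FA skipped
print(login("admin", True, "000000", "203.0.113.9")) # False -> 2FA enforced
```

The point of the sketch is that the one-time code is never checked on the "trusted" path, which is the shape of logic error a script could probe for and exploit without breaking the 2FA mechanism itself.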
Google’s analysis excluded the company’s Gemini model, indicating the actors used a different large language model to find the vulnerability and construct a working bypass.
GTIG determined the exploit was part of a planned mass-exploitation effort. Google contacted the software vendor and coordinated a patch, and GTIG’s timeline shows the activity was detected early in the exploitation cycle, with the fix deployed before the campaign reached wide distribution.
The report notes that many cryptocurrency exchanges, wallet providers and custodial services use two-factor authentication and deploy open-source web administration tools. GTIG did not link any specific crypto platforms to this exploit.
Security professionals recommend additional controls that are not affected by a 2FA bypass, such as hardware security keys, withdrawal address whitelists and multi-signature wallet configurations. Exchanges and custodians that rely primarily on software-based two-factor methods are being urged to review their threat models and adopt layered defenses.
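One of those controls, a withdrawal address whitelist, is easy to sketch: withdrawals are permitted only to pre-approved destinations, so even a session hijacked via a 2FA bypass cannot redirect funds to an attacker's address. A minimal illustration, with all class and address names invented:

```python
# Minimal sketch of a withdrawal address whitelist -- a layered control
# that holds even if 2FA is bypassed, because the attacker's destination
# address was never pre-approved. All names are illustrative.

class WhitelistError(Exception):
    pass

class WithdrawalPolicy:
    def __init__(self) -> None:
        self._approved: set[str] = set()

    def approve(self, address: str) -> None:
        # In a real deployment, adding an address would itself require
        # out-of-band confirmation and a time delay before activation.
        self._approved.add(address)

    def check(self, address: str) -> None:
        if address not in self._approved:
            raise WhitelistError(f"address not whitelisted: {address}")

policy = WithdrawalPolicy()
policy.approve("bc1q-user-cold-wallet")   # owner's pre-approved address

policy.check("bc1q-user-cold-wallet")     # allowed
try:
    policy.check("bc1q-attacker-address") # blocked despite a valid session
except WhitelistError as e:
    print(e)
```

The design choice worth noting is that the whitelist is enforced server-side at withdrawal time, independent of how the session was authenticated, which is what makes it robust to an authentication bypass.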
GTIG warned that AI-assisted exploit generation could be applied to other targets, including smart contracts, browser extension wallets and API authentication systems used by trading platforms. The report notes that future AI-generated exploits may lack these fingerprints and could target systems that are not monitored by a major security team.