Hackers Used AI to Build a Zero-Day Exploit That Bypasses Two-Factor Authentication: Google

In brief

  • Google’s Threat Intelligence Group confirmed that cybercriminals used AI to develop a zero-day exploit targeting a popular open-source web administration tool.
  • Google said this is the first time the company has identified AI-assisted zero-day development in the wild.
  • Google worked with the affected vendor to patch the vulnerability before the campaign scaled, but said threat actors linked to China and North Korea are also actively using AI for vulnerability research and exploit development.

Cybercriminals used an AI model to discover and weaponize a zero-day vulnerability in a popular open-source web administration tool, according to Google’s Threat Intelligence Group.

In a report published Monday, Google said the flaw let attackers bypass two-factor authentication, and warned that the attackers were preparing a mass exploitation campaign before the company intervened. It is the first time Google has confirmed AI-assisted zero-day development in the wild.

“As the coding capabilities of AI models advance, we continue to observe adversaries increasingly leverage these tools as expert-level force multipliers for vulnerability research and exploit development, including for zero-day vulnerabilities,” Google wrote. “While these tools empower defensive research, they also lower the barrier for adversaries to reverse-engineer applications and develop sophisticated, AI-generated exploits.

The report comes as researchers and governments warn that AI models are accelerating cyberattacks by helping hackers find vulnerabilities, generate malware, and automate exploit development.

“Though frontier LLMs struggle to navigate complex enterprise authorization logic, they have an increasing ability to perform contextual reasoning, effectively reading the developer’s intent to correlate the 2FA enforcement logic with the contradictions of its hardcoded exceptions,” the report said. “This capability can allow models to surface dormant logic errors that appear functionally correct to traditional scanners but are strategically broken from a security perspective.”

According to Google, the unnamed attackers used AI to identify a logic flaw where the software trusted a condition that bypassed its two-factor authentication protections. Unlike traditional scanners that search for broken code or crashes, the AI analyzed how the software was intended to work and detected the contradiction, allowing attackers to bypass the security check without breaking the encryption itself.

“AI-driven coding has accelerated the development of infrastructure suites and polymorphic malware by adversaries,” Google wrote. “These AI-enabled development cycles facilitate defense evasion by enabling the creation of obfuscation networks and the integration of AI-generated decoy logic in malware that we have linked to suspected Russia-nexus threat actors.”

The report says that threat actors from China and North Korea are using AI to find software weaknesses, while Russian groups are using it to hide their malware.

“These actors have leveraged sophisticated approaches toward AI-augmented vulnerability discovery and exploitation, beginning with persona-driven jailbreaking attempts and the integration of specialized, high-fidelity security datasets to augment their vulnerability discovery and exploitation workflows,” Google wrote.

While Google’s report aimed to warn about the growing risk of AI-powered cyberattacks, some researchers argue that the fear is overblown. A separate study led by Cambridge University of over 90,000 cybercrime forum threads found that most criminals were using AI for spam and phishing rather than vibe coding sophisticated cyberattacks.

“The role of jailbroken LLMs (Dark AI) as instructors is also overstated, given the prominence of subculture and social learning in initiation – new users value the social connections and community identity involved in learning hacking and cybercrime skills as much as the knowledge itself,” the study said. “Our initial results, therefore, suggest that even bemoaning the rise of the Vibercriminal may be overstating the level of disruption to date.”

Despite Cambridge’s findings, however, the Threat Intelligence Group’s report also comes as Google has faced security concerns tied to AI-powered tools. In April, the company patched a prompt injection flaw in its Antigravity AI coding platform that researchers said could let attackers execute commands on a developer’s machine through manipulated prompts.

“Although we do not believe Gemini was used, based on the structure and content of these exploits, we have high confidence that the actor likely leveraged an AI model to support the discovery and weaponization of this vulnerability,” Google researchers wrote.

Earlier this year, Anthropic restricted access to its Claude Mythos model after tests showed it could identify thousands of previously unknown software flaws. The findings also add to growing concerns that AI models are reshaping cybersecurity by helping both defenders and attackers find vulnerabilities faster.

“As these capabilities reach the hands of more defenders, many other teams are now experiencing the same vertigo we did when the findings first came into focus,” Mozilla wrote in a blog post in April. “For a hardened target, just one such bug would have been red-alert in 2025, and so many at once makes you stop to wonder whether it’s even possible to keep up.”

Daily Debrief Newsletter

Start every day with the top news stories right now, plus original features, a podcast, videos and more.

Source link

Leave a Reply

Your email address will not be published. Required fields are marked *