Google says criminals used AI-built zero-day in planned mass hack spree


GTIG says AI-powered hacking has moved well beyond phishing emails and chatbot tricks

Google says crooks already have AI cooking up zero-days, and claims one nearly escaped into the wild before the company stopped it.

In a report shared with The Register ahead of publication on Monday, Google’s Threat Intelligence Group said that it has identified what it believes is the first real-world case of cyber-baddies using AI to discover and weaponize a zero-day vulnerability in a planned mass-exploitation campaign. 

The bug, a two-factor authentication bypass in a popular open source web-based administration platform, was reportedly discovered and weaponized by criminals working together on a large-scale intrusion operation.

GTIG said the attackers appear to have used an AI model both to identify the flaw and to help turn it into a usable exploit. Google worked with the unnamed vendor to quietly patch the issue, which it believes may have disrupted the operation before it gained traction.

The company insists that neither Gemini nor Anthropic’s Claude was involved, but said that the exploit itself looked suspiciously machine-made. According to the report, the Python script included what Google described as “educational docstrings,” a hallucinated CVSS score, and a polished textbook coding structure that looked heavily influenced by LLM training data.

Google said that the issue stemmed from developers hard-coding a trust exception into the authentication flow, creating a hole that attackers could exploit to sidestep 2FA checks. According to the firm, those higher-level logic mistakes are exactly the kind of thing modern AI models are starting to get surprisingly good at finding.
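To make the class of flaw concrete, here is a minimal, hypothetical Python sketch of a hard-coded trust exception in an authentication flow. None of this is the actual vulnerable code; the function names, the magic `client_id` value, and the overall structure are illustrative assumptions about how such a shortcut might let a request skip the second factor.

```python
def verify_totp(user: str, code: str) -> bool:
    # Stand-in for a real TOTP check; always fails in this demo
    # so that any successful login must come from the bypass.
    return False

def login(user: str, password_ok: bool, totp_code: str,
          client_id: str = "") -> bool:
    if not password_ok:
        return False
    # The flaw: a developer shortcut that trusts a supposedly
    # "internal" client and skips the 2FA check for it. Anyone
    # who learns or guesses the magic value sidesteps 2FA.
    if client_id == "internal-monitoring":  # hard-coded trust exception
        return True
    return verify_totp(user, totp_code)

# An attacker who has only the password gets in by supplying the magic value:
attacker_in = login("admin", True, "000000", client_id="internal-monitoring")
normal_blocked = login("admin", True, "000000")
```

A fuzzer or crash-oriented static analyzer has nothing to trip over here: the code never faults, it just encodes a bad trust decision, which is why the report frames this kind of logic flaw as something LLMs are comparatively well suited to spot.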

“While fuzzers and static analysis tools are optimized to detect sinks and crashes, frontier LLMs excel at identifying these types of high-level flaws and hardcoded static anomalies,” the report said.

John Hultquist, chief analyst at Google Threat Intelligence Group, said anyone still treating AI-assisted vulnerability discovery as a future problem is already behind.

“There’s a misconception that the AI vulnerability race is imminent. The reality is that it’s already begun. For every zero-day we can trace back to AI, there are probably many more out there,” Hultquist said.

“Threat actors are using AI to boost the speed, scale, and sophistication of their attacks. It enables them to test their operations, persist against targets, build better malware, and make many other improvements. State actors are taking advantage of this technology but the criminal threat shouldn’t be underestimated, especially given their history of broad, aggressive attacks.”

Google’s report suggests that the zero-day case is part of something much bigger. GTIG said North Korean crew APT45 had been using AI to churn through thousands of exploit checks and bulk out its toolkit, while Chinese state-linked operators were experimenting with AI systems for vulnerability hunting and automated probing of targets.

Google also described malware families padded out with AI-generated junk code designed to confuse analysts, Android backdoors using Gemini APIs to autonomously navigate infected devices, and Russian influence operations stitching fabricated AI-generated audio into legitimate news footage.

The awkward bit for everyone else is that this still appears to be the clumsy early phase. Google said mistakes in the exploit’s implementation probably interfered with the criminals’ plans this time around, but that may not stay true for long. ®
