Google revealed Monday that it disrupted a cyberattack in which hackers used artificial intelligence to exploit an unknown vulnerability, marking a significant escalation in the use of AI for malicious purposes. The tech giant’s threat intelligence arm, led by Chief Analyst John Hultquist, confirmed that AI-driven cyberattacks are now a reality. "It’s here," Hultquist stated. "The era of AI-driven vulnerability and exploitation is already here."
AI-Powered Exploitation
The attackers targeted a popular online system administration tool with a zero-day exploit that bypassed two-factor authentication. Google determined the hackers had used an AI large language model to discover the vulnerability, though it did not disclose which model. The company said neither its own Gemini models nor Anthropic’s Claude was involved. While Google did not name the suspected group, it said there was no evidence of state-sponsored activity, though groups linked to China and North Korea have been exploring similar techniques.
"AI is going to be a huge advantage because they can move a lot faster," Hultquist warned, emphasizing how quickly attackers can weaponize vulnerabilities with AI.
Regulatory Uncertainty
The incident comes amid ongoing debate over government oversight of AI. The Trump administration recently repealed Biden-era regulations on AI development, even as it signed agreements with major tech firms, including Google and Microsoft, to evaluate powerful AI models before release. Those agreements, however, were quickly removed from the Commerce Department website, reflecting mixed signals on AI regulation. Dean Ball, a senior fellow at the Foundation for American Innovation, acknowledged the need for oversight despite personal reservations. "I don’t like regulation," Ball said. "But I think we need to in this case."
The rise of AI-driven cyber threats underscores the urgency for robust cybersecurity measures and regulatory clarity to protect American digital infrastructure and national security.