According to an update from the Google Threat Intelligence Group (GTIG), this year has seen a shift where bad actors aren’t just leveraging artificial intelligence (AI) for productivity gains, but are now deploying novel AI-enabled malware in active operations.
The findings from GTIG are an update to its January 2025 analysis, “Adversarial Misuse of Generative AI,” which detailed how government-backed threat actors and cybercriminals were integrating and experimenting with AI across their operations, including throughout the entire attack lifecycle.
GTIG identified that malware families, including PROMPTFLUX and PROMPTSTEAL, are now employing Large Language Models (LLMs) during execution.
“These tools dynamically generate malicious scripts, obfuscate their own code to evade detection, and leverage AI models to create malicious functions on demand, rather than hard-coding them into the malware. While still nascent, this represents a significant step toward more autonomous and adaptive malware,” GTIG warned.
In addition, threat actors have used social engineering-style pretexts to bypass AI guardrails, and the market for illicit AI tools also matured this year. Those tools are designed to support phishing, malware development, and even vulnerability research, which could significantly lower the barrier to entry for less sophisticated threat actors.
GTIG also noted that “state-sponsored actors, including those from North Korea, Iran, and the People’s Republic of China (PRC), continue to misuse Gemini to enhance all stages of their operations, from reconnaissance and phishing lure creation to command and control (C2) development and data exfiltration.”
AI Dangers Growing
Even as Google has called for AI development that maximizes the technology’s benefits for society while addressing its challenges, experts warn that bad actors are gaining the upper hand.
“Free from ethical, regulatory, or corporate constraints, they can fully exploit the most advanced AI technologies. Meanwhile, defenders operate within strict boundaries of governance, privacy, and compliance that limit their ability to innovate and respond at the same speed,” explained Nick Mo, CEO & co-founder of Ridge Security Technology Inc.
Mo told ClearanceJobs via email that this asymmetry is widening the gap in agility and sophistication.
“The only path forward is clear: defenders must fight AI with AI and develop intelligent, autonomous systems that can learn, adapt, validate, and counter threats at machine speed,” Mo added. “This is no longer just an arms race. It’s a paradigm shift that will define the next era of cybersecurity.”
This activity also builds on past cyberattacks, with AI simply lowering the barrier to entry.
“This isn’t surprising. It confirms what we’re already seeing in SaaS attack campaigns as well,” explained Cory Michal, CSO at AppOmni.
The current trend is threat actors leveraging AI to make their operations more efficient and sophisticated, just as legitimate teams use AI to improve productivity.
“We’ve observed attackers using AI to automatically generate data extraction code, reconnaissance scripts, and even adversary-in-the-middle toolkits that adapt to defense,” Michal said in an email to ClearanceJobs. “They’re essentially ‘vibe-hacking’ using generative AI to better mimic authentic behavior, refine social engineering lures, and accelerate the technical aspects of intrusion and exploitation.”
Time to Catch Up?
Even as this may mark a new era in hacking and cyberattacks, it doesn’t mean it is too late for defenders.
“Google caught this while it’s still experimental, but the bad news is that once this capability matures, traditional security tools that rely solely on pattern matching will be almost useless except to defend against basic script kiddies,” Michael Bell, founder & CEO at Suzu Labs, also told ClearanceJobs.
He said this should be seen as another reminder of the importance of building security testing methodologies that assume AI-powered threats from day one.
“The underground marketplace for ‘AI tools purpose-built for criminal behavior’ isn’t coming in the future; it’s already here, and most enterprises aren’t remotely prepared for what happens when attackers have the same AI capabilities defenders do,” added Bell.
Moreover, AI-enabled malware may also mutate its code, making traditional signature-based detection ineffective.
“Defenders need behavioral EDR (Endpoint Detection and Response) that focuses on what malware does, not what it looks like,” said Michal. “Detection should key in on unusual process creation, scripting activity, or unexpected outbound traffic, especially to AI APIs like Gemini, Hugging Face, or OpenAI. By correlating behavioral signals across endpoint, SaaS, and identity telemetry, organizations can spot when attackers are abusing AI and stop them before data is exfiltrated.”
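To make that behavioral approach concrete, the sketch below shows one way such a detection rule might look. It is a minimal, hypothetical Python example: the event fields, the process allow-list, and the exact domain list are illustrative assumptions, not a real EDR product’s API or a complete detection.

```python
# Minimal illustrative sketch of a behavioral detection rule, not a vendor EDR API.
# The event fields, allow-list, and domain list below are assumptions for illustration only.
from dataclasses import dataclass

# Outbound destinations associated with public AI/LLM APIs (illustrative, not exhaustive)
AI_API_DOMAINS = {
    "generativelanguage.googleapis.com",  # Gemini API
    "api.openai.com",
    "huggingface.co",
}

# Processes expected to call AI APIs in this hypothetical environment
ALLOWED_PROCESSES = {"chrome.exe", "python.exe"}

@dataclass
class NetworkEvent:
    process: str  # image name of the process making the connection
    parent: str   # parent process image name
    domain: str   # destination domain from DNS/TLS telemetry
    user: str     # identity tied to the endpoint session

def flag_suspicious_ai_traffic(event: NetworkEvent) -> bool:
    """Flag outbound traffic to AI APIs from unexpected processes,
    especially when spawned by Office apps or script hosts."""
    if event.domain not in AI_API_DOMAINS:
        return False
    scripty_parent = event.parent.lower() in {
        "winword.exe", "excel.exe", "wscript.exe", "powershell.exe",
    }
    unexpected_process = event.process.lower() not in ALLOWED_PROCESSES
    return unexpected_process or scripty_parent

# Example: a script host spawned by Word reaching the Gemini API gets flagged for review
evt = NetworkEvent(process="powershell.exe", parent="winword.exe",
                   domain="generativelanguage.googleapis.com", user="j.doe")
print(flag_suspicious_ai_traffic(evt))  # True
```

In a real deployment, a rule like this would be one signal among many, correlated with SaaS and identity telemetry rather than acted on in isolation.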