Malicious Chrome Extensions and AI Prompt Injection Abuse Trusted User Workflows
- Yisda Technical Team

- Jan 22
- 3 min read
When attackers exploit browser sessions or user workflows, tighter access controls and stronger network segmentation can help reduce the blast radius if an account session is hijacked or an endpoint is compromised.
Three separate reports describe attackers abusing trusted, everyday surfaces: browser extensions that present themselves as familiar business tools, a prompt injection technique that uses calendar invites to turn an AI assistant into an unintended data extraction path, and a crash style lure that pressures users into running commands that lead to a remote access trojan. The shared theme is not a single new vulnerability class, but a practical shift in how trust is borrowed and reused across sessions, automated assistants, and user initiated “fix” workflows. For executives, the takeaway is to treat browsers, identity sessions, and AI connected workflow features as core security boundaries, with the same rigor you apply to endpoints and networks.

Malicious Chrome Extensions Steal Authentication Data and Disrupt Security Controls
Security researchers reported a cluster of five Google Chrome extensions designed to appear as helpful tools for popular platforms such as Workday, NetSuite, and SuccessFactors, but instead perform harmful functions. These extensions collect users’ authentication cookies and send them to attacker controlled servers, manipulate browser pages to block access to security and administrative interfaces, and ultimately allow attackers to hijack sessions. Analysts noted that most of the extensions have been removed from the official Chrome Web Store, but may still be available on outside download sites, and that the shared code behavior suggests the operation is coordinated. Access the full article here.
Prompt Injection Flaw in Google Gemini Enables Calendar Data Leaks
Cybersecurity researchers disclosed a vulnerability that takes advantage of indirect prompt injection to misuse Google’s Gemini assistant in combination with Google Calendar, allowing attackers to extract private meeting details. In this attack, a crafted calendar event description embeds a prompt that remains dormant until an innocent schedule query activates it. When triggered, the assistant creates a new calendar entry containing a summary of the target’s meetings, which can then be read by the attacker without direct user interaction. The report indicates the issue was addressed through responsible disclosure, and highlights how natural language interfaces can be manipulated to bypass intended safeguards. Access the full article here.

CrashFix Campaign Uses Fake Browser Crash Flow to Deliver ModeloRAT
Researchers described an ongoing malware campaign that uses a deceptive Chrome extension posing as an ad blocker to deliberately crash the browser and prompt users to run commands that lead to the deployment of a remote access trojan named ModeloRAT. The extension, distributed through what appeared to be the official Web Store, was engineered to create a denial of service within the browser and then display bogus alerts telling users to execute a command in the Windows Run dialog. The resulting payload execution chain ultimately delivers ModeloRAT, which can persist, communicate with its operators, and execute additional commands on compromised systems. Access the full article here.
Yisda Takeaways
The events in this week’s newsletter illustrates how attackers take advantage of user trust in ordinary interfaces, including browser add-ons, AI assistants that respond to natural language, and on-screen prompts that appear after a software interruption. One report documented Chrome extensions that presented themselves as useful enterprise tools but instead collected authentication cookies and interfered with access to administrative pages, which shows that controlling extension permissions and visibility should be treated as part of protecting identity and enterprise sessions. Another report outlined how a crafted calendar invite triggered an indirect prompt injection that caused an AI assistant to reveal private scheduling details, demonstrating that natural language features can act as input channels that require validation and security testing. A third report described attackers forcing a browser into a crash state and encouraging users to follow a supposed “fix” flow that ultimately delivered a remote access tool, underscoring that social engineering can operate through induced application failure rather than only phishing messages. Taken together, these incidents support strengthening authentication boundaries, evaluating automated interfaces for unintended behaviors, and applying segmentation and access controls so that compromise of one browser session or endpoint does not grant broad movement across an environment.



Comments