The CISO Brief

Navigating Toxic Flows in Agentic AI: What You Need to Know

By Staff Writer | September 5, 2025 | 8 Mins Read

Fast Facts

  1. Agentic AI Opportunities and Risks: CEOs are increasingly adopting agentic AI for efficiency gains, but security researchers warn of significant cyber resilience risks and functional failures associated with these deployments.

  2. Toxic Flows: New risks called "toxic flows" arise when AI agents interface with sensitive systems, characterized by untrusted inputs and excessive permissions, potentially leading to severe data breaches.

  3. Lethal Trifecta: Combining access to private data, exposure to untrusted content, and external communication creates the "lethal trifecta," enabling attackers to exploit vulnerabilities and exfiltrate sensitive information.

  4. Toxic Flow Analysis: A proposed framework for identifying toxic flows helps organizations mitigate risks by modeling data flow and usage sequences within AI systems, aiming for better security controls.


Today’s business elite is breathless over the possibilities of agentic AI, as CEOs grasp at it as an efficiency lifeline. Setting aside the risk of functional failures, surely the elephant in the room, security researchers are concerned about the emerging cyber resilience risks that all of these agentic deployments add to the risk register.

Toxic flows are one of the emerging classes of agentic AI risks that researchers say need to be on the radars of executives, engineers, and security people alike. Flows between AI agents, IT tools, and enterprise software are beset by a risky combination of exposure to untrusted input, excessive permissions, access to sensitive data, and external connections that can be used by attackers to steal data.

These toxic flows could be at the heart of what many believe will be the path to better agentic AI risk management, provided the industry can implement controls for them.

How Agentic AI Risks Are Different

In many ways, AI systems are similar to any other software: vulnerable to flaws in code, misconfigurations, and broken authentication. Agentic AI adds a new wrinkle to the tapestry of threats posed by the modern software stack, says Luca Beurer-Kellner, co-founder of Invariant Labs. The nondeterministic nature of agentic behavior makes it really hard to predict risky behavior in advance.


“The whole premise of agentic AI systems is that they can do things for you without the developers having to anticipate them. That’s an amazing property and makes it promising, but it’s hard to anticipate ahead of time what kinds of risks we are exposing ourselves to,” says Beurer-Kellner. “That’s different from traditional software, because it is typically code and algorithms and processes that are well known ahead of time.” 

He has been heading up research efforts to drive awareness around toxic flows by Snyk, which recently acquired Invariant.

When AI agents with inherently unpredictable behavior are connected to some of the most sensitive systems in the enterprise, be they customer databases, financial systems, or development platforms, big issues start to arise. This is the role of model context protocol (MCP) servers, connectors that help developers sync up data sources with generative AI-powered tools. They are being called the “USB-C port” for AI apps because they make it possible for applications to communicate seamlessly with data sources and other tools.

Make no mistake, MCP is going to have to connect a lot of sensitive systems to AI agents, as developers march to CEO orders to make business functions like accounting more efficient through agentic AI. This is what it will take to power the most valuable use cases, but it also drastically increases the risks of prompt injections, hallucinations, and other exploitable flaws in LLMs.


“Whenever there’s a slip-up, whenever there’s a hallucination, whenever there’s an attacker, the consequences are much more severe,” Beurer-Kellner says. “It’s no longer just happening in a chat window and just a funny hallucination. It could be like an extra zero on a bank transaction. These are the kind of mistakes you don’t want to happen.”

Given the business mandates, though, security professionals can’t say no to agentic AI connecting to sensitive systems — but they can start to help the business structure connections to control risk.

The Lethal Trifecta and AI Kill Chain

Some of the riskiest agentic AI deployments occur when systems are combined in a way to invoke what software engineering luminary Simon Willison recently called “the lethal trifecta for AI agents.”

This happens when AI agents are designed to combine access to private data, exposure to untrusted content, and the ability to communicate externally in a way that attackers can use to steal the data.


“If your agent combines these three features, an attacker can easily trick it into accessing your private data and sending it to that attacker,” Willison wrote in his blog.
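The trifecta is, at heart, a capability check across an agent's tool inventory. The sketch below illustrates the idea; the `Tool` class, tool names, and capability labels are invented for illustration, and a real inventory would come from something like an MCP server manifest rather than hand-written objects:

```python
# Hypothetical sketch: flagging the "lethal trifecta" in an agent's tool set.
# Tool names and capability flags are illustrative, not a real API.
from dataclasses import dataclass

@dataclass(frozen=True)
class Tool:
    name: str
    reads_private_data: bool = False
    ingests_untrusted_content: bool = False
    communicates_externally: bool = False

def has_lethal_trifecta(tools: list[Tool]) -> bool:
    """True if the agent's combined tools cover all three risk legs."""
    return (
        any(t.reads_private_data for t in tools)
        and any(t.ingests_untrusted_content for t in tools)
        and any(t.communicates_externally for t in tools)
    )

agent_tools = [
    Tool("crm_lookup", reads_private_data=True),
    Tool("web_browser", ingests_untrusted_content=True),
    Tool("send_email", communicates_externally=True),
]
print(has_lethal_trifecta(agent_tools))  # True: all three legs are present
```

The point of the check is that no single tool is dangerous on its own; the risk emerges only from the combination across the whole inventory.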

Unfortunately, all too many AI agents today are prone to the lethal trifecta. In recent research on toxic flow analysis, Beurer-Kellner and his Invariant Labs team pointed to the trifecta as a prime breeding ground for toxic flows. They demonstrated the recent GitHub MCP exploit as a classic example of such a flow in action, showing how attackers could attack fully trusted tools by using untrusted information to exfiltrate data.  

And this is no isolated research. Security researcher Johan Rehberger demonstrated that no popular AI tool or agent is immune to a plethora of issues through his Month of AI Bugs vulnerability publishing blitz in August. Rehberger dropped dozens of consequential vulnerabilities across just about every major platform, plenty of them made possible by the lethal trifecta. Over the course of the month, he also put forward his own deadly trio of agentic AI problems, one he calls the AI Kill Chain.

Several of his discovered vulnerabilities were exploitable via a three-step process: prompt injection, confused deputy problems, and automatic tool invocation.
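One common mitigation for the last link in that chain is to stop auto-invoking side-effecting tools once untrusted content has entered the context. The sketch below illustrates that gating idea under assumed names; the function, taint flag, and tool list are hypothetical, not taken from any real framework:

```python
# Illustrative sketch: breaking the "automatic tool invocation" step by
# requiring human approval for side-effecting tools on tainted context.
# Function and tool names are hypothetical.
def should_auto_invoke(tool_name: str, context_tainted: bool,
                       write_tools: set[str]) -> bool:
    """Only auto-invoke side-effecting tools when the context is untainted."""
    if context_tainted and tool_name in write_tools:
        return False  # route to a human-in-the-loop approval step instead
    return True

WRITE_TOOLS = {"send_email", "create_pr", "transfer_funds"}
print(should_auto_invoke("send_email", True, WRITE_TOOLS))  # False: needs approval
print(should_auto_invoke("read_docs", True, WRITE_TOOLS))   # True: read-only is safe
```

The design choice here is to treat taint as sticky: once untrusted input touches the conversation, write-capable tools lose their automatic invocation privilege.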

The connection between the AI Kill Chain and the Lethal Trifecta shows that many of agentic AI’s exploits will rest on the ability to pick apart the fabric that weaves together agentic prompts and connections with sensitive data.

“Giving an AI agent access to private data isn’t the risky part. It’s what you combine it with,” summed up KPMG’s agentic AI Agent Engineering Leader Justin O’Connor.

Toxic Flow Analysis Enters the Chat

To any security veteran, toxic combinations shouldn’t be a new concept. It’s a longstanding issue in identity management, which has had to develop controls to prevent problematic access combinations, such as a finance user who can both create new vendors and approve payments, or an IT admin who can manage user access and also delete system logs.
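That identity-management analogue is easy to express as a separation-of-duties check. The pairs and entitlement names below are illustrative examples drawn from the scenarios above, not from any real IAM product:

```python
# Minimal sketch of a separation-of-duties check for "toxic combinations"
# of entitlements. Entitlement names are illustrative.
TOXIC_PAIRS = {
    frozenset({"create_vendor", "approve_payment"}),
    frozenset({"manage_user_access", "delete_system_logs"}),
}

def toxic_combinations(entitlements: set[str]) -> list[frozenset]:
    """Return every known toxic pair fully contained in the entitlements."""
    return [pair for pair in TOXIC_PAIRS if pair <= entitlements]

user_entitlements = {"create_vendor", "approve_payment", "view_reports"}
print(toxic_combinations(user_entitlements))  # flags the vendor/payment pair
```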

Toxic flows in agentic AI are often also tied up in privilege weaknesses, but they take dangerous mash-ups to new heights of risk and complexity.

Beurer-Kellner at Snyk’s Invariant Labs hopes that his team can help organizations start to surface these issues through a framework they’ve developed, designed to analyze AI-powered apps for potential toxic flows. Toxic Flow Analysis is now being delivered through Snyk’s open source MCP scan tool.

The analysis models the flow of data and tool usage within an agent system to look for toxic combinations. This is different from prompt security solutions, which look solely at the secure implementation of agent systems. The idea behind toxic flow analysis is to create a flow graph of an agent system and model all of the potential sequences of tool uses, together with other properties like the level of trust, the sensitivity of the system or data it handles, and whether the tool could be used as an exfiltration sink.
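The flow-graph idea described above can be sketched as a small graph search: annotate each tool node with trust, sensitivity, and sink properties, then look for paths where untrusted input can reach an exfiltration sink after touching sensitive data. The node names, labels, and traversal below are invented for illustration; Snyk's actual mcp-scan implementation will differ:

```python
# Hedged sketch of toxic flow analysis as a graph search. Node names and
# property labels are hypothetical, not mcp-scan's real data model.
nodes = {
    "github_issues": {"untrusted_source": True},  # attacker-writable input
    "repo_reader":   {"sensitive": True},         # touches private code
    "web_request":   {"exfil_sink": True},        # can send data out
}
edges = {  # possible tool-use sequences in the agent system
    "github_issues": {"repo_reader"},
    "repo_reader":   {"web_request"},
}

def find_toxic_flows(nodes, edges):
    """Depth-first search for paths from an untrusted source to an
    exfiltration sink that pass through sensitive data along the way."""
    flows = []

    def dfs(node, path, touched_sensitive):
        touched_sensitive = touched_sensitive or nodes[node].get("sensitive", False)
        if nodes[node].get("exfil_sink") and touched_sensitive:
            flows.append(path + [node])
        for nxt in edges.get(node, ()):
            if nxt not in path:  # avoid cycles
                dfs(nxt, path + [node], touched_sensitive)

    for source in (n for n, p in nodes.items() if p.get("untrusted_source")):
        dfs(source, [], False)
    return flows

print(find_toxic_flows(nodes, edges))
# [['github_issues', 'repo_reader', 'web_request']]
```

This mirrors the GitHub MCP exploit pattern mentioned earlier: untrusted issue content steers a trusted repo tool, and the results flow out through an external request.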

“The key word in ‘toxic flow analysis’ is actually ‘flow,'” says Danny Allan, CTO of Snyk. “If you don’t understand the flow, by definition you’re not going to get things like authorization right. Because the insecurities happen at the boundaries of these different components that may have security built into them and their own components, but not across the components.”


John Marcelli is a staff writer for the CISO Brief, with a passion for exploring and writing about the ever-evolving world of technology. From emerging trends to in-depth reviews of the latest gadgets, John stays at the forefront of innovation, delivering engaging content that informs and inspires readers. When he's not writing, he enjoys experimenting with new tech tools and diving into the digital landscape.
