GARTNER SECURITY & RISK MANAGEMENT SUMMIT – Washington, DC – Interest in agentic AI among decision-makers appears to be sky-high, even though significant security concerns remain.
This week, Gartner held its Security & Risk Management Summit for executives, chief information security officers (CISOs), and their security teams. As expected from any security trade show since late 2022, generative AI was a massive focus in conference sessions and on the show floor.
But in addition to the automated threat intelligence and vulnerability remediation capabilities seen for a few years now, there was a major emphasis on agentic AI, which features privileged, user-facing agents that have “memory” — meaning they make decisions informed by previous behavior. These agents automatically take action based on data gathered from the environment, typically in areas such as vulnerability remediation, compliance, threat detection, and incident response.
Right now, AI agents fit in largely as an appendage to the security operations center (SOC), taking on low-hanging fruit and repetitive tasks to free up human analysts for bigger jobs. However, as with every AI product that has entered the security market in the past few years, adoption has been rapid. But there are plenty of concerns as well.
Agentic AI Fervor
It was not uncommon to see multiple AI-themed presentations overlapping with each other on the summit’s schedule, suggesting, if nothing else, a considerable interest in these products among decision-makers.
Paul Proctor, vice president and distinguished analyst at Gartner, tells Dark Reading that “client interest is through the roof” regarding agentic AI, and “everybody’s asking about it.”
Gartner today published a press release detailing results from a recent poll of “147 CIOs and IT function leaders.” According to the results, 24% had already deployed at least one but fewer than a dozen AI agents, while 4% of respondents had deployed more than a dozen. Meanwhile, more than 50% said they were researching or experimenting with the technology.
Jeff Barker, senior vice president of product management and marketing for penetration testing firm Synack (which utilizes AI agents as part of its product’s scoping processes), explains that in the security space, which struggles with both staffing and budgetary constraints, buyers and sellers see the agents as an opportunity to better cover the attack surface.
Asked whether the current state of security agents is primarily a repackaging of what came before, Barker concedes that it appears that way right now, but adds that it’s likely to change as offerings innovate.
“We’re not at the point where they’re learning and adapting like a human does,” he says. “I think, initially, what people will see will look like a lot of repackaging. But we’re on a path that will get beyond the repackaging to be able to do new and interesting things down the road. Step one is to take something that was difficult, time-intensive, and human-centric, and automate it to the point where it scales and we can build on it.”
Weighing In On Concerns
Because current AI agents are largely a consolidation of previous LLM-powered security technologies, prompt injection remains a particularly dangerous threat, especially given the inherent access these agents need to do “their” jobs effectively.
Rich Campagna, senior vice president of products at Palo Alto Networks, says threat actors have set their sights on misusing agent permissions to get the agents to do the attacker’s bidding inside the target organization. This, he says, can lead to damaging attacks if the attacker gains admin permissions to something like a CRM system.
Another attack vector is what Campagna calls “memory manipulation”: getting the agent to remember something it shouldn’t so that the attacker can override guardrails. “With agents, there’s a combination of a really big looming permissions issue because, oftentimes, these internal systems aren’t built with these kinds of well-structured permissions,” he says.
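The permissions problem Campagna describes can be illustrated with a minimal sketch. The agent identity, permission names, and `invoke_tool` gate below are all hypothetical, not from any vendor's product: the idea is simply that every tool call an agent makes is checked against an explicitly granted permission set, so an instruction smuggled in via prompt injection or memory manipulation cannot make the agent perform an action it was never granted.

```python
# Hypothetical sketch of least-privilege gating for agent tool calls.
# Names (AgentIdentity, invoke_tool, "crm:read") are illustrative only.
from dataclasses import dataclass, field


class PermissionDenied(Exception):
    pass


@dataclass
class AgentIdentity:
    name: str
    permissions: set = field(default_factory=set)  # e.g. {"crm:read"}


def invoke_tool(agent: AgentIdentity, permission: str, action):
    """Run `action` only if the agent explicitly holds `permission`.

    Deny by default: the agent's instructions -- which an attacker may
    have manipulated -- never decide what the agent is allowed to do.
    """
    if permission not in agent.permissions:
        raise PermissionDenied(f"{agent.name} lacks permission {permission!r}")
    return action()


agent = AgentIdentity("triage-bot", permissions={"crm:read"})
invoke_tool(agent, "crm:read", lambda: "read customer record")  # permitted
try:
    invoke_tool(agent, "crm:admin", lambda: "delete all records")
except PermissionDenied as exc:
    print(exc)  # blocked: the agent was never granted admin rights
```

The design choice mirrors how permissions work for human users: the allowlist lives outside the model's context, so nothing an attacker injects into the prompt or the agent's memory can widen it.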
It makes sense, then, that there is an emerging space for “guardian” or “ambient” agents, which are secondary agents that ensure other AI agents are adhering to existing policy controls. In other words, AI agents that watch other AI agents. In the aforementioned Gartner release today, the analyst firm predicted guardian agents will hold 10% to 15% of the AI agent market by 2030.
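In the simplest form, a guardian agent is a policy check that sits between another agent's proposed action and its execution. The sketch below is a loose illustration with made-up policy rules and function names, not a description of any shipping guardian-agent product.

```python
# Hypothetical sketch of a "guardian" agent: a secondary check that
# reviews another agent's proposed action against policy before it runs.
# The policy rules and field names here are illustrative only.
PROHIBITED_OPERATIONS = {"delete_records", "export_customer_data"}


def guardian_review(proposed_action: dict) -> bool:
    """Return True if the proposed action complies with policy."""
    if proposed_action.get("operation") in PROHIBITED_OPERATIONS:
        return False
    # Refuse anything that would require elevated privileges.
    if proposed_action.get("requires_admin", False):
        return False
    return True


def execute_with_guardian(proposed_action: dict, run):
    """Execute `run` only after the guardian approves the action."""
    if not guardian_review(proposed_action):
        return "blocked by guardian agent"
    return run()


print(execute_with_guardian({"operation": "summarize_alerts"},
                            lambda: "summary generated"))
print(execute_with_guardian({"operation": "delete_records"},
                            lambda: "records deleted"))
```

A real guardian agent would likely be another LLM-driven system rather than a static rule set, but the architecture is the same: the reviewing component holds the policy, and the reviewed agent cannot bypass it.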
Marla Hay, vice president of product management for security, privacy, and data protection at Salesforce, says the company is focused on treating AI agents as entities with permissions that need to be considered. Salesforce is one of several technology giants that have invested heavily in agentic AI.
“The things we’re looking at are, how do we help our customers with things like zero trust and least-privileged access?” Hay says. “We have really tight and granular permission levels, so it’s not only making sure those are applied to agents like they would be to people, but also helping our customers track agents and people side by side.”
Gartner’s opening keynote on Monday discussed the hype in the security space and how it can be turned into opportunity for security teams. Roughly half of the discussion was dedicated to the evolution of AI security tools in recent years. Gartner distinguished vice president analysts Leigh McMullen and Katell Thielemann pointed out that even as many security analysts and CISOs may be jaded by AI already, it cannot be ignored, as it’s clear decision-makers are aggressively pursuing these tools regardless.