This article was contributed by Michael Burch for CyberSecurityTribe.com.
It was hard to find an exhibition booth at RSAC 2025 that did not include Agentic AI somewhere in its messaging. In 2024, we watched many people and organizations explore the theory of how Agentic AI could be used within cybersecurity. A year later, we are looking at where it can be integrated today and how it has enhanced cybersecurity solutions overall.
As part of the 2025 Cyber Security Tribe annual report, we asked our CISO community whether their organization makes use of Agentic AI within its cybersecurity, and 59% stated it's a work in progress. This reveals how organizations are now exploring how to put the technology to work in their cybersecurity defenses. However, the industry lacks raw data due to its infancy, and we are still looking to thought leaders to help get the best results from Agentic AI and to learn its true potential.
We were lucky enough at RSAC 2025 to interview a series of industry experts to explore one question: "Where exactly are we with Agentic AI in cybersecurity in 2025?"
Contributors include:
Chas Clawson, Field CTO, Sumo Logic
The industry lacks a unified definition, but there are two main views: one focuses on goal-oriented autonomous agents, and the other on collaborative AI with multiple agents and a master coordinator. Regardless of the definition, AI is the future. We're past the hype and into implementation, integrating AI into daily life, from home IoT devices to your car's dashboard.
These agents allow analysts to ask questions without fear of judgment, empowering them to find information instantly. AI will soon permeate all aspects of life, bringing exciting new possibilities.
Vivin Sathyan, Senior Technology Evangelist at ManageEngine
From a company's standpoint, adoption of Agentic AI is still limited; I believe only about 10 to 20% have adopted it so far. The rest are still considering or testing Agentic AI because they're not sure about the governance part of it and whether Agentic AI will operate within its intended scope.
I think by the first half of 2026 we'll see a significant increase in actual Agentic AI deployments. For now, most organizations are still in the testing phase, not yet running Agentic AI in production environments. It's likely still in the labs, where they're experimenting to see if it fits their business needs.
Anner Kushnir, CTO at Conveyor
The popularity of Agentic AI for cybersecurity comes down to its ability to move beyond simple automation into autonomous execution of essential security tasks. Information Security, specifically, is seeing AI Agents transform monitoring, detection, proactive security, and revenue-driven compliance at scale. Lean teams – barely able to stay afloat with growth and shifting risk landscapes – are turning to these tools as the key survival tactic to support the business.
The successful implementation of Agentic AI in cybersecurity hinges on two critical factors: accuracy and transparency. When an agent clearly demonstrates its reasoning process, its planned steps, and the systems it engages, that transparency creates effective human-AI partnerships where humans maintain oversight while the AI handles structured, repetitive tasks at scale. Auditing AI Agent activity is the surest method for building trust in its outcomes and efficacy. In 2025 and beyond, AI Agents are becoming a valuable part of cybersecurity teams – but only if capable of delivering end-to-end outcomes while maintaining the precision required in compliance operations.
Ashley Rose, CEO, Living Security
Looking ahead, the impact of AI on human risk management is a compelling topic. We are currently in the early stages of truly quantifying human risk, particularly as autonomous AI agents increasingly operate in ways traditionally reserved for human users.
This development significantly amplifies risk—potentially by a factor of 100—since these agents browse websites, open applications and emails, download files, and even submit credentials into phishing sites.
To address this challenge, CISOs should consider expanding their human risk management frameworks with AI-driven analysis and automated risk detection capabilities. Leveraging AI, security teams can continuously monitor, quantify, and mitigate the unique risks posed by autonomous agents, deploying real-time interventions such as training, redirection, or policy adjustments at scale.
Furthermore, AI-enhanced HRM can dynamically assess risk among users interacting frequently with autonomous agents, identifying them as higher-risk individuals and prioritizing targeted engagement. These proactive, intelligence-driven strategies are vital for managing human risk effectively in the emerging era of Agentic AI.
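To make that concrete, below is a minimal sketch of what AI-assisted scoring of agent-related human risk could look like. The event types, weights, and threshold are illustrative assumptions, not features of any particular HRM product.

```python
# A toy risk-scoring pass over user/agent interaction events.
# Event kinds, weights, and the threshold are illustrative assumptions.
from collections import Counter
from dataclasses import dataclass

@dataclass
class Event:
    user: str
    kind: str  # e.g. "agent_delegation", "agent_credential_use", "login"

AGENT_EVENT_WEIGHTS = {
    "agent_delegation": 2.0,       # user handed a task to an autonomous agent
    "agent_credential_use": 5.0,   # an agent acted with the user's credentials
}

def score_users(events: list[Event], threshold: float = 10.0) -> dict[str, float]:
    """Return users whose agent-interaction risk score warrants targeted engagement."""
    scores: Counter = Counter()
    for e in events:
        scores[e.user] += AGENT_EVENT_WEIGHTS.get(e.kind, 0.5)
    return {user: score for user, score in scores.items() if score >= threshold}

events = [Event("alice", "agent_credential_use")] * 3 + [Event("bob", "login")]
print(score_users(events))  # alice exceeds the threshold -> higher-risk individual
```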
Michael Burch, Director of Application Security, Security Journey
From an application security perspective, a lot of people are trying to solve 'how can I code faster' or 'how can I get code out the door faster.' In addition, there has been a big push, and I think this is a big shift from last year to now, toward automating security processes using AI. Previously we had to comb log data for patterns, which meant slower detection, whereas AI will make those patterns emerge much faster. However, I think over-reliance on AI to build the products and code is where we're going to lose oversight.
Unless you have complete confidence in AI to autonomously develop your entire application and manage all associated processes, it is unrealistic to rely solely on it. If the individual responsible for approval lacks the necessary understanding of best practices, it is akin to forgoing a review altogether. It is crucial to have knowledgeable personnel who understand what constitutes best practice; you cannot fully delegate these responsibilities to AI, particularly when it comes to building software and defensive strategies. This applies most during the development phase, and it poses significant risks if not managed properly.
Stuart McClure, CEO, Qwiet AI
In 2025, we are witnessing a transformative shift in cybersecurity through the application of Agentic AI, where multiple specialized AI agents work collaboratively to handle different aspects of security operations. For application security, AutoFix is the ultimate goal: AI agents understand the current threats, apply exploit payloads, build unit test plans, analyze reachability and exploitability in code, prevent hallucinations, and fix security vulnerabilities in real time, dramatically reducing the time from detection to remediation from weeks to mere minutes and seconds - relieving the developer of the burden of security.
Domains like network security can apply AI agents to continuously monitor network patterns and dynamically learn from them, adapting to catch unknown-unknown attacks in a sort of adaptive line of defense. In the identity domain, you could have one agent monitoring user behavior patterns, another analyzing authentication attempts, and another managing privilege escalation requests - together creating a zero trust environment that continuously validates users dynamically; a minimal sketch of this pattern follows below. But the application of Agentic AI goes far beyond, into the worlds of threat intelligence, cloud security, incident response, and endpoint security as well, where multiple agents work together to monitor behavior throughout the system and dynamically respond to new and emerging threats without signatures, policies, heuristics, or algorithms.
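As a rough illustration of that multi-agent identity pattern, here is a minimal sketch in which several signal agents feed a continuous validation check. The agent names, stubbed risk values, and the 0.5 cutoff are all illustrative assumptions, not any vendor's implementation.

```python
# Minimal sketch of cooperating identity agents feeding a shared trust decision.
# All risk values are stand-ins; a real agent would compute them from live data.
from typing import Callable

Signal = Callable[[str], float]  # user -> risk contribution in [0, 1]

def behavior_agent(user: str) -> float:
    # Stand-in: would compare current activity to the user's learned baseline.
    return 0.2

def auth_agent(user: str) -> float:
    # Stand-in: would weigh failed logins, impossible travel, MFA anomalies.
    return 0.1

def privilege_agent(user: str) -> float:
    # Stand-in: would score pending privilege-escalation requests.
    return 0.4

def continuous_validation(user: str, agents: list[Signal], cutoff: float = 0.5) -> bool:
    """Zero-trust style check: re-evaluated on every request, not once at login."""
    risk = max(agent(user) for agent in agents)  # worst signal wins
    return risk < cutoff  # True -> allow; False -> step-up auth or block

print(continuous_validation("alice", [behavior_agent, auth_agent, privilege_agent]))
```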
The future of cybersecurity in the sphere of Agentic AI is truly wide open, and maybe a bit too wide. With the bad guys leveraging AI to find new threat vectors at the speed of compute, very little will be able to detect, much less prevent, the adversary, other than AI. In 2025 we will see the beginnings of agents being applied to cyber, and in 2026 and beyond we will see it flourish to understand the threat patterns and attack methodologies of the adversary, sharing insights across networks and organizations to create a collective defense mechanism. This collaborative intelligence, combined with human oversight, will finally give us a realistic shot at preventing 99.999% of today's and future cyberattacks, something I've been working toward throughout my entire career.
Mark Lambert, Chief Product Officer, Armorcode
Agentic AI adoption is surging—Model Context Protocol (MCP) kicked it off about six months ago. The potential is huge, but many still wrestle with putting it to work. Success hinges less on the technology itself and more on the real-world outcomes it delivers.
At RSAC last year, we rolled out AI correlation built on supervised ML to cut noise and reveal root causes, then at Black Hat we launched a pre-trained LLM to power remediation guidance. Working hand-in-hand with customers, we uncovered a persistent hurdle: extracting real insight from dashboards to answer questions like “what’s driving shifts in risk scores?” This year, our Agentic AI bridges that gap, solving real problems, not just showcasing tech.
Oleg Vusiker, CTO, Salvador Tech
Agentic AI helps address today’s shortage of cybersecurity employees by automating routine security tasks, detecting threats, and responding automatically to incidents. For example, it can autonomously monitor networks 24/7, instantly isolate compromised systems, and patch software vulnerabilities without human intervention.
This reduces the workload on cybersecurity teams, allowing fewer specialists to manage more complex environments effectively. It also helps fill skill gaps, letting existing staff focus on strategic activities such as threat hunting, security planning, and incident response, rather than repetitive tasks.
Lawrence Gentilello, Founder and CEO of Optery
Attackers are weaponizing agentic AI to orchestrate highly personalized campaigns that outpace traditional defenses. Threat actors are combining autonomous decision-making with vast troves of personally identifiable information (PII) to conduct attacks. They're automating the collection of data on targets across a range of sources, such as the dark web, social media, public records, and data brokers. Once ingested, this data fuels hyper-targeted phishing and social engineering attacks. AI-generated lures can adapt in real time, adjusting language, tone, and imagery based on deep knowledge of the target.
Agentic AI can also enable large‑scale PII-driven identity fraud. Autonomous bots use stolen personal data to complete KYC checks, open fraudulent accounts, and launder funds, all at lightning speed. Deepfake voice assistants impersonate bank representatives, executives, or government employees, bypassing voice‑biometric systems by training on publicly available audio samples.
Defenders must respond in kind, eradicating exposed PII data, and employing “AI‑against‑AI” deepfake detection techniques. By understanding how adversaries exploit PII, organizations can better defend themselves.
Itai Tevet, CEO, Intezer
I think the major place where it has impact is in solving the talent shortage problem, which has been a problem for decades and hasn't been solved; suddenly there is exciting new technology that can make a very big difference in that challenge. Personally, this is a challenge I faced myself. I used to lead the SOC and incident response team for the IDF for quite a long time. My main challenge was that I had way too many incidents and not enough people on my team. There were attempts at automation and so on, but they couldn't really capture the human decision-making process that is required to really automate things to the fullest and thus solve the talent shortage. With Agentic AI, suddenly there is a way to mimic that human-like decision process.
In my assessment, Agentic AI should not be viewed as a differentiator among companies, much like the internet today, it is a technology that should be integrated into nearly every use case. The pertinent question is, what function does the AI agent serve? For instance, my AI agent functions as an AI SOC analyst.
Megha Kalsi, Partner, AlixPartners
The Cybersecurity industry has been using machine learning in tools and technologies for the past decade. A few years ago, we started embedding Artificial Intelligence into our tools, but they relied on supervised learning, which learns from labeled data. As a Cybersecurity industry, we have been waiting for technology that will further strengthen our threat detection, containment, and remediation capabilities as we defend against increasingly rapid attacks. Agentic AI is the answer to the next phase of advanced cybersecurity tools and technologies that will shape cyber in 2025.
The beauty of Agentic AI is that it uses unsupervised learning to identify patterns, structure, and relationships in data that is not labeled. It can parse through a tremendous amount of data in a short period of time to augment a cybersecurity professional's daily activities and rapidly narrow down the legitimacy of threats. In addition to detection, Agentic AI can be used to contain threats by preventing attackers from moving laterally in the environment and reducing the blast radius of the attack. For example, if malicious activity is detected on an endpoint, the endpoint can be isolated and any virus, malware, or trojan on it can be stopped from spreading to other systems on the network; a containment sketch follows below. Finally, Agentic AI can be used for remediation, which means taking an action to neutralize the potential threat: spinning up a new, clean system from a backup, patching the system, or removing unauthorized or malicious files.
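As a rough sketch of that contain-then-remediate flow, the snippet below isolates an endpoint on a malicious verdict before any cleanup begins. The EDRClient class and its methods are hypothetical stand-ins for whatever EDR or NAC API a given environment exposes.

```python
# Minimal containment sketch: on a malicious verdict, isolate the endpoint so
# the infection cannot move laterally. EDRClient and its methods are
# hypothetical placeholders, not a real vendor API.
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("containment")

class EDRClient:  # hypothetical API surface
    def isolate(self, host_id: str) -> None:
        log.info("network-isolating %s (blocking all non-management traffic)", host_id)

    def kill_process_tree(self, host_id: str, pid: int) -> None:
        log.info("killing malicious process tree pid=%d on %s", pid, host_id)

def contain(edr: EDRClient, host_id: str, pid: int, verdict: str) -> None:
    """Contain first, remediate second: stop lateral movement, then clean up."""
    if verdict != "malicious":
        return
    edr.isolate(host_id)                 # shrink the blast radius immediately
    edr.kill_process_tree(host_id, pid)  # stop the malware itself
    # Remediation (restore from backup, patch, remove files) follows separately,
    # and, per the point above, with a human approving the action.

contain(EDRClient(), host_id="wks-042", pid=4711, verdict="malicious")
```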
The battle of intelligent agents is upon us, and we need Agentic AI in Cybersecurity to defend ourselves. We are on the verge of seeing “defender AI” and “attacker AI,” and Agentic AI technology will make this possible. Agentic AI will continue to shape the Cyber industry so we can respond to attacks and potential threats at a faster speed, since adversaries will already be using them. Multiple agents will share real-time intelligence to make coordinated and accurate decisions that enhance cybersecurity responses. However, at the end of the day, keeping a human in the loop to approve any decision made by Agentic AI remains the top priority.
Ankur Shah, CEO and Co-Founder of Straiker
Agentic AI systems mark a turning point where tasks have evolved from triggered responses to self-learning agent orchestrators that can reason, plan, and act autonomously. With read/write access and autonomous capabilities, agents are emerging as powerful tools and significant security risks. Organizations embracing AI agents face new and expanded categories of risks, from mass data exfiltration and supply chain attacks to autonomous chaos, which I explain as unpredictable behaviors from AI autonomy.
The transition from traditional applications to agents is a monumental shift. Agentic applications are no longer just passive tools built on programmatic business logic: a user can describe a task or goal in natural language, and the application can then operate autonomously, execute complex workflows, and integrate with other tools or APIs. This fundamentally changes the scope of possible exploits and demands that we rethink cybersecurity in the AI age.
Ivan Novikov, CEO of Wallarm
There are some obvious ways that Agentic AI is shaping cyber security. We’re seeing new types of exploits and new targets. Researchers and attackers are always fascinated with new technology that they can hack and compromise, and Agentic AI is no exception. There is plenty to be said about the variety of prompt injections and jailbreaks that we’re seeing now and going to see in the future with Agentic AI. No doubt there will be newsworthy incidents where an AI agent was the vector for an attack.
But there is another impact of Agentic AI that is perhaps more important than the direct, obvious consequences, and that’s the exponential increase in the use of APIs. AI agents are API-driven. They run on top of APIs. Users interface with them over APIs. They connect to other agents and systems via APIs. So as the use of AI agents increases, API usage increases exponentially, with every agent spawning more APIs and more agents that spawn more APIs, etc. All of that increased API usage drives a dramatically larger API attack surface. We now have to worry about new AI attacks (which are really API attacks) and all of the existing API attacks that apply to this Agentic AI landscape.
Prompt injection is a new problem for AI agents, but SQL injection is also an old problem that will continue to impact Agentic AI infrastructure. The end result is not so much an evolution of threats, but an aggregation of threats. Cyber security professionals just have to address more types of attacks against more types of targets.
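A small sketch can illustrate this aggregation: the same agent-facing input path has to survive an old attack class (SQL injection, answered with parameterized queries) and a new one (prompt injection, for which the keyword screen below is only a naive illustrative heuristic, not a real defense).

```python
# Sketch of the "aggregation of threats": one agent input path, two attack eras.
# The marker list is illustrative; real prompt-injection defense is an open
# problem and needs far more than keyword matching.
import sqlite3

INJECTION_MARKERS = ("ignore previous instructions", "disregard your system prompt")

def screen_prompt(text: str) -> str:
    """Naive new-threat screen: reject obvious prompt-injection phrasing."""
    if any(marker in text.lower() for marker in INJECTION_MARKERS):
        raise ValueError("possible prompt injection")
    return text

def lookup_user(conn: sqlite3.Connection, username: str):
    # Parameterized query: the classic, still-necessary SQL injection defense.
    return conn.execute(
        "SELECT id, name FROM users WHERE name = ?", (username,)
    ).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice')")

user_input = screen_prompt("look up alice")  # passes the new-threat screen
print(lookup_user(conn, "alice"))            # and the old-threat defense
```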
Erez Tadmor, Field CTO, Tufin
Agentic AI is starting to reshape the cybersecurity landscape by acting less like a tool and more like a teammate. These systems don’t just follow predefined scripts; they understand intent, interpret context, and take goal-driven actions. That shift is proving critical in cybersecurity, where speed, accuracy, and alignment with policy can’t be compromised. In 2025, we’re seeing these agents embedded directly into security workflows, reducing response times, removing human bottlenecks, and helping teams make smarter decisions under pressure.
One area where this shift is particularly powerful is in securing access across complex, hybrid networks. Troubleshooting connectivity issues or enabling access between workloads traditionally involves long back-and-forths between application owners, network engineers, and security teams. With agentic AI, those interactions are condensed into natural language queries and precise, policy-aware responses. This kind of intelligence helps teams move faster without sacrificing governance, and it empowers non-experts to get answers securely and independently.
More broadly, the promise of agentic AI is its ability to align execution with intent. In cybersecurity, that means ensuring every change, every access decision, and every enforcement action is not only automated - but also explainable, compliant, and traceable. It can turn automation into autonomy and add accountability to the mix. I think the most valuable solutions will be those that give security teams confidence in their decisions while enabling them to move fast. This will lead to a shift from reactive firefighting to a more proactive strategy.
Pieter Danhieux, Co-Founder & CEO, Secure Code Warrior
The relentless rise of generative AI in software creation has forced a new reality on software engineers: writing code, the traditional territory of software developers for as long as software has existed, is quickly being overshadowed by AI. Beyond writing secure code themselves and assessing the code output of AI tools, there is no argument that developers' roles will change in other ways… especially over the next five years, despite OpenAI's recent announcement of its Agentic Software Engineer (A-SWE) model.
Job displacement should not be a concern unless developers aren't making any effort to up-level their own skill sets or learn how to leverage AI effectively, securely, and responsibly. If I were a developer right now, I would focus on learning and testing what AI cannot do, or where it is weak. Critical thinking and in-depth security awareness will always be “in fashion” and are sorely needed when operating AI coding agents. However, it is critical that we do not lose sight of the importance of secure coding practices, especially in software development. The industry needs to treat AI as a companion-style tool rather than a direct replacement for an experienced human developer. In fact, security-aware developers who demonstrate expertise in safely leveraging AI tools will eventually be able to take on new oversight roles as AI guardians or mentors, working with AI rather than against it.
These elevated, “next-gen” developers are crucial, and now is the time to ready the development cohort to leverage AI effectively and safely. It must be made abundantly clear why and how AI/LLM tools create unacceptable risk, with hands-on, practical learning pathways delivering the knowledge required to manage and mitigate that risk as it presents itself in their workday. Anything less, and developers may not truly grasp the inherent danger of their actions, nor can that risk be effectively managed. Vibe coding, agentic AI coding, and whatever the next iteration of AI-powered software development will be are not going away, and they have already changed how many developers approach their jobs.
Chip Witt, Principal Security Evangelist, Radware
Industry analysts are abuzz about agentic AI and its potential to reinvent businesses end-to-end, shifting the focus from siloed applications to integrated systems. In cybersecurity, this promises to fill human skills gaps and accelerate threat response through autonomous decision-making, action, and adaptation. Agentic AI will push security tools beyond what machine learning, LLMs, or RAG-based architectures have achieved, enabling real-time threat mitigation with minimal human intervention.
As this technology matures, the role of human security practitioners will shift. The emphasis will move toward protecting the data and communication channels that agentic AI relies on, and ensuring adversaries can’t “poison the well” that powers the business. This transition will happen in stages – only some will materialize in 2025 – but the rapid pace of AI development suggests the future is arriving faster than most expect. Fortune favors the bold.
The “scary” side is that cybercriminals and hacktivists will also benefit from these seismic shifts. Agentic AI can autonomously conduct vulnerability scans, exploit weaknesses, launch sophisticated phishing campaigns, and orchestrate bot swarms – all while adapting tactics in real time. Human-AI collaboration will be key to success. As a result, the ability to learn adaptively, leverage intuition, and demonstrate creative problem-solving – skills that have long defined successful cybercriminals – will now be essential for anyone operating in this new landscape.
Karthikeyan Nathillvar, Head of Data, AI & SaaS, Nile
One of the most significant applications of Agentic AI is threat detection. Traditional security solutions require manual updates of rules and device fingerprint data to keep up with new threats, an approach that is often reactive and prone to error. Even ML-based security solutions are built on a defined dataset and do not respond in time to data or model drift.
Agentic AI-based systems, on the other hand, are not static in nature. They are not limited to any one dataset. They are capable of gathering and processing data from different systems without the need for expensive retraining. An Agentic AI-based cybersecurity architecture may follow the steps below to detect unusual network activity and autonomously isolate affected devices:
Plan threat detection steps using different reasoning strategies, such as chain-of-thought reasoning;
Analyze the results of their detection through self-reflection;
Augment the data being analyzed with threat signature data from other sources.
After detection, Agentic AI-based cybersecurity systems may respond to the detected threats by triggering predefined threat responses through an agentic execution workflow. The system can notify team members and initiate rollback procedures, ensuring that all relevant details of the threats are captured and tracked; a minimal sketch of this loop follows below.
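Here is a minimal sketch of that plan / analyze / self-reflect / augment / respond loop. Every function body is a stub; the step structure, not the detection logic, is the point, and all names and values are illustrative.

```python
# Minimal sketch of the agentic detection-and-response loop described above.
# All function bodies are stubs standing in for real tools or model calls.

def plan(alert: dict) -> list[str]:
    # Chain-of-thought style planning: decompose the alert into checks.
    return ["baseline_compare", "peer_compare", "threat_intel_lookup"]

def analyze(alert: dict, step: str) -> dict:
    # Stand-in detection step; a real agent would call tools or models here.
    return {"step": step, "suspicious": step == "baseline_compare"}

def self_reflect(findings: list[dict]) -> bool:
    # Self-reflection: does the gathered evidence actually support a verdict?
    return sum(f["suspicious"] for f in findings) >= 1

def augment(alert: dict) -> dict:
    # Enrich with threat-signature data from other sources (stubbed).
    return {**alert, "intel": "matches known C2 beacon pattern"}

def respond(alert: dict) -> None:
    # Predefined response: isolate, notify, start rollback, record everything.
    print(f"isolating {alert['device']}; notifying on-call; opening case")

alert = {"device": "cam-17", "signal": "unusual outbound traffic"}
findings = [analyze(alert, step) for step in plan(alert)]
if self_reflect(findings):
    respond(augment(alert))
```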
Merritt Baer, CISO, Reco
Security teams are drowning in alerts, with little context for what they mean and how to take action. Each alert might signify a potential risk, but with so many coming in it’s nearly impossible to tell the ones that matter from the ones that don’t. This means security humans spend hours manually investigating alerts, identifying patterns, and tying together data points from disparate systems.
And right now, contextualization is key. This is in part because we have more data than ever, and in part because attackers almost always gain access through valid credentials. So it doesn't show up as some shiny, sexy, obvious break-in. Identifying and prioritizing bad behavior is actually a matter of correlating data points about behavior and access patterns—in other words, contextualization. Yet traditional Security teams spend more time sorting through noise than elegantly identifying and prioritizing risks. The result is that critical threats can slip through the cracks.
Agentic AI offers hope to overtaxed, understaffed Security teams looking for better signal in the noise. Unlike AI Assistants, which respond when prompted and automate specified tasks, Agentic AI can actively hunt for vulnerabilities, independently analyze context, make decisions about the severity of threats, and offer the most applicable countermeasures. This helps address cognitive overload and also gives better fidelity, especially as humans appropriately tune it over time.
In 2025, we will see a race amongst vendors to integrate Agentic AI into our security tools to raise their effectiveness. The vendors that succeed at offering true Agentic AI – not just Copilots or Assistants – will emerge top of their categories.
Aaron Shilts, CEO, NetSPI
We’re seeing cybersecurity shift away from reactive defense and toward proactive protection – in large part thanks to agentic AI. For years, security teams have been constrained by endless alerts, siloed tools, and manual triage. Unfortunately, this also means it’s common for threats to get addressed only after they’ve surfaced and caused damage. With agentic AI, however, the tides are changing. AI agents can identify vulnerabilities earlier, adapt defenses in real time, and even respond to incidents on their own. Perhaps most importantly, by handling repetitive and routine tasks, AI is enabling cybersecurity professionals to refocus on more complex, higher-value business challenges where their particular skill sets provide the most power: areas that demand critical thinking, creativity, and domain expertise.
While the enormous potential of agentic AI is exciting, it’s important to highlight an uncompromising caveat: we can’t move forward unless these systems are built on a foundation of trust. When it comes to security, business needs are not satisfied by speed alone. Enterprises need to know that AI systems are secure, accountable, and acting within established guardrails. Handing key pieces of a business’s security posture over to autonomous agents should not be taken lightly – it requires full visibility, strong governance, and constant checks and balances. This is not just a technical issue. Businesses must ensure that every part of their organization (and just as importantly, every partner they work with) is committed to transparency and discipline. Without complete visibility and tight adherence to security hygiene, skepticism and concern around AI in security will continue to hold adoption back.
In my view, the real value of agentic AI isn’t about replacing people: it’s about putting them in a position to upskill and excel. Businesses should focus on extending human talent – not eliminating it – and ensuring all AI systems keep humans in the loop, can deliver full auditability, and easily integrate with existing workflows. If companies move forward accordingly, we will see security teams move faster, stay stronger, and proactively fend off threats versus scrambling to remediate after an attack. This is a future that’s attainable and the one we should be aiming to achieve.
Sandy Kronenberg, CEO, netarx
We’re still very much in the first inning of AI’s impact on the cybersecurity industry. In 2025, I’ve seen vibe coding become more commonplace. Organized hacker groups are also leaning on AI tools for rapid development and contextual automation of social engineering attacks across phishing, vishing, and smishing vectors, aimed at a widening scope of targets. As a result, I’m expecting an increase in the success and frequency of smaller, technology-enhanced social engineering attacks that steal less than $1 million to avoid FBI involvement.
Kern Smith, VP of Global Solutions, Zimperium
I think the biggest thing we have seen is that AI is removing the barrier to entry, especially from the attacker standpoint. This has led to an increase in the volume of attacks.
From a cybersecurity standpoint, it's about how you get out of reactive mode, get proactive, and start staying ahead of the attackers. So instead of relying on a specific check for a specific type of threat, which is always going to evolve, you should ask 'how do I build a fundamental understanding of what good and bad look like?'