AI, Adversaries, and the Modern Attack Surface: Why Continuous Security Validation Can’t Wait
Posted by: Conor Murphy
Continuous security validation has become an operational necessity in the face of artificial intelligence (AI)-driven cyber threats. Adversarial tradecraft has evolved to incorporate AI in ways that extend beyond vibe coding toward full operationalization of the kill chain. Traditional security playbooks built on once-a-year penetration tests and monthly patch cycles can’t keep pace with AI-enabled threat actors operating at machine speed across the full attack lifecycle. Additionally, the modern attack surface is no longer just a list of assets — it’s a dynamic, interconnected system of identities, applications, infrastructure, integrations, and data flows.
TL;DR: Modern adversaries are using AI to move faster and operate at greater scale. Meanwhile, the emergence of advanced AI models, such as Anthropic’s Claude Mythos, signals additional evolution in adversary capabilities and attack surface growth, driving the need for continuous security validation.
- Traditional security approaches built on periodic testing and patch cycles can’t keep up with AI-driven threats and advanced AI models that uncover and expose vulnerabilities at scale.
- Organizations need to move toward continuous security validation (CSV) to keep up with modern attack surfaces.
- CSV is a programmatic, ongoing approach to testing and hardening the IT environment that combines automation and human expertise to stay ahead of an ever-evolving threat landscape.
This new paradigm of AI-accelerated threats demands an evolution from maintaining a list of vulnerabilities to be patched to managing actual attack paths and business risks. It is time to move away from periodic, point-in-time testing and toward continuous security validation.
What is the Role of AI in Modern Cyber Threats?
AI has lowered the barriers to entry for adversaries, enabling less experienced actors to operate with the speed, scale and effectiveness of more advanced actors. Understanding how adversaries operate is the first step to building a defense that can keep up with them.
Let’s start with a few notable methods for how adversaries are leveraging AI and the battle that security teams are up against:
Significant Breakout Time Reduction
One of the most alarming trends for security practitioners is the collapse of breakout time – the time it takes an attacker to move laterally from an initial point of entry to other systems on the network. According to CrowdStrike’s 2026 Global Threat Report, average breakout time fell to 29 minutes in 2025, with the fastest observed breakout under 30 seconds.
Autonomous Agency
Attackers have moved beyond using AI as a productivity aid to integrate large language models (LLMs) directly into adaptive malware via APIs. PROMPTFLUX and LAMEHUG are two early examples: PROMPTFLUX periodically rewrites and obfuscates its VBScript code to evade detection, while LAMEHUG queries an LLM at runtime to generate Windows commands for reconnaissance and data exfiltration, with no malicious commands hardcoded in advance.
Identity-first Offense
In 2025, 82% of attacks were carried out without malware. Now, adversaries increasingly use AI to refine credential theft and simply log in rather than ‘break in’.
Adversaries have been moving toward deeper AI integration across the attack lifecycle for some time. Across the industry, threat intelligence data shows that AI has moved from a supplemental tool to a core part of how adversaries plan and execute attacks. AI is now embedded in tool development, infrastructure setup, and access preparation before an attack even begins. But preparation is only part of the picture.
In at least one case study, 80-90% of operator actions during an active attack were autonomous tool calls. This highlights a shift: adversaries are not only leveraging AI to prepare for an attack, but using it to run the majority of the operation, with humans intervening only at strategic decision points. This transition to autonomous operators with human oversight has a direct implication for defenders. Security programs are no longer pacing their defenses to the speed of human adversaries; they are racing against machine-speed threats.
The Mythos Moment
On April 7, 2026, Anthropic announced Claude Mythos Preview, a limited-access AI model capable of autonomously finding and exploiting thousands of zero-day vulnerabilities across major operating systems and browsers.
Notable claims include:
- Vulnerability discovery: uncovering decades-old flaws in widely used operating systems and web browsers.
- Speed of discovery: identifying and validating vulnerabilities at a pace that traditional security tooling and processes weren’t designed to match.
- Exploit chaining: constructing multi-step attack chains that combine individual weaknesses.
Mythos represents a notable evolution in what AI models can do, but it isn’t operating in isolation. Models and research frameworks with comparable offensive security capabilities have been emerging across the industry. As AI accelerates both the discovery and weaponization of vulnerabilities, the traditional frameworks organizations rely on to manage risk — severity scoring, patch prioritization windows, predicting where an attacker will come from — are no longer suitable for the pace and scale of AI-driven attacks.
According to Recorded Future, total disclosed vulnerabilities rose from roughly 21,000 in 2021 to nearly 50,000 in 2025. And as Google’s Threat Intelligence Group has noted, the traditional window between disclosure and active exploitation has largely vanished. There are increasingly more findings, less time to act, and less confidence in which findings actually matter. That is the compounding challenge AI-accelerated vulnerability research creates for defenders.
The question on the minds of many security leaders is how to prepare for what comes next. But this isn’t a new question. The pressure to move beyond traditional security approaches has been building for years. Mythos didn’t create that urgency; it confirmed it. What AI has done is remove any remaining doubt that the shift to a more proactive, continuous approach to security is no longer optional.
Why are Traditional Security Approaches Breaking Under AI?
AI is driving an explosion of known vulnerabilities that will grow backlogs by orders of magnitude. At the same time, AI allows adversaries to turn newly released enterprise patches into working exploits faster than organizations can deploy them. The National Institute of Standards and Technology (NIST) recently signaled a structural challenge with the sheer volume of vulnerability disclosures, announcing that its National Vulnerability Database (NVD) can no longer enrich every submission with severity scores and metadata. According to NIST, “CVE submissions surged 263% between 2020 and 2025, and the first quarter of 2026 is already trending 33% higher than the previous year.”
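One practical consequence of the NVD enrichment gap is that defenders can’t assume every new CVE will arrive with a severity score. A minimal sketch of a local triage fallback is shown below; the record fields, weights, and threshold are illustrative assumptions, not any feed’s actual schema:

```python
# Hypothetical triage fallback for CVE records that arrive without
# upstream severity enrichment. Field names and weights are invented
# for illustration only.

def triage_score(cve: dict, exposed_products: set) -> float:
    """Rank a CVE for review even when its CVSS score is missing.

    Falls back to local signals (known exploitation, asset exposure)
    instead of waiting for upstream enrichment.
    """
    cvss = cve.get("cvss")            # may be None for unenriched records
    base = cvss if cvss is not None else 5.0   # neutral prior when unscored
    if cve.get("known_exploited"):    # e.g. listed in an exploited-CVE catalog
        base += 3.0
    if cve.get("product") in exposed_products: # product is internet-facing
        base += 2.0
    return min(base, 10.0)

backlog = [
    {"id": "CVE-2026-0001", "cvss": 9.8, "product": "edge-vpn"},
    {"id": "CVE-2026-0002", "cvss": None, "product": "edge-vpn",
     "known_exploited": True},
    {"id": "CVE-2026-0003", "cvss": 4.3, "product": "internal-wiki"},
]
exposed = {"edge-vpn"}

# The unscored-but-exploited CVE ranks alongside the critical one,
# rather than sitting at the bottom of the backlog awaiting enrichment.
ranked = sorted(backlog, key=lambda c: triage_score(c, exposed), reverse=True)
```

The point of the sketch is the fallback behavior: an unenriched CVE with local evidence of exploitation should not wait behind fully scored, lower-risk findings.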
This is one clear signal that the traditional patch race is no longer a viable strategy. The shift to threat-informed continuous security validation is not a future consideration — it’s a present requirement.
Building a Continuous Security Validation Program
Within the past year, GuidePoint Security has seen a meaningful shift in how organizations approach security testing. The traditional model of testing your environment once or twice a year is no longer enough when new threats emerge daily and the attack surface is constantly evolving. Today, organizations are moving toward continuous security validation. This is an ongoing, programmatic approach to testing and hardening their environment against real-world threats, rather than a point-in-time snapshot.
What does that look like in practice? It means building a program that combines the right mix of technology with human-in-the-loop validation to continuously test and improve security posture. Some of the key components we see organizations leveraging include:
- Automated Penetration Testing Platforms: enabling consistent, scalable testing of your environment against vulnerabilities, misconfigurations, and attack paths without waiting for an annual engagement.
- Breach and Attack Simulation (BAS): continuously simulating adversary tactics, techniques, and procedures (TTPs) to validate whether your controls are actually working the way you think they are.
- End User Awareness Training and Remote Social Engineering: your people are part of your attack surface. Continuous phishing simulations and social engineering exercises ensure your workforce remains a line of defense, not a liability.
- Human-Led Vulnerability Pipelines: an emerging approach where security professionals build and operate workflows that leverage autonomous AI agents to discover vulnerabilities at a speed and scale previously out of reach. Unlike passive scanning tools, these pipelines can reason, plan, and operate in real time — continuously testing environments from web applications to firmware and network infrastructure, while humans maintain strategic control, oversight, and final validation of findings.
- Ongoing Access and Permissions Validation: regularly testing and tightening identity controls, privilege escalation paths, and access policies to reduce the blast radius of a potential compromise.
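The components above share a common shape: automated testing generates findings continuously, and the riskiest ones are routed to humans for validation. A minimal sketch of one iteration of that loop follows; the field names, criticality weights, and threshold are illustrative assumptions, not any product’s schema:

```python
# Sketch of one pass of a continuous-validation loop: combine findings
# from automated testing/BAS with asset criticality, then queue the
# riskiest for human review. All names and weights are hypothetical.

from dataclasses import dataclass

@dataclass
class Finding:
    technique: str         # e.g. a MITRE ATT&CK technique ID
    asset: str
    control_blocked: bool  # did existing controls stop the simulated TTP?

ASSET_CRITICALITY = {"domain-controller": 3, "payroll-db": 3, "test-vm": 1}
REVIEW_THRESHOLD = 3       # findings at or above this go to a human

def risk(f: Finding) -> int:
    """Unblocked techniques against critical assets score highest;
    blocked techniques score zero (the control worked)."""
    base = ASSET_CRITICALITY.get(f.asset, 1)
    return base * (2 if not f.control_blocked else 0)

def review_queue(findings: list) -> list:
    return [f for f in findings if risk(f) >= REVIEW_THRESHOLD]

findings = [
    Finding("T1003", "domain-controller", control_blocked=False),  # risk 6
    Finding("T1566", "test-vm", control_blocked=False),            # risk 2
    Finding("T1021", "payroll-db", control_blocked=True),          # risk 0
]
queue = review_queue(findings)
```

In practice the scoring would be far richer (attack-path context, exploitability, business impact), but the design choice carries over: automation supplies coverage and consistency, while the threshold keeps human judgment focused on the findings that matter.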
What is the Advantage of Continuous Security Validation?
The advantage of this approach is that it allows your organization to test against new threats as they emerge, not threats from six months ago. When something like Mythos drops, or a new CVE is actively being exploited in the wild, a continuous validation program gives you the ability to quickly assess your exposure and act on it.
Automation is a critical enabler here. It provides the speed and scale needed to keep pace with today’s threat environment. But automation alone isn’t enough. Human expertise remains essential for understanding context, chaining complex attack paths, validating nuanced findings, and translating technical results into business risk. The most effective programs combine both: automation for coverage and consistency, and experienced practitioners for depth and judgment.
Regardless of the approach you take, the underlying principle is the same: security testing should be continuous, threat-informed, and tied to your most critical assets and business risks.
The Bottom Line on AI and Continuous Security Validation
Sole reliance on point-in-time penetration testing is no longer sufficient in the face of AI-driven threats and rapidly evolving attack surfaces. Among the organizations that have adopted a continuous validation approach to security, we have seen them achieve:
- Small, deliberate adjustments to their program rather than scrambling when news like Mythos drops.
- Clarity for security and IT teams about where to focus their time and efforts.
- Decreased friction between IT and Security teams due to evidence-based results that show true risk.
- Effective messaging to executives and the board about cyber risk.
The goal is to build that continuous validation muscle now so that as the threat landscape shifts – and it will – your organization is already prepared to handle whatever comes next. If you haven’t yet built a continuous security validation program, or you want to ensure your existing program is structured for what’s coming, download our eBook, Modern Penetration Testing: The Evolution to Continuous Security Validation. Inside, you’ll learn questions to ask prospective testing partners, red flags to watch for, and additional tips, information and considerations that can help narrow the search for the right partner.
Conor Murphy
Senior Attack Simulation Engineer,
GuidePoint Security
Conor Murphy is a Senior Attack Simulation Engineer at GuidePoint Security, specializing in offensive security and continuous security validation. He works directly with organizations to stress-test their defenses, validate attack paths, and build programs that hold up against today's threat landscape. His background spans cybersecurity operations, insider threat investigations, and program development within the financial industry, giving him a unique perspective on both the offensive and defensive sides of cyber risk.
Conor holds a Bachelor of Science and Master of Science in Administration of Justice and Homeland Security, along with a Graduate Certificate in Cybersecurity and Intelligence.