The Vulnerability Management Industrial Complex

Twenty years building an industry around the exercise. AI just made the bill come due.

A number that shouldn't exist

Vulnerability management is the most mature, most tooled, and most regulated discipline in security. It is also the only one that has gotten measurably worse at its job over the last five years.

In 2020, the average time to remediate a software vulnerability was 171 days. In 2025 it was 252 days — a 47% increase in five years, and 327% longer than the figure Veracode reported in its first State of Software Security report fifteen years ago.

That is not the trajectory of a discipline improving. It is the trajectory of a discipline that has confused its job with its activities.

Over the same five-year window, the industry rolled out — and security teams spent against — Risk-Based Vulnerability Management. EPSS. Exposure Assessment Platforms. CTEM frameworks. AI-assisted triage. New scanners. New normalization layers. New ITSM integrations. New dashboards. New analyst categories. And the headline metric got worse.

Meanwhile, Verizon's Data Breach Investigations Report — working from 22,052 cyber incidents and 12,195 confirmed breaches investigated last year — observed that vulnerability exploitation as an initial-access vector tripled in two years. Mandiant, working from over 450,000 hours of incident-response engagements, found that three of the four most-exploited vulnerabilities of 2024 were zero-days — attackers weaponized them before patches existed.

We are spending more, working harder, deploying more sophisticated machinery, and losing ground. That is the puzzle. The puzzle has a name. It is the vulnerability management industrial complex — and AI is about to force every security leader to confront what it actually is.

A house of cards built on the wrong foundation

The original question was simple: Don't let us get owned by an exploit against a known weakness.

The industry's answer to that question evolved, over thirty years, into a stack:

  • Scanners to discover findings.
  • CVSS to score findings.
  • Tickets to route findings.
  • Spreadsheets to track tickets.
  • RBVM to re-rank findings with exploit and threat intel.
  • SLA dashboards to report on tickets closed on time.
  • CTEM, EAP, ASPM, CAASM as new umbrellas over the same stack.

Each layer was a fix for the layer below. None of them answered the original question: is this asset actually exploitable, and how reachable is it from somewhere an attacker can stand?

Look at how independent research describes the result.

The Cyentia Institute, analyzing 3.6 billion vulnerability observations across roughly 300 organizations, found that a typical organization — regardless of size — has capacity to remediate about one in every ten vulnerabilities per month. Half of organizations are falling behind on high-risk vulnerabilities. Only about 17% are keeping pace with new ones.

FIRST — the independent body that maintains EPSS — observed in its own model documentation that only 2.7% of all CVEs with CVSS 3.x scores were exploited in a given 30-day window. The strategy of "remediate everything CVSS 7 and above" required organizations to do 57.4% of all the possible work for a 3.96% efficiency rate. Roughly 96% of effort, by that math, was spent on vulnerabilities nobody was trying to exploit.

Verizon found that for the edge devices and VPNs that drove the spike in vulnerability-based intrusions, only 54% of those vulnerabilities were fully remediated within a year, with a median patch time of 32 days against a threat landscape where weaponization is measured in days.

Gartner notes that despite seven years of CTEM advocacy, fewer than 25% of organizations have adopted any modern attack-surface technology, and far fewer have operationalized exposure management as a program.

The activity has been intense. The outcome has been static.

This is what makes it an industrial complex, not just a bad practice. Every layer of the stack has constituents whose careers, KPIs, vendor contracts, audit findings, and compliance certifications depend on its continued existence. PCI DSS, SOC 2, HIPAA, ISO 27001, FedRAMP, and NIST SP 800-53 all specify the exercise — scan cadence, patch within N days, evidence of triage — not the outcome. An organization that decides "we'll stop measuring tickets-closed-within-SLA and start measuring paths-to-crown-jewel-closed" doesn't just have to change its tooling. It has to renegotiate every external attestation written in the language of the old model.

So most organizations don't. They buy another layer. The dashboard gets prettier. The number gets worse.

RBVM was a brilliant answer to the wrong question

In the mid-2010s the industry collectively diagnosed the problem as "too many findings to triage." Risk-Based Vulnerability Management emerged as the cure: instead of remediating everything, use threat intelligence and exploit prediction to prioritize what matters most.

The diagnosis was wrong.

The problem was never that we couldn't prioritize the list. The problem was that the list itself was a model of the wrong thing. A list of CVEs on hosts tells you what software is on what machine. It does not tell you whether that machine is reachable, whether the vulnerability is exploitable in this configuration, whether compensating controls neutralize it, or whether the asset is connected — by a chain of identities, permissions, and trust relationships — to anything an attacker would actually care about.

The FIRST EPSS data is the clearest proof. CVSS-based prioritization captures 82% of exploited vulnerabilities at 3.96% efficiency. EPSS does better by being smarter about ranking — but it is still ranking the wrong unit of work. A perfectly prioritized list of findings is still a list of findings.

The smarter the prioritization got, the more comfortable the industry became with the framing — that the work was about ordering findings, not about engineering systems for non-exploitability.

AI didn't break vulnerability management — it revealed it was already broken

You've read enough about Mythos in the last month to last a career. The short version: Anthropic's frontier security model autonomously discovered thousands of unknown vulnerabilities during roughly a month of internal testing, including kernel-level bugs that had survived 17 and 27 years of human review, fuzzing, and audit. State actors will reach equivalent capability. The only question is when.

The temptation is to read this as a step-change in attacker capability that defenders now need to "respond to." That is the wrong read. The right read is that AI removes the only thing that ever made the existing model survivable: the bottleneck on the attacker side.

For decades, the math worked roughly like this. There are millions of latent vulnerabilities in deployed software, but it takes a competent researcher weeks or months to find and weaponize one. So the attacker side of the ledger produced, generously, a few thousand new credible threats per year. The defender side, with industrial-scale machinery, could process tens of thousands of findings per year. The math was uncomfortable but not impossible.

AI inverts that ledger. The Cloud Security Alliance's AI Vulnerability Storm paper — lead-authored by Gadi Evron, Rich Mogull and Robert T. Lee, with contributions from Bruce Schneier, Jen Easterly, Chris Inglis, Rob Joyce, Heather Adkins, Katie Moussouris, Phil Venables, Sounil Yu, James Lyne, John N. Stewart and many other senior security leaders — states flatly that "the window between discovery and weaponization has collapsed into hours," that "the CVE system may not scale" to AI-generated discovery rates, and that "we cannot outwork machine-speed threats."

The supporting data is unambiguous. The Zero Day Clock dataset the paper cites — 3,533 CVE-to-exploit pairs drawn from CISA KEV, VulnCheck KEV and XDB — tracks mean time-to-exploit falling from 2.3 years in 2018 to roughly 9 hours in 2026. Mandiant separately observed more than a dozen threat groups exploiting CVE-2024-3400 (Palo Alto PAN-OS) within two weeks of its disclosure. Patch-Tuesday cadence was never going to compete with this. RBVM prioritization wasn't going to compete with this. The exercise model wasn't going to compete with this.

The math no longer works. That isn't a future problem. That is a current-state observation.

VulnOps is the wrong analogue

The dominant industry response to Mythos crystallized in the CSA paper: build VulnOps as the vulnerability-side analogue to DevOps. Stand up a permanent operational capability. Automate triage. Match attacker speed with defender speed. Process findings faster.

It is a sophisticated answer, written by serious people. It is also the wrong analogue.

DevOps did not succeed because it made operations faster. DevOps succeeded because it collapsed two functions — development and operations — into a single shared outcome: working software in production. It changed what success meant. Ops stopped being scored on uptime-of-the-thing-Dev-threw-over-the-wall. Dev stopped being scored on features-shipped-regardless-of-whether-they-work. Both teams started being scored on the same outcome.

VulnOps, as currently framed, doesn't do that. It accelerates the existing exercise. It still scores success in findings-processed, tickets-closed, MTTR-on-the-ticket-system. It still treats the unit of work as "a CVE on a host," and the unit of success as "that CVE was acknowledged within N days."

A faster exercise is still the wrong exercise.

The right analogue isn't VulnOps. It is something closer to exposure engineering — a discipline that designs and operates systems with non-exploitability as a first-class property. The unit of work is not a finding; it is an asset's exposure path. The unit of success is not a closed ticket; it is a closed path. The team isn't a vulnerability-triage SOC adjacent to operations. It is an integrated function in which security, infrastructure, and application engineering own the same metric: no asset that matters has a viable path from an attacker-reachable position.

Exposure engineering requires three things the current stack doesn't have:

  1. An asset model where relationships are first-class. A finding-list can't tell you a path. A graph can.
  2. A risk model that factors in compensating controls and reachability, not just CVSS and exploit prediction.
  3. A remediation model that closes paths, not tickets, with validated closure as a built-in step.
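
To make the first requirement concrete, here is a minimal sketch of path-finding over a toy asset graph. The asset names and edges are hypothetical, and a production graph would also carry identities, permissions, and compensating controls as properties on the edges, not just network adjacency:

```python
from collections import deque

# Toy asset graph: nodes are assets, directed edges are "can reach / can access"
# relationships. All names here are invented for illustration.
edges = {
    "internet": ["web-lb"],
    "web-lb": ["app-server"],
    "app-server": ["billing-db", "cache"],
    "jump-host": ["billing-db"],
}

def exposure_paths(graph, source, crown_jewel):
    """Return every simple path from an attacker-reachable source to a crown jewel."""
    paths, queue = [], deque([[source]])
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == crown_jewel:
            paths.append(path)
            continue
        for nxt in graph.get(node, []):
            if nxt not in path:  # skip cycles
                queue.append(path + [nxt])
    return paths

# A closed ticket on "cache" changes nothing here; severing the
# app-server -> billing-db edge closes the only path that matters.
print(exposure_paths(edges, "internet", "billing-db"))
# -> [['internet', 'web-lb', 'app-server', 'billing-db']]
```

A flat finding-list over these same assets could rank "cache" above "billing-db" all day without ever noticing that only one edge matters.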

This isn't a rebrand. It changes what gets measured, who owns the work, what audit language has to evolve to, and which tools are even relevant.

What this looks like in practice

For security leaders trying to operationalize this without throwing out three quarters of their stack, the practical shifts are:

Move the unit of work from "finding" to "exposure path." A finding tells you what's installed. An exposure path tells you whether anyone can actually get there. Most of the noise in the current model comes from treating the first as if it were the second.

Make business context structural, not editorial. "Crown jewel" can't be a tag a tired analyst applies in a quarterly review. It has to be a property of the asset graph that the risk model reads automatically. If your prioritization rests on whether a human remembered to label the right database, you're not doing exposure management; you're doing the exercise.

Route remediation by ownership, not by category. The longstanding mistake of vuln management was dumping prioritized lists on "the patching team." Different exposures have different owners — the platform team, the SaaS admin, the IAM lead, the developer of a specific microservice. Route the case, with the fix, to the actual owner, in the system they already work in. Validate the closure.

Include what you can't patch. Gartner's CTEM framing — and the broader analyst consensus around exposure management — recognizes that a meaningful share of organizational risk now lives in SaaS configurations, identity sprawl, excessive permissions, and architectural drift, none of which have CVE numbers. A program that can't reason about those exposures isn't a security program; it's a CVE program.

Replace activity KPIs with outcome KPIs. "Time to acknowledge a finding," "tickets within SLA," and "scan coverage" measure the exercise. "Mean time to close a path from an internet-exposed asset to a crown jewel," "number of crown jewels with zero reachable paths," and "percentage of identity-graph closure" measure the outcome. The first set is what compliance auditors will keep asking for. The second is what tells you whether you are actually safer this quarter than last.
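
As a rough illustration of the shift, an outcome KPI such as mean time to close an exposure path can be computed directly from path-closure records, independent of any ticketing system. The records below are invented for the sketch:

```python
from datetime import datetime

# Hypothetical path-closure records: (path opened, path validated closed).
closures = [
    ("2026-01-04", "2026-01-09"),
    ("2026-01-11", "2026-01-13"),
    ("2026-02-02", "2026-02-12"),
]

def mean_days_to_close(records):
    """Mean days from path discovery to validated closure."""
    fmt = "%Y-%m-%d"
    deltas = [
        (datetime.strptime(closed, fmt) - datetime.strptime(opened, fmt)).days
        for opened, closed in records
    ]
    return sum(deltas) / len(deltas)

print(mean_days_to_close(closures))  # -> about 5.67 days
```

The point is what goes into the denominator: validated path closures, not tickets marked done.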

Why now — and where JupiterOne fits

JupiterOne has spent years building toward this. The cyber asset graph wasn't a product idea; it was a bet that exposure is a relationship problem, not a list problem. JupiterOne Unified Vulnerability Management (UVM) is what the graph was built for.

UVM is our first generally available implementation of exposure engineering principles on top of the J1 graph:

  • Findings collapse into routed remediation cases. By grouping by common-fix (CPE-based) and routing by ownership rather than category, customers consolidate tens of thousands of correlated findings into a few hundred actionable cases, each carrying the path that makes it critical and the fix that closes it.
  • "Critical" is a defensible business statement, not a CVSS number. UVM's Blast Radius Risk Factor uses the graph to detect when a vulnerable asset sits on a path to a customer-defined crown jewel. The output isn't "this CVE scored 9.8." It is "this internet-exposed instance has read access to your production billing database, here is the path, here is the fix." That is something a CISO can defend to a board.
  • Bidirectional ITSM closure. Cases route into the ticketing system the owner already works in, and closure validates back into the graph — the path is re-checked, not just the ticket marked done.
  • Application-level coverage extends the model up the stack. Software CPE resolution, transitive dependency handling, and a unified application entity bring the same path-based model into AppSec, where a substantial portion of effective risk lives in transitive dependencies that today's tooling can't see clearly.
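
The common-fix grouping idea can be sketched in a few lines. The findings, hosts, and CPE strings below are hypothetical, and real grouping logic is considerably richer than keying on the CPE alone:

```python
from collections import defaultdict

# Hypothetical findings: (host, cpe, cve). Many findings often share one fix:
# upgrading the package identified by the CPE.
findings = [
    ("web-01", "cpe:/a:openssl:openssl:1.1.1", "CVE-2023-0001"),
    ("web-02", "cpe:/a:openssl:openssl:1.1.1", "CVE-2023-0001"),
    ("web-01", "cpe:/a:openssl:openssl:1.1.1", "CVE-2023-0002"),
    ("db-01", "cpe:/a:postgresql:postgresql:13.2", "CVE-2023-0003"),
]

def cases_by_common_fix(rows):
    """Collapse findings into one remediation case per common fix (CPE)."""
    cases = defaultdict(lambda: {"hosts": set(), "cves": set()})
    for host, cpe, cve in rows:
        cases[cpe]["hosts"].add(host)
        cases[cpe]["cves"].add(cve)
    return dict(cases)

cases = cases_by_common_fix(findings)
print(len(findings), "findings ->", len(cases), "cases")  # -> 4 findings -> 2 cases
```

Scale the same ratio up and tens of thousands of correlated findings collapse into a few hundred cases, each of which can then be routed to the owner of the fix.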

This isn't CTEM as a new tab on a scanner. It's the graph doing the work the industry has spent twenty years trying to fake with denormalized lists.

The "why now" is straightforward. Mythos is the forcing function for every security leader who has spent the last few years suspecting their VM program is a treadmill, not progress. The graph architecture exists. The remediation routing exists. The roadmap up the stack to applications and compensating controls is funded and scheduled. Customers don't need to bet on a vision. They need a way to migrate off a treadmill that no longer scales.

What to do this quarter

Three things, in order of how much courage they require:

1. Audit your KPIs and find the ratio. How many of the metrics on your VM dashboard measure activity (scan coverage, tickets-within-SLA, time-to-acknowledge, scanner uptime) versus outcome (paths closed, crown jewels with zero reachable paths, internet-exposed assets eliminated)? If the ratio is heavier than 4:1 toward activity, your program is optimized for the exercise.
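
A quick way to run this audit is to tag every metric on the dashboard and compute the ratio. The metric names below are examples, not a canonical taxonomy:

```python
# Hypothetical dashboard inventory: each metric tagged as measuring
# activity (the exercise) or outcome (actual risk reduction).
metrics = {
    "scan_coverage": "activity",
    "tickets_within_sla": "activity",
    "time_to_acknowledge": "activity",
    "scanner_uptime": "activity",
    "paths_closed_this_quarter": "outcome",
}

activity = sum(1 for v in metrics.values() if v == "activity")
outcome = sum(1 for v in metrics.values() if v == "outcome")

# A ratio of 4:1 or heavier toward activity means the program
# is optimized for the exercise, not the outcome.
print(f"activity:outcome = {activity}:{outcome}")  # -> activity:outcome = 4:1
```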

2. Audit your compliance language. Which of your audit-driven controls specifies an activity vs. an outcome? You won't be able to change them all tomorrow, but you can start translating internally — and in the next renewal cycle you can push your auditors and assessors toward outcome-based attestation language. The frameworks will follow CISOs who already speak this way, not the other way around.

3. Pilot the path view on one critical application. Pick a service whose compromise would actually matter to your business. Map its exposure paths. Compare the resulting "critical" list to whatever your scanners currently flag as critical for that application. Note the overlap, note the gap. The gap is your program's blind spot, and it is almost certainly larger than you expect.
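
The comparison in step three is plain set arithmetic over the two "critical" lists. The CVE identifiers below are placeholders:

```python
# Placeholder identifiers for one application's two "critical" lists.
scanner_criticals = {"CVE-0001", "CVE-0002", "CVE-0003", "CVE-0004"}
path_criticals = {"CVE-0003", "CVE-0005"}  # criticals sitting on a real exposure path

overlap = scanner_criticals & path_criticals     # both models agree
blind_spot = path_criticals - scanner_criticals  # reachable, but never flagged
noise = scanner_criticals - path_criticals       # flagged, but not on any path

print(sorted(blind_spot))  # -> ['CVE-0005']
```

The blind-spot set is the program's gap; the noise set is what it has been paying for.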

The next twenty-four months

The industry is about to sort itself into two groups.

The first will use AI to operate the existing exercise faster — better triage, AI-summarized tickets, automated ITSM routing, faster RBVM. They will run out of clock. The math no longer works at the volumes AI-assisted discovery produces, and no amount of speed on the wrong unit of work changes that.

The second will use the forcing function to rebuild around the question we should have been answering all along: is this asset actually exploitable, and if so, what closes the path? These will be smaller programs, with fewer dashboards, scoring fewer activities and more outcomes. They will look unfamiliar to their auditors at first. They will start moving the only metric that matters.

The vulnerability management industrial complex was a thirty-year detour. We're back at the question we started with. JupiterOne built UVM because the original question was always the right one — and now there is a way to answer it.

Chad Richts

Building simple cybersecurity

