Caroline Wong and I recently had a call to discuss something that's been bothering me for years ... how do we assess financial risk before a data breach happens? How do you measure the cost of not fixing something? I've been looking for a way to create a risk management metric and attach a dollar value to that assessment, with a focus on Personally Identifiable Information (PII). Caroline made a different argument: determine the value of your most critical and vulnerable assets, your systems, instead of assessing risk through PII.
For years I've been concerned about protecting data. But "what about the critical applications that manage that data?" she asked. "What is the business value of the technology that runs your business?" Every dollar spent on security is a dollar not spent on marketing, engineering, or product development. That's where "difficulties" arise: there is a misalignment between business outcomes and the need to justify the money spent on security.
From this point of view, companies will always allocate a limited budget to security for risk management. "People don't even ask questions about the size of the risk of unaddressed vulnerabilities. For many, it's perceived as a luxury to consider testing all applications and fixing all found vulnerabilities," Caroline explained.
There is a reason the question of evaluating financial risk goes unanswered. It's almost impossible to pre-determine the cost of a data breach.
What is the Point of Cybersecurity?
Security matters when you want to protect something you value. In that sense, applications are the customer experiences that generate the transactions, making it possible for the business to make money and, hopefully, show a profit.
It's pragmatic for organizations to ask, "Where are the areas of highest potential risk or damage to our business? Is there greater business value in protecting the data or in protecting the applications that collect and store the data?"
We must determine the financial risk of both possibilities.
A Simple Equation... that Doesn't Work
In the IBM Cost of a Data Breach Report, the average cost of a breach is $150 per record. Here's how the math, in theory, plays out. These numbers are directionally accurate, but too broad to be of value.
- 1,000 customers x $150 per record = $150,000 ... worth the risk for a large company
- 500,000,000 records (LinkedIn, Facebook) x $150 per record = $75 billion ... not worth the risk
Using this back-of-the-napkin approach, the spread between the two extremes makes it difficult to determine the true financial risk. It's an easy formula, but it doesn't provide enough insight to make refined business decisions.
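The back-of-the-napkin formula above can be sketched in a few lines. The $150-per-record figure comes from the IBM report cited in the article; everything else is illustrative.

```python
# Naive breach-cost estimate: records exposed x average cost per record.
# The $150 average is from the IBM Cost of a Data Breach Report; real
# breach costs vary widely by industry, geography, and company size.

COST_PER_RECORD = 150  # USD, IBM report average


def naive_breach_cost(records: int) -> int:
    """Estimate total breach cost as records * average cost per record."""
    return records * COST_PER_RECORD


for records in (1_000, 500_000_000):
    print(f"{records:>11,} records -> ${naive_breach_cost(records):,}")
```

Running this makes the article's point visible: the same formula spans a $150,000 risk and a $75 billion one, a spread far too wide to guide a real budget decision.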
Another Way to Evaluate the Cost of Risk
Another way to measure business risk is to evaluate the cost of application downtime. This is relatively easy because downtime equals potential dollars lost.
There is a way to determine at least a threshold of value. Take Colonial Pipeline as an example. Colonial Pipeline determined that the value of their technology was greater than the $5M ransom they paid. This wasn't an arbitrary number. They evaluated the financial value of their systems, the amount they were losing, and the potential aggregate losses going forward, and determined it was worth $5M to "fix" the problem. The calculation for them was relatively easy: how much will it cost us to stop the bleeding?
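That "stop the bleeding" calculation is essentially a break-even test: at what point do cumulative downtime losses exceed the ransom? A minimal sketch, using hypothetical figures (Colonial's actual daily losses are not public):

```python
# Threshold decision sketch: pay a ransom, or keep absorbing downtime
# losses? All dollar figures below are hypothetical.


def days_until_ransom_breaks_even(ransom: float, loss_per_day: float) -> float:
    """Downtime days at which cumulative losses exceed the ransom."""
    return ransom / loss_per_day


# Hypothetical: a $5M ransom vs. $2M of lost revenue per day of downtime.
print(days_until_ransom_breaks_even(5_000_000, 2_000_000))  # 2.5 days
```

If the outage is expected to last longer than the break-even point, paying becomes the cheaper option on paper, which is exactly the kind of threshold Colonial appears to have reached within 24 hours.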
Check out these two contradictory headlines:
- Forbes, May 12, 2021 - Colonial Pipeline Reportedly Won't Pay Hacker Ransom.
- New York Times, May 13, 2021 - Colonial Pipeline Paid Roughly $5 Million in Ransom to Hackers.
In the span of 24 hours, Colonial Pipeline was able to determine the threshold value they were willing to pay to get their systems back online. This had nothing to do with personal data, it was the business that was at stake. They chose to protect the business.
Evaluate Risk to your Business Systems
The risk to your business systems continues to exist whether you like it or not. The risk continues to exist whether you measure it or not. The risk continues to exist whether you address it or not. When it comes to your company's systems, can you determine a threshold value you would pay to get your business systems back online?
What application is worth securing for the application's sake? The answer: whichever one is critical to the business's financial survival. At Cobalt, protecting the Pentest Platform, the application that contains the vulnerability data of Cobalt's customers, is critical for Cobalt to remain in business. Protecting their platform, their reason for being in business, is far more valuable than protecting an application used to gather registrants for a virtual wine tasting.
Risk Management Objectives
This is where Risk Management Objectives (RMOs) provide value. An RMO defines a common goal, or common ground, between the cybersecurity and risk management leader and the company's business or technology executives. Common ground can be as simple as "breaches are bad / security is good," or as complex as an agreement among multiple stakeholders on a starting point for prioritizing risk remediation.
Examples of Risk Management Objectives
Reduce the probability that adversaries can cause critical applications to stop functioning. If you're PayPal, an outage that prevents people from sending money to each other is unacceptable. For Zoom, an interruption of video call service would put them out of business. Whatever your critical application is, if it contains a vulnerability which, if exploited, could cause that application to stop functioning, you can place a financial value on it. That's a business-critical exploit, not a data exposure exploit.
Uptime and availability are critical business metrics because income is affected if specific goals aren't met. eBay knows how much money it loses for every second of downtime: buyers can't buy, sellers can't sell, and eBay can't take its cut.
Determine the financial risk of multiple types of attacks, and which will cause the most damage. How do you manage the risk of an unpatchable vulnerability, when attempting to patch it could take your system down? We've had experience with an international bank where the choice came down to: "Are we going to try to fix this with the possibility of bringing the entire system down, or are we going to continue with what we have?" This is the type of decision many organizations have to make.
4 Steps to Evaluate Financial Risk
- Measure value: What is the asset the vulnerabilities were discovered on, and how important is that asset? How much does it affect the value of the business?
- Evaluate the vulnerability: Is the vulnerability being actively exploited in the wild? How long has the vulnerability been known? Threat intelligence modeling can help with this type of evaluation.
- Know your application attack surface: What are the most likely attack scenarios on your applications? Perform manual penetration testing on all critical assets, especially when you release new features.
- Know what you have: What are the risks associated with each of your cyber assets? How do those risks branch through the various dependencies within the system? With a tool like JupiterOne, you get visibility into your cyber asset relationships and gain the knowledge to help you prioritize protecting your most important assets.
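The four steps above can be sketched as a simple scoring model: combine each asset's business value (step 1) with exploit likelihood (step 2) and an exposure factor for attack surface and dependencies (steps 3 and 4), then rank. The asset names and weights here are hypothetical, not a prescribed scale.

```python
# Minimal risk-prioritization sketch of the four steps, under the
# assumption that a multiplicative score is good enough for ranking.
# All asset names and numbers are hypothetical.
from dataclasses import dataclass


@dataclass
class Asset:
    name: str
    business_value: float      # step 1: relative value to the business (0-10)
    exploit_likelihood: float  # step 2: 0.0-1.0, informed by threat intel
    exposure: float            # steps 3-4: attack surface / dependency factor (0-1)

    @property
    def risk_score(self) -> float:
        return self.business_value * self.exploit_likelihood * self.exposure


assets = [
    Asset("payments-api", 10, 0.6, 0.9),
    Asset("marketing-site", 3, 0.8, 0.5),
    Asset("internal-wiki", 2, 0.4, 0.2),
]

# Highest risk first: protect these assets before the rest.
for a in sorted(assets, key=lambda a: a.risk_score, reverse=True):
    print(f"{a.name:15s} risk={a.risk_score:.2f}")
```

Note that the ranking, not the absolute score, is the useful output: it gives the stakeholders from the RMO discussion a shared, explainable ordering to argue about.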
Where to Go: Start from Here
The technology infrastructure that runs our modern lives was never designed to be secure. The "system" is never just one thing. It is multiple legacy solutions strapped together with duct tape.
The starting point is to determine your Risk Management Objectives, and agree upon which exploits would cause the most financial damage to your company. Understanding the dependencies between systems and cyber assets is critical to knowing which are your most critical applications and assets.
The problem is, we can't say that if we do "A," a breach is 80% less likely to happen, or if we do "B," it's 20% more likely to happen. If we could, if it were possible to determine the probability, this would be easy. In lieu of that, a security starting point is to focus on reducing the possibility that adversaries can stop critical applications from functioning.
Resources for this Article
- IBM Cost of Data Breach Report
- Colonial Pipeline Reportedly Won't Pay Hacker Ransom
- Colonial Pipeline Paid Roughly $5 Million in Ransom to Hackers
- Cobalt.io: Pentest as a Service
- JupiterOne (free lifetime account, no credit card required)
About the Authors
Caroline Wong is the Chief Strategy Officer at Cobalt. As CSO, Caroline leads the Security, Community, and People teams at Cobalt. She brings a proven background in communications, cybersecurity, and experience delivering global programs to the role.
Caroline's close and practical information security knowledge stems from her broad experience as a Cigital consultant, a Symantec product manager, and day-to-day leadership roles at eBay and Zynga. She hosts the Humans of InfoSec podcast, teaches cybersecurity courses on LinkedIn Learning and has authored the popular textbook Security Metrics, A Beginner's Guide.
Caroline holds a bachelor's degree in electrical engineering and computer sciences from UC Berkeley and a certificate in finance and accounting from Stanford University Graduate School of Business.
Mark Miller speaks and writes extensively on DevOps and Security, hosting panel discussions on tools and processes within the DevOps Software Supply Chain. He is co-founder of All Day DevOps, the world's largest DevOps conference.
Mark actively participates in the cybersecurity community as producer and co-organizer of tracks at security conferences such as RSA Conference, InfoSec Europe, CD Summit, AppSec USA, and AppSec EU. He is the Senior Storyteller and Senior Director of Community and Content at JupiterOne.
Mark is also Executive Producer of the DevSecOps Podcast Series (400K+ listens) and Executive Editor of the LinkedIn DevOps Group (100K+ members).