Why Cybersecurity Measurement Fails Governance Decisions
a case for measurable cybersecurity
Cybersecurity measurement fails in a fairly predictable and repeatable way, and most of us in the industry have seen it often enough that it now feels routine.
Our organizations invest heavily in security controls, tooling, and programs, maybe not as much as we would like, but the point still holds. We track coverage across our environments, maturity across domains, and activity across business functions. All of this so we can produce regular reports, satisfy audit requirements, and demonstrate alignment (for better or worse) to established frameworks. On paper, our programs appear disciplined and increasingly mature. The important point is this: the failure becomes most visible the moment we and our peers try to use that information to make an actual decision. That is the goal, right?
At that point, the information that we have provided rarely reduces uncertainty in a way that supports real action. Leadership cannot clearly determine how risk has changed, which exposures remain material, or which tradeoffs follow from the data being presented. The discussion shifts toward reassurance rather than resolution, and decisions proceed based on “judgment” rather than “evidence”.
This outcome persists even in organizations with experienced teams, enormous amounts of telemetry, and strong external validation. And yet the same failure shows up, consistently. That consistency rules out isolated causes and points to something structural. The underlying issue sits in measurement design. Hard stop.
Most cybersecurity measurement efforts evolve from what is easy to observe rather than from what leadership needs to decide. Metrics describe activity (most of all), completeness, or progress within the security function, but they do not explicitly model how those observations change the likelihood or impact of outcomes the business cares about. As a result, measurement artifacts accumulate without forming a coherent decision system. Their function becomes presentation rather than decision support. Metric theater.
Measurement has a single purpose in an executive context. It exists solely to reduce uncertainty so that a decision can be made with greater confidence than would otherwise be possible. That is it.
When measurement does not alter a decision, narrow the set of plausible options, or materially change confidence in an outcome, it has failed its purpose regardless of how polished or compliant it appears. That is a problem we needed to fix yesterday.
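To make that concrete, here is a minimal, purely hypothetical sketch of what reducing uncertainty for a decision can look like: an annual-loss estimate for one risk category before and after new evidence narrows the plausible likelihood range, compared against a board's risk appetite. Every number in it, the probability ranges, the impact range, and the tolerance, is an assumption chosen for illustration, not a benchmark.

```python
# Purely illustrative sketch. All figures are hypothetical assumptions.
import random

random.seed(1)

def simulate_annual_loss(p_low, p_high, impact_low, impact_high, trials=100_000):
    """Monte Carlo estimate of annual loss for one risk category.
    Likelihood and impact are expressed as ranges to reflect uncertainty."""
    losses = []
    for _ in range(trials):
        p = random.uniform(p_low, p_high)                  # chance of a material event this year
        impact = random.uniform(impact_low, impact_high)   # loss if it happens, in $M
        losses.append(impact if random.random() < p else 0.0)
    losses.sort()
    mean = sum(losses) / trials
    p90 = losses[int(0.90 * trials)]                       # 90th percentile annual loss
    return mean, p90

# Before measurement: wide uncertainty about likelihood (5% to 30%).
before = simulate_annual_loss(0.05, 0.30, 2, 20)

# After measurement: evidence narrows the plausible likelihood to 5% to 12%.
after = simulate_annual_loss(0.05, 0.12, 2, 20)

tolerance = 1.5  # hypothetical appetite: expected annual loss under $1.5M
for label, (mean, p90) in [("before", before), ("after", after)]:
    verdict = "within appetite" if mean < tolerance else "exceeds appetite"
    print(f"{label}: expected loss ${mean:.2f}M, 90th pct ${p90:.2f}M -> {verdict}")
```

In this toy example the expected loss sits above the appetite threshold only while the likelihood is poorly bounded; once the measurement narrows it, the estimate drops inside appetite and the accept-or-transfer choice becomes defensible. That movement, not the dashboard it lives on, is what the decision maker needs.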
the decision failure
Let’s consider a routine governance decision that most boards now face:
The board must determine whether to accept, reduce, transfer, or avoid a specific category of cyber risk. Most of us have been through this discussion many times. It influences our capital allocation, our insurance coverage, our contractual commitments, and even our operational constraints. In preparation, the organization points to sustained investment in cybersecurity. Spending has increased year over year (yay!), and the security program reports improved maturity (maybe), broader control coverage (not likely), faster response times (probably), and fewer audit findings (I'll leave this one alone). From an operational standpoint, our program appears to be improving.
During the discussion, a director asks a straightforward question that aligns directly with the decision at hand:
How did this year’s security investments change our exposure to a material cyber event?
I respond with information that is accurate and defensible.
New controls were implemented.
Known gaps were closed.
Programs matured.
Metrics improved across several dimensions.
Each statement reflects real work and real progress by the entire team. However, my response does not support the decision the board is trying to make, and that bothers me every time I see it happen, including when I am the one presenting. Sigh.
It does not quantify how our uncertainty changed. It does not describe whether the likelihood or impact of relevant loss events moved in a meaningful way. It does not distinguish which investments mattered more than others. Most importantly, it does not support a clear choice among accepting, reducing, transferring, or avoiding the risk under discussion.
At this point, the board still has to act. So now what?
When measurement does not support the decision, boards follow a very predictable path. I know you have seen this. They defer the decision pending additional analysis. They rely on qualitative judgment and professional intuition. They substitute external opinion from consultants, insurers, or auditors (yes, the odd notion that external input carries more weight than internal input; I don't understand that either).
Each of these responses allows governance to proceed, but none of them reflects a measurement system performing its intended function: a properly functioning measurement system would have reduced uncertainty enough to make one option more defensible than the others. The absence of that outcome is the failure being examined here.
what measurement actually requires
Measurement has never required complete information (I admit I have made this same mistake many times). It requires enough information to reduce uncertainty for a defined decision. Finance, operations, and risk functions regularly rely on partial data that holds up statistically and supports real decisions.
In our cybersecurity executive context, measurement earns its value only when it changes how a decision is evaluated. The question should never be whether a metric is precise in isolation. The question is whether it meaningfully narrows the range of plausible outcomes the decision makers are weighing.
Other disciplines internalized this principle long ago. A few cases in point:
Finance measures variance and exposure because capital allocation decisions depend on understanding the downside, volatility, and sensitivity. Operations obviously measures throughput, capacity, and operational constraints because those measures directly inform tradeoffs between cost, speed, and reliability. Our good friends in Safety measure near-miss incident rates and leading indicators because regulatory thresholds and liability decisions depend on them.
In each case, measurement exists within a clearly defined decision context. Metrics are selected because they influence a choice that must be made, not because they are easy to collect or even widely accepted.
Unfortunately, cybersecurity measurement rarely follows this pattern.
Most cybersecurity metrics describe internal activity within the security function rather than external impact on the organization. I encourage you to prove me wrong. They report counts, percentages, maturity scores, and coverage levels. These measures provide visibility into effort and progress, but they do not specify how observed values affect the likelihood or magnitude of loss events that matter to the business. I am looking directly at you, Microsoft Secure Score! What a bunch of BS.
When a metric does not state how a change in value alters risk exposure, it cannot reduce uncertainty for a decision maker. It is really just THAT simple. It can still inform a status discussion, but it cannot support a choice among alternatives.
Knowing all of this, the result is information that is descriptive but not decision-grade. It explains what the security organization is doing. It DOES NOT explain what leadership should do differently as a consequence.
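To show what that explicit link can look like, here is a small hypothetical illustration: a coverage metric translated into an exposure statement by writing down the assumed mapping between coverage and event probability. The function, the base probability, the effectiveness figure, and the impact value are all assumptions the organization would have to own and defend; none of them is an industry constant.

```python
# Hypothetical illustration: making the link between a metric and exposure explicit.
# The mapping below (coverage -> event probability) is an assumption to be defended,
# not an industry constant.

def event_probability(base_prob: float, coverage: float, effectiveness: float) -> float:
    """Annual probability of a material event given control coverage.
    base_prob:     assumed probability with no control in place
    coverage:      fraction of in-scope assets the control reaches (0..1)
    effectiveness: fraction of attempts the control stops where present (0..1)
    """
    return base_prob * (1 - coverage * effectiveness)

base = 0.20    # assumed: 20% annual chance of a material event with no control
impact = 8.0   # assumed: average impact in $M

for coverage in (0.70, 0.85):
    p = event_probability(base, coverage, effectiveness=0.60)
    print(f"coverage {coverage:.0%}: event probability {p:.1%}, "
          f"expected annual loss ${p * impact:.2f}M")
```

The particular mapping matters far less than the fact that it is stated. Once the assumption is explicit, a move from 70% to 85% coverage becomes an exposure claim the board can interrogate and weigh, not just a status update.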
what this leaves unanswered
The unanswered question is what, exactly, measurement owes the people who have to act on it. Until we answer that precisely, improvements in our reporting will not translate into better decisions. I feel very strongly about this. We will keep refining dashboards, expanding metrics (because more metrics must mean better understanding, right?), and meeting external expectations, while governance discussions continue to rely on judgment rather than evidence. In my view, this disconnect is a core reason we continue to see posture failures that ultimately lead to higher-impact breaches and incidents.
In the next post, I will focus on the question that I keep coming back to when I am in front of a board trying to make or defend a decision: what cybersecurity measurement is actually responsible for delivering in that moment to help reduce uncertainty in a way governance can act on.


