Why Today’s Cybersecurity Measurements Cannot Support Decisions
Continuing the case for measurable cybersecurity
In the initial post in this series, I argued that cybersecurity measurement fails when it does not reduce uncertainty at the moment a decision must be made. That failure persists even when organizations invest heavily, expand tool coverage, and report continuously on activity and progress. When governance is required to choose among options—accept, reduce, transfer, or avoid risk—the information provided does not materially change how that decision is evaluated. There is no evidence that clearly favors one option over another.
What matters now is not whether this gap exists, but why it persists even after it is recognized. Most organizations see the problem, acknowledge it (often reluctantly), and attempt to close it by refining metrics, adding dashboards, or aligning more closely to frameworks. And yet, the outcome rarely changes.
From a data leadership perspective, this pattern is all too familiar. When decision support fails consistently, the root cause is almost never the metrics themselves. It is the absence of an underlying data architecture designed to support the decisions being made.
The Structural Cause, Reframed as a Data Problem
I would offer that a Chief Data Officer would not describe most cybersecurity measurement environments as immature because they lack data. They would, however, describe them as immature because the data has no defined analytical purpose.
In a data-driven enterprise, measurement begins with a decision context. Data is treated as an asset whose value depends on its ability to inform a specific choice. Governance starts by identifying the decision, the consumer, and the consequence of uncertainty. Only then are data sources identified, quality expectations defined, and metrics selected. This is the core logic of enterprise data management and analytics maturity.
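To make that ordering concrete, here is a minimal sketch of what a decision-first record might look like before a single metric is chosen. Everything in it, including the DecisionContext and Treatment names and the ransomware example, is a hypothetical illustration rather than a prescribed schema:

```python
from dataclasses import dataclass, field
from enum import Enum

class Treatment(Enum):
    """The four governance options for a given risk."""
    ACCEPT = "accept"
    REDUCE = "reduce"
    TRANSFER = "transfer"
    AVOID = "avoid"

@dataclass
class DecisionContext:
    """Declares the decision before any metric is selected."""
    decision: str                 # the choice governance must make
    consumer: str                 # who acts on the evidence
    options: list[Treatment]      # treatments actually on the table
    cost_of_uncertainty: str      # what a wrong or uninformed call costs
    data_sources: list[str] = field(default_factory=list)  # chosen last, not first

ransomware_spend = DecisionContext(
    decision="Fund additional ransomware controls or accept current exposure",
    consumer="Board risk committee",
    options=[Treatment.ACCEPT, Treatment.REDUCE, Treatment.TRANSFER],
    cost_of_uncertainty="Capital misallocated against the wrong loss scenario",
)
```

Note the ordering: data sources are the last field populated, not the first.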
Cybersecurity measurement does not follow this pattern. Unfortunately.
Security data is generated opportunistically as a byproduct of technology execution. Controls emit signals. Tools generate telemetry. Processes produce artifacts. I would even go so far as to say that it is also a byproduct of security vendor marketing. Let me know your opinion on that one. Measurement is layered on top to summarize what exists, what ran, and what was completed. From a data governance standpoint, this is unmanaged exhaust, or noise: data without a declared consumer, without an explicit analytical model, and without accountability for decision impact.
There is no formal decision schema that defines how cybersecurity data should be organized to support governance. There is no clear articulation of which uncertainties matter to executive leadership and which do not. As a result, cybersecurity metrics are governed for completeness, consistency, and external expectation rather than for decision relevance. They simply exist because they can be produced.
A data leader would recognize this immediately. It is the same failure mode seen in early enterprise analytics efforts across other domains: abundant data, increasingly sophisticated reporting, and a persistent inability to translate information into action.
Why Metrics Cannot Repair an Architectural Gap
In analytically mature organizations, measurement is derived from an explicit understanding of what must be explained, predicted, or optimized. Data is collected to support that model. Indicators are retained because they reduce uncertainty for a specific decision. Metrics that do not do this are discarded, regardless of how easy they are to collect or how familiar they appear.
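One way to make "retained because it reduces uncertainty" operational, assuming you have even a crude history of outcomes to test against, is an information-gain check. The toy data below is invented purely for illustration:

```python
import math
from collections import Counter

def entropy(values):
    """Shannon entropy (bits) of a discrete sample."""
    counts = Counter(values)
    n = len(values)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def information_gain(outcomes, metric_values):
    """How much observing the metric reduces uncertainty about the outcome."""
    base = entropy(outcomes)
    conditional = 0.0
    for v in set(metric_values):
        subset = [o for o, m in zip(outcomes, metric_values) if m == v]
        conditional += (len(subset) / len(outcomes)) * entropy(subset)
    return base - conditional

# Toy history: did a material incident occur, and what did each metric read?
incidents    = ["yes", "no",   "no",   "yes", "no",   "no",   "no",   "yes"]
patch_bucket = ["low", "high", "high", "low", "high", "low",  "high", "low"]
vendor_score = ["B",   "B",    "B",    "B",   "B",    "B",    "B",    "B"]

print(information_gain(incidents, patch_bucket))  # > 0: keep the indicator
print(information_gain(incidents, vendor_score))  # 0.0: decorative, discard
```

A metric that scores zero here is decoration, however familiar it looks. That, in essence, is the retention test analytically mature organizations apply.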
Cybersecurity measurement inverts this logic. Data is collected because it exists. Metrics are defined because they are easy to find. Governance is expected to infer meaning after the fact. When that inference fails, the response is to add more metrics rather than to question whether an analytical model of value exists at all.
From a data maturity standpoint, this guarantees failure. Without an explicit model linking security activity to risk outcomes, metrics cannot be evaluated for usefulness. The system rewards accumulation rather than business value.
This is why cybersecurity measurement environments grow wider but not deeper. Dashboards expand. Domains multiply. Confidence inside the function increases—often without justification. But the analytical capability required to reason about exposure, tradeoffs, and residual risk never materializes, because it was never designed.
I have been able to explain what every cyber metric represents while still being unable to explain what the dataset allows governance to conclude. That is not a communication problem. It is what happens when data was never designed to support decisions.
Cybersecurity as a Failed Data Product
Modern data organizations do not think in terms of reports. They think in terms of data products. A data product has a defined consumer, a defined use case, and a measurable contribution to a business outcome. Its value lies in changing a decision or enabling an action.
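Here is a hedged sketch of what that contract might look like if it were written down. The field names and the phishing example are my own illustrative assumptions, not an industry standard:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class DataProductContract:
    """A measurement earns its keep only if every field can be filled in."""
    name: str
    consumer: Optional[str]          # who reads it
    use_case: Optional[str]          # the decision it feeds
    action_on_change: Optional[str]  # what the consumer does differently when it moves
    value_evidence: Optional[str]    # how we would show it changed a decision

phishing_clicks = DataProductContract(
    name="Monthly phishing click rate",
    consumer="Security awareness lead",
    use_case=None,          # no decision identified
    action_on_change=None,  # nothing changes when the number moves
    value_evidence=None,    # value is assumed, not demonstrated
)

def is_data_product(c: DataProductContract) -> bool:
    """A consumer alone is not enough; the decision linkage must exist."""
    return all([c.consumer, c.use_case, c.action_on_change, c.value_evidence])

print(is_data_product(phishing_clicks))  # False: a pipeline, not a product
```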
Viewed through this lens, cybersecurity measurement is not a data product. It has consumers, but no agreed-upon use case. It produces outputs, but no defined decision impact. It generates activity, but no accountable outcome.
From a Chief Data Officer’s perspective, this is the core failure. Cybersecurity—often reinforced by vendor-driven reporting—has built pipelines and mistaken them for analytical products. Organizations invest in data generation and visualization, but not in modeling, hypothesis testing, or uncertainty reduction. Measurement is delivered, but value is assumed rather than demonstrated.
This is why cybersecurity measurement struggles to show business value in the same way other data initiatives do. There is no value proposition defined at the outset. There is no articulation of how improved measurement should change capital allocation, risk acceptance, or operational constraints. Without that linkage, measurement cannot be evaluated as an asset. It can only be maintained as a cost.
Why This Becomes a Governance and Data Leadership Problem
As long as cybersecurity measurement is treated as internal reporting, these shortcomings are survivable. Reporting does not require causal models or quantified uncertainty. It requires consistency and coverage—areas where most CISOs already excel.
But boards are accountable for cyber risk outcomes. Regulatory scrutiny increasingly expects evidence that decisions were informed and deliberate. Insurance, audit, and executive oversight converge at the point where risk must be explicitly accepted, reduced, transferred, or avoided.
From a data leadership standpoint, this is precisely the type of environment that demands structured data, explicit models, and disciplined analytics. When those do not exist, governance still has to act. Decisions are made using judgment, external benchmarks, or precedent. The absence of decision-grade data is masked by the presence of abundant reporting.
This creates a fundamental mismatch. Leadership is accountable for outcomes, but the data systems supporting cybersecurity were never designed to inform the choices leadership is now expected to justify. From an enterprise data governance perspective, this is not a cybersecurity failure. It is an organizational failure to treat cyber risk as a governed analytical domain.
The Governance Obligation We Have Not Met
At this point, the unresolved issue is not whether cybersecurity measurement should support governance decisions. That premise is already accepted. The unresolved issue is whether we have built the data architecture necessary for that support to exist at all.
Decision-grade measurement requires more than dashboards and indicators. It requires a system that defines the decision, models uncertainty, encodes assumptions, and translates operational data into evidence that can be evaluated and compared. Without that foundation, metrics will continue to proliferate without producing insight.
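As a small preview, and with every number below an invented assumption, here is the kind of comparable evidence that foundation is meant to produce:

```python
import random

random.seed(7)  # reproducible illustration

def expected_annual_loss(p_incident, loss_low, loss_high, trials=100_000):
    """Monte Carlo estimate of annual loss under stated, inspectable assumptions."""
    total = 0.0
    for _ in range(trials):
        if random.random() < p_incident:  # does an incident occur this year?
            total += random.uniform(loss_low, loss_high)  # severity if it does
    return total / trials

# Hypothetical assumptions, encoded where they can be challenged.
loss_accept = expected_annual_loss(p_incident=0.30, loss_low=1_000_000, loss_high=8_000_000)
loss_reduce = expected_annual_loss(p_incident=0.12, loss_low=1_000_000, loss_high=8_000_000)
control_cost = 600_000  # annualized cost of the proposed control

print(f"Accept: expected annual loss ~ ${loss_accept:,.0f}")
print(f"Reduce: expected annual loss ~ ${loss_reduce + control_cost:,.0f} (incl. control cost)")
```

The output is crude, but it is evidence in the governance sense: the treatment options become comparable, and the assumptions are exposed for challenge rather than buried in a dashboard.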
In the final post of this series, I will describe what a decision-centered cybersecurity measurement architecture looks like when approached as a data problem first. Not as a framework, and not as a compliance exercise, but as an analytical system designed to support governance decisions under uncertainty.