by Don Snyder
Research Questions
What is the goal in establishing metrics?
How often should metrics data be collected?
What level of fidelity should metrics possess?
How can one estimate how survivable and effective a mission or weapon system might be in a specific cyber-threat environment, given different system design options, policy options, or other alternatives?
How can a program's cybersecurity and cyber resiliency be monitored over time?
Is the framework sufficiently comprehensive for ensuring that working-level cyber metrics are covered?
This report presents a framework for developing and scoring metrics that indicate how well a U.S. Air Force mission or system is expected to perform in a cyber-contested environment. The metrics are designed to inform acquisition decisions at all stages of a weapon system's life cycle. They fall into two types: working-level metrics for countering an adversary's cyber operations and institutional-level metrics for capturing cyber-related organizational deficiencies.
The cyber environment is dynamic and complex, the threat is ubiquitous (in peacetime and wartime, deployed and at home), and no set of underlying "laws of nature" governs the cyber realm. A fruitful approach is to define cyber metrics in the context of a two-player cyber game between Red (the attacking side) and Blue (the side trying to ensure a mission).
The framework helps, in part, to reveal where strengths in one area might partially offset weaknesses in another. Additional discussion focuses on how those metrics can be scored in ways that usefully support decisions. The metrics are aimed at supporting program offices and authorizing officials in risk management and in defining requirements, both operational requirements and the more detailed system-design requirements used in contracts, the latter often referred to as derived requirements.
Key Findings
No single set of metrics is well suited to all decisionmakers
Technical decisions in development, production, and sustainment have the greatest need for detailed, quantifiable metrics that tend toward the measures-of-performance end of the spectrum.
Operational decisions require output-oriented performance metrics, typically at a higher level of aggregation than used by the technical community.
Strategic decisions often involve balancing the importance of the mission to service or national priorities with the perceived threat and available resources.
Institutional decisions require measures of the true state of the organization and its processes.
There is inherent uncertainty in cyber metrics
There are two kinds of uncertainty relevant to cyber metrics: uncertainty from random variations and uncertainty due to ignorance.
Short of an attack, the most accurate information comes from intelligence and developmental and operational testing.
Cybersecurity and cyber resiliency are exercises in risk management.
Measures are only as good as the measurers
Because cyber monitoring is so often qualitative rather than quantitative, personnel must communicate rather than just report.
Hiring, training, and retaining a skilled workforce to execute those measures, and keeping its skills current, will be necessary.
Recommendations
Working-level and institutional-level metrics based on maturity levels are useful for supporting decisions.
Air Force leaders will need to instill cultural changes so that low scores are accepted as part of risk management.
To use cyber metrics effectively, decisionmakers need to keep in mind that the metrics' appropriate uses and limitations must be realistically assessed and communicated; that comparisons and trends should be examined and explained; and that implications for the desired end state should be presented understandably.
Decisionmakers need to resist the temptation to press for inappropriate levels of precision and stability for working-level metrics; they must also foster a culture of risk management.
The most senior leaders must delegate decisions to where the locus of information lies. They should focus above the technical level, looking for systemic working-level issues and institutional-level deficiencies.
Organizations that successfully avoid catastrophic failures reduce drift by collecting information from all members of the organization, triaging that information, assessing it to create meaning, and channeling key information to senior leaders outside the normal chains of command.