By Christopher Paul and Rand Waltzman
The most important lesson from Russia’s involvement in the 2016 presidential election may be this: foreign hackers and propagandists are not afraid to launch attacks against the United States in and through cyberspace that they would not dare risk in a real theater of war. As cyber aggression grows more brazen every year, it is crucial that the Department of Defense figure out how to deter foreign actors in cyberspace as effectively as it does in nuclear and conventional warfare. The Pentagon can take five steps to better deter foreign cyber attacks.
First, the Department of Defense should clarify and narrow the scope of how “cyber” is used and conceived. Right now, the term is applied too inclusively; just because an activity or capability transits the cyberspace domain does not necessarily make it a cyber activity or capability. The department should allow capabilities that traditionally function outside of cyberspace to use their existing authorities for actions through the internet without insisting that doing so makes them cyber. For example, counterintelligence is a traditional military activity that predates the existence of cyberspace and seeks to protect U.S. military forces from the espionage, sabotage, or other intelligence activities of foreign powers. Counterintelligence personnel have the authority to observe and thwart the activities and communications of foreign spies and saboteurs. Such authorities should clearly extend to foreign activities in and through cyberspace without counterintelligence becoming a cyber activity or requiring special cyber authorities. Once capabilities from the traditional physical domains that merely use the internet as a mode or medium are excused from the cyber tent, the capabilities left in the tent will be smaller in scope, more manageable, and more amenable to bounding and definition.
After narrowing the scope of cyber, the Department of Defense should take the elements that remain and categorize and define them more carefully and clearly, so that appropriate authorities and approval processes can be matched to distinct capabilities. Under the current model, cyber capabilities are too often viewed as uniform in their character, quality, and the level of risk associated with their use. Rather than operating on the assumption that anything cyber is alike and high-risk, and thus requires the highest levels of approval, the Department of Defense should tease out different categories of capabilities and identify which should continue to require high-level approval and which are more pedestrian and can have authorities delegated and approvals accelerated. As a hypothetical example, unleashing malicious code to destroy or disable adversary computer networks can and should require approval from the highest levels because of the risk of spread to non-adversarial networks, the risk that threat actors will learn from and duplicate the tools used, and concerns about proportionality. Still hypothetically, accessing adversary computer networks using captured or inferred passwords and then exploiting those systems with software already resident on those networks might require authorities and approvals at a much lower level, as the risks are lower.
Second, having narrowed and clarified the scope of cyber, the Department of Defense should make use of cyber for more than just cyber. Effects generated in cyberspace can also produce effects in the physical domains, and cyber capabilities could support and enable a wide range of other capabilities and activities (including some of those we recommended carving off from cyber in our first recommendation). Make the partnerships; pursue the complementarity and integration between cyber and other capabilities that is the hallmark of combined arms. We want to hear all about “cyber-enabled ____,” where the blank might be filled in with a wide range of things the Department of Defense does or contributes to: deterrence, military deception, and military information support operations, to name a few.
Third, the Department of Defense should treat deterrence explicitly as a form of influence. Whether deterring cyber aggression, using cyber-enabled efforts to contribute to broader deterrence, or seeking to deter with no reference to cyber at all, deterrence is about getting someone (usually, but not always, a state-level actor) to take or refrain from some action or range of actions. That is influence. Effective deterrence and effective influence depend on the perceptions, calculations, preferences, opinions, cognitions, decisions, and will of the potential aggressor. Deterrence is not just about displaying U.S. capabilities or demonstrating U.S. resolve; it is also about how those actions are perceived and interpreted by others. Considered as an influence problem, deterrence could include an enhanced focus on how actions might be perceived and understood. It also suggests that planners should adjust U.S. actions so that they are perceived and understood in ways that contribute to deterrence, and that planners should consider broader efforts to change the way potential aggressors make their decisions, the range of options they consider, how they receive and process information, and the content of their cognitions and calculations. Deterrence does not take place in a vacuum but in a culturally and historically nuanced context, and deterrent actions are not the only things that can be changed within that context.
Fourth, Department of Defense conversations about deterrence could consider the relationship between norms and deterrence. The U.S. should demonstrate its resolve by being clear about what should be normatively prohibited in cyberspace, by not doing those things itself, and by punishing those who choose to do them. Demonstrations of resolve are essential to deterrence and become more powerful when clearly connected to norms. When other actors see an aggressor visibly punished for an action or behavior, those actors might be deterred from similar actions or behaviors. That deterrent effect is even more likely if it is framed by and clearly connected to an established set of cyberspace norms.
Finally, planners should increase specificity to improve deterrence, whether deterring attacks against or through cyberspace, or by other means. In other words, the U.S. could be more specific about whom it wants not to do what, and what the consequences of doing those things will be. Such specificity is critical in planning and strategizing deterrence. Specificity can also help in the public statements accompanying deterrent postures and actions, though not without risk: telling a potential aggressor exactly where a “red line” is might keep them from crossing it, but might also inadvertently invite aggression right up to its boundary.
These five recommendations were among our key takeaways from Cyber Endeavor 2017, an informative three-day conference hosted by the Carnegie Mellon University Software Engineering Institute and sponsored by the Department of Defense Information Operations Center for Research and the U.S. Army Reserve 335th Signal Command. We concluded that the U.S. already has an extensive set of tools and capabilities for deterrence through the increasingly critical theater of operations commonly referred to as cyberspace. However, these tools are shrouded in a fog of confusion and doubt that prevents the U.S. from using them to the greatest possible effect. The insights and recommendations we distilled from the conference point the way toward lifting that fog and increasing the accessibility of the tools and capabilities the U.S. already possesses. They also provide the basis for creating a framework for future analysis and capability development.
Christopher Paul is a senior social scientist and Rand Waltzman is a senior information scientist at the nonprofit, nonpartisan RAND Corporation.