4 December 2023

Are Intelligence Failures Still Inevitable?

James J. Wirtz

There is a paradox that accompanies intelligence failure. Drawn from the work of Richard Betts, one of the most influential scholars in the field of intelligence studies, this paradox is based on two propositions. First, there will always be accurate signals in the “pipeline” before a significant failure of intelligence. Second, intelligence failures are inevitable. Combined, these propositions motivate much intellectual activity in the field of intelligence studies: to devise effective ways to use available information and analysis to avoid failures of intelligence, especially those leading to strategic surprise. This article explores how scholars have addressed these propositions to answer the question: Are intelligence failures still inevitable?

I once interviewed for what must be one of the most challenging and consequential jobs in the U.S. federal government—the position of national intelligence officer for warning. When I arrived in the spaces occupied by the National Intelligence Council, I was greeted by a team of four interviewers who fired off a series of rather basic questions about intelligence analysis. Eventually, they asked me a question that I believed would reveal my obvious qualifications for the position: What would you do the first day on the job? I responded that I would draft a letter of resignation, taking responsibility for my failure to warn about what had befallen the nation, especially because I possessed the necessary information to issue a warning that would have made a difference. The interviewers looked a little surprised by this response and asked me when I thought I might have to use that letter. I told them there was no way of knowing, but if I stayed in the job long enough, we would surely find out.

Although I was not hired for the position,Footnote1 it later occurred to me that maybe the interviewers did not recognize the theoretical basis for my answer, which highlights a paradox that seems to accompany intelligence failure. Drawn from the work of Richard K. Betts, one of the most influential scholars in the field of intelligence studies, this paradox is based on two propositions. First, there will always be accurate signals in the “pipeline” before a significant failure of intelligence; that is, analysts will possess the accurate information needed to anticipate what is about to transpire. Second, intelligence failures are inevitable. Combined, these propositions motivate much intellectual activity in the field of intelligence studies: to devise effective ways to use available information and analysis to avoid failures of intelligence, especially those leading to strategic surprise.

While most scholars accept Betts’s two propositions at face value, other intrepid souls have taken them as a challenge. They have championed the opposing view that sometimes surprise occurs because there simply is no information in the intelligence pipeline that could be developed in any realistic sense into a useful warning of impending disaster. Policymakers also seem to believe that the right reforms can in fact solve the problem of intelligence failure; failures will no longer be inevitable if we identify what is wrong with intelligence. Indeed, the origins of both the Central Intelligence Agency and the Office of the Director of National Intelligence are found in efforts to “fix” problems that had led to intelligence failures before the Japanese raid on Pearl Harbor in December 1941 and the September 11, 2001, terror attack against the United States. Others search for some sort of theoretical innovation or insight that might help minimize the impact of the cognitive limitations and organizational pathologies that in the long run make the occurrence of intelligence failure, strategic surprise, and war inevitable. They are especially interested in bridging the gap between intelligence analysts and policymakers because, in the words of Cynthia Grabo, “warning that exists only in the mind of the analyst is useless.”Footnote2

Admittedly, the purview of intelligence studies has expanded since Betts first advanced his propositions. New case studies and ideas are being drawn from beyond the “Anglosphere,”Footnote3 new theories are being tested,Footnote4 and novel issues are being highlightedFootnote5 as intelligence studies cycles among concerns about intelligence failures, intelligence reform, and intelligence oversight. My intention is not to ignore the vitality of intelligence studies or to engage in “a snippy exercise in competitive scholasticism.”Footnote6 The issues involved in the paradox identified by Betts extend well beyond the ivory tower and continue to pose a challenge to scholars and intelligence professionals as great power competition becomes an increasingly grim reality. The unfolding Information Revolution is also contributing to this debate about the paradox of intelligence failure by offering new approaches to information management and analysis—data analytics, data fusion, artificial intelligence, complexity science—while threatening to drown analysts, policymakers, and citizens alike in a tsunami of mostly irrelevant, misleading, or deliberately deceptive information.Footnote7 The paradox of intelligence failure deserves further consideration.

The remainder of this article assesses the progress made in addressing the paradox of intelligence failure by examining each of its propositions in turn, along with the arguments advanced by several thoughtful critics who use Betts’s work as a point of departure for their own analysis. Put somewhat differently, are intelligence failures still inevitable?

INFORMATION IS IN THE PIPELINE

Betts began a 1980 article in Political Science Quarterly by rejecting the simplest explanation for why sudden attacks succeed:

Most major wars since 1939 have begun with surprise attacks. Hindsight reveals that the element of surprise in most of these attacks was unwarranted; substantial evidence of an impending strike was available to the victims before the fact. The high incidence of surprise is itself surprising.Footnote8

At the time, this was the most definitive statement of an evolving conventional wisdom suggesting that useful information is available to policymakers prior to strategic surprise attacks, which are commonly referred to as “intelligence failures.” The starting point of this evolution can be traced to the Joint Congressional Committee Investigation into Pearl Harbor, which took place between 15 November 1945 and 23 May 1946.Footnote9 The committee’s record ran to 39 volumes containing over 25,000 pages of testimony and exhibits; the final volume of its report provided a history of what officers and policymakers knew in the months leading up to Pearl Harbor and how they responded to the information and directives they received. The culminating message of the report was that policymakers and officers could have anticipated that Oahu was at grave risk by December 1941. The idea that accurate information about what is about to transpire can always be found within the intelligence pipeline originated in this final Pearl Harbor investigation. It is a lesson that continues to reverberate in American strategic culture and the field of intelligence studies.

While the congressional committee laid the blame for the disaster suffered at Pearl Harbor on the gross incompetence or outright dereliction of duty on the part of the officers and officials in charge, Roberta Wohlstetter, in her 1962 monograph Pearl Harbor: Warning and Decision, placed the conclusions presented in the congressional report in a different context. By limiting the use of hindsight in her explanation of how analysts interpreted available information before the attack on Pearl Harbor, Wohlstetter reintegrated the “signals,” accurate information about what is about to unfold, into the background of “noise,” extraneous or misleading information that can obscure emerging events from analysts and policymakers. In other words, the information needed to develop an accurate estimate was available to analysts prior to the attack on Pearl Harbor, but recognizing this accurate information was no simple matter amid the ample background noise of misleading or extraneous data. This might appear completely commonsensical, but given the political and bureaucratic effort to identify, and avoid, responsibility for the disaster on Oahu, contemporary observers deemed Wohlstetter’s insight to be highly noteworthy.Footnote10 It served as a corrective to the conventional wisdom that U.S. analysts should have seen the Japanese attack coming based on available information.Footnote11 Unlike the nincompoops who populate the pages of the congressional committee’s report, the analysts and officers in Wohlstetter’s book struggle to separate the “wheat from the chaff” to develop actionable intelligence. Further complicating matters is the fact that signals might remain trapped as raw intelligence—intercepted encoded communications, human intelligence reports, purloined documents—that sits waiting for decoding or analysis and never reaches analysts until it is overtaken by events. After the war, U.S. intelligence analysts decoded part of the backlog of Japanese radio traffic intercepted in the months leading up to Pearl Harbor—188 of these messages were deemed significant, and eight could have identified Pearl Harbor as the point of attack through a process of elimination.Footnote12 Signals are available but are difficult to translate into actionable intelligence ex ante. Recent scholarship has even suggested that this signal-to-noise ratio is getting worse. The data deluge produced by the Information Revolution is overwhelming citizens and analysts alike, making it increasingly difficult not only to issue accurate warnings of threatening events but also to develop accurate situational awareness.Footnote13

Betts thus identifies this phenomenon as the first component of the paradox of intelligence failure. By wielding Occam’s Razor at the outset of his analysis, he dismisses the simplest reason for intelligence failure—that surprise occurs because individuals simply lack information indicating what is about to happen. With this simplest explanation accounted for, it is then possible to explore the Pandora’s Box of psychological, organizational, and political impediments, pathologies, and idiosyncrasies that bedevil analysts and policymakers along each stage of the “intelligence cycle” (setting requirements, collecting data, analyzing data, disseminating analysis to policymakers, reassessing environmental change). In 1989, Betts described how this first element of the intelligence paradox was shaping intelligence studies:

Orthodox pessimism emerges from cases (a dozen or so of note in the past half century), which appear to reveal a prevalence of warning indicators prior to attack. For example, [Barton] Whaley lists eighty-four warnings available to Stalin before Hitler’s invasion, ranging from reconnaissance overflights, through tips from Soviet and Polish spies, to German leaks. While many sources of surprise lie in the attacker’s skill in deception and operational innovation, orthodox studies emphasize the victim’s mistakes—how interactions of organization, psychology, and inherent ambiguity of information create pathologies in the process of absorbing and reacting to indicators. For example, hierarchical and fragmented bureaucracies retard communications and block dissemination and coordination; individuals along the line misunderstand or transmute the implications of messages; ambient “noise” from irrelevant data obscures the significance of revealing signals; or false alarms feed a “cry wolf” syndrome.Footnote14

In contrast to many other fields of political science, the assumption that useful information exists prior to some untoward event but is difficult to identify, interpret, and utilize has led to the emergence of an intelligence paradigm as scholars trace out the myriad but related ways intelligence failure occurs.Footnote15

Scholars have recently taken Betts’s observation one step further by noting that not only are signals available in the intelligence pipeline, but accurate and timely analyses that lay out at least some aspects of an impending setback are usually provided to officers and policymakers with no discernible impact on policy or the initiation of preparations to ward off disaster.Footnote16 Sometimes these estimates can be eerily prophetic, albeit somewhat incomplete. The so-called Phoenix memo written by a Federal Bureau of Investigation (FBI) field agent in July 2001, for instance, warned that a large number of individuals from the Middle East were attending flight schools in Arizona and that the situation required additional investigation.Footnote17 The Phoenix memo circulated within FBI headquarters, but it was never shown to the FBI director, provided to the White House, or transmitted to the Central Intelligence Agency (CIA).Footnote18 The memo was never subjected to analysis or fusion with other information, and it failed to lead to any action, although its discovery after the 11 September 2001 (9/11) terror attacks on the World Trade Center and the Pentagon led to recrimination and much regret. Another memo, written by CIA analyst Joseph Hovey in November 1967, warned that the Viet Cong and their North Vietnamese allies were about to launch a major offensive against South Vietnamese cities.Footnote19 Although this memo circulated widely among military officers in Vietnam and intelligence officials and senior members of the Lyndon B. Johnson administration, it too failed to produce much of a response. Indeed, it is probably not uncommon before the occurrence of strategic surprise attacks for “systems to be blinking red,” to paraphrase the way George Tenet, the director of the CIA, characterized intelligence reporting in the months leading up to the 9/11 terrorist attacks on the Pentagon and World Trade Center.Footnote20

IS WARNING A CHIMERA?

Several scholars have noted that it is simply wrong to suggest that signals, useful analysis, or even accurate finished intelligence can always be found in the intelligence pipeline following a strategic surprise and associated “failure of intelligence.” Michael I. Handel raised an early objection to this idea, noting that “as a result of the great difficulties differentiating between ‘signals’ and ‘noise’ in strategic warning, both valid and invalid information must be treated on a similar basis. In effect, all that exists is noise, not signals.”Footnote21 Handel seems to be suggesting that the concept of a signal is a chimera. It is a mirage from our past, made apparent only with the aid of hindsight, which is not available when one is trying to anticipate what the opponent might do. As a result, debates about the existence of accurate data in intelligence pipelines can never be resolved because the issue itself is badly framed; the ability to distinguish between signals and noise is only granted by hindsight.

While this view is not without some validity, people have managed to develop highly accurate estimates of what an opponent is likely to do before events unfold and in time to take diplomatic or military action to ward off a surprise attack or some other type of fait accompli. On 25 July 1990, for instance, Charles Allen, the National Intelligence Officer for Warning, issued a “warning of war” memorandum, highlighting that Iraq would soon be capable of launching a corps-sized operation that could occupy much of Kuwait. On 26 July, Allen visited the National Security Council to provide satellite imagery demonstrating the extent of the Iraqi military buildup.Footnote22 Allen succeeded in producing an actionable warning about eight days before the invasion occurred, albeit one that did not prompt the George H.W. Bush administration to take preemptive action to head off Saddam Hussein’s scheme to occupy Kuwait. Allen also succeeded in meeting an important requirement when it comes to warning: his alert reached its intended audience while there was still time to take some sort of useful action to head off an undesired event.Footnote23 Allen’s success is not unique. The Phoenix memo, written by FBI Special Agent Kenneth Williams, contained “actionable intelligence” that could have been used to head off the 9/11 disaster.Footnote24 The Hovey memorandum also painted an accurate picture of the looming Tet offensive, two months before the attack materialized at the end of January 1968. Although it is not easy to do, it is possible to separate signals from noise without the aid of hindsight.

Useful Indicators Are Sometimes in the “Pipeline”

Another critique is simply that Betts is wrong—sometimes there are no signals, informal warnings, or finished analytical products in the intelligence pipeline prior to a strategic surprise attack that would provide decisionmakers with a useful and timely indication of what is about to transpire. Ariel Levite provides the classic description of this perspective:

The common practice in the existing surprise literature has been to make use of a rather loose and extremely broad definition of warning, one that essentially incorporates every possible indicator of the perpetrator’s intentions and capabilities, from the most tangible and explicit to the most amorphous and implicit. It thus includes, among other things, raw as well as processed intelligence, rumors, and newspaper stories, and even the “logic of the situation” and “lessons of the past.” Such a definition is so broad and ambiguous that it is highly susceptible to subjective interpretation.Footnote25

What Levite is suggesting is that, in hindsight, observers are too willing to identify all sorts of information as accurate signals of what was about to transpire when in fact these signals just share some sort of similarity with subsequent events. By contrast, Levite suggests that true “warning” is rather rare and is characterized by several qualities: it is accurate and timely, drawn from a source of known reliability, and informs the receiver along five critical dimensions of a future event (i.e., who, what, where, when, and why).Footnote26 Levite does not suggest that true warning guarantees intelligence success, but he does believe that “high-quality warning,” which by definition is both accurate and credible, is likely to be acted on effectively by policymakers, thereby avoiding the panoply of intelligence pathologies and idiosyncrasies that lead to surprise attack and intelligence failure.

Erik Dahl makes a similar point, noting that for signals to be true signals, they must be capable of producing an “actionable” intelligence estimate. And, as Levite suggested, policymakers must see this actionable intelligence as credible before they will act. According to Dahl, “[Intelligence] must provide precise, tactical-level warning, and it must be combined with a high level of receptivity toward that warning on the part of policymakers who will decide how to use it.”Footnote27 That is, signals have to provide some clear indication of who is doing the acting, what is about to happen and where, when, and why it is about to occur if officers and policymakers are to be expected to take action to ward off untoward developments. Dahl seconds David Kahn’s observation, for instance, that it is wrong to assess that there were any signals in the Pearl Harbor record: “[N]ot one intercept, not one datum of intelligence ever said anything about an attack on Pearl Harbor or on any other [U.S.] possession.”Footnote28 For Dahl, signals and analysis that produce a broad “strategic” warning of a deterioration in international conditions, or some sort of commentary suggesting that an opponent might take an unwanted initiative of an indeterminate nature, are useless when it comes to heading off surprise attack and intelligence failure. The reason officers and policymakers find these types of strategic warnings of little value is that they do not help identify “actions” that can be taken to head off disaster. For example, even with the aid of hindsight and the cataloged and organized collection of the evidence available to policymakers before the 9/11 terror attacks, it really is impossible to point to evidence available ex ante that would have produced actionable intelligence meeting Dahl’s requirement. Al-Qaeda was identified as the threat (who), various reasons could be suggested for carrying out the threat (why), and the betting money was that the threat was imminent (when). Nevertheless, without knowing what form the attack would take or where it would occur, policymakers failed to act as the intelligence picture slowly gained fidelity. Eventually, time ran out.Footnote29

Levite and Dahl suggest that the fundamental mistake made by Betts, and by most of the field of intelligence studies for that matter, is that research “selects on the dependent variable,” so to speak. In other words, intelligence studies concentrates on documenting and explaining instances of intelligence failure while expending virtually no effort in exploring instances of intelligence success. In terms of methodological biases, this is a valid point. Nevertheless, the existence of this methodological bias in no way disconfirms Betts’s observation that timely and accurate signals can always be found in the intelligence pipeline. Put somewhat differently, the presence of timely and accurate signals in a successful response to a surprise initiative does not demonstrate the absence of signals in instances of intelligence failure.

Critics might charge that the methodological sleight of hand that just occurred misses the fundamental point—there really is a qualitative difference in the information and analysis available in cases of intelligence failure and success, a difference that permits analysts and policymakers to overcome the myriad impediments and pathologies that lead actors to fall victim to surprise attack. Both Levite and Dahl, for instance, would point to the high-quality signals and analysis produced by the U.S. Navy prior to the June 1942 Battle of Midway, especially when compared with the Navy’s performance prior to the Pearl Harbor catastrophe, to illustrate this point.Footnote30 Admittedly, the performance turned in by analysts was masterful in the months leading up to the Japanese carrier strike against the American base at Midway. U.S. codebreakers managed to read parts of Japanese naval communications. Analysts managed to recreate the code grid used to denote locations on Japanese naval maps, which helped them further piece together Japanese plans. Additionally, using a stratagem involving the broadcast of news “in the clear” about a purportedly broken desalinization plant on Midway, they were able to verify that Midway was referred to in classified Japanese communications by the grid coordinate “AF.” Nevertheless, this “Midway standard” sets an incredibly high bar for defining what constitutes “actionable” signals, analysis, and warning. Indeed, Betts noted that the Midway standard comes close to a tautology: warnings that are perfect cannot be disregarded and will not be disregarded. By contrast, most warning situations are like a glass half full, a mix of accurate signals and noise, not a glass completely full, which might be thought of as “a set of indicators so unambiguous that scant room for doubt is left.”Footnote31

Hindsight Affects Our Assessment of Success

Additionally, two reservations can be raised about the apparent clarity, accuracy, and credibility of the signals and analysis available before Midway. First, judgments about signal quality are affected by hindsight bias in discussions of Midway to such an extent that noise rarely rears its ugly head in the histories of this intelligence success, which is often depicted as a sort of detective story whereby analysts follow clues to their inevitable conclusion. Robert Jervis, for instance, has noted how the effort to conduct policy or intelligence postmortems is hijacked at the outset by initial judgments about the success or failure of past efforts. Following disasters, investigators search for the sources of failure; in the wake of success, they search for the sources of success, that is, if the effort is made in the first place to understand why matters turned out well.Footnote32 Handel’s observation about the difficulty of discerning signals from noise without the aid of hindsight also applies to instances of intelligence success, despite the fact that analysts, policymakers, and officers sometimes manage to get things right. Those accurate and actionable signals and warnings appear convincing to us in hindsight. Second, observers tend to downplay the “sportin’ assumption” inherent in intelligence work—the idea that opponents would use denial and deception to really “make a game of it” by doing everything in their power to hide their intentions from their opponent. In other words, denial and deception are always a possibility, and if information appears clear and compelling, there is a chance that the opponent wants this information to appear clear and compelling. Combined, these two tendencies give the signals and analysis produced before Midway an air of accuracy and validity that was not fully appreciated by everyone at the time. Some officers worried that they were falling victim to the greatest communication deception of the “wireless age.” Army Chief of Staff George C. Marshall told congressional investigators after the war about the doubts that emerged about the intelligence indicating that the Japanese were targeting Midway:

We were very much disturbed because one Japanese unit gave Midway as its post office address, and that seemed a little bit too thick, so when the ships actually appeared it was a great relief, because if we had been deceived, and our limited number of vessels were there [Midway], and the Japanese approached at some other point, they would have had no opposition whatsoever.Footnote33

Further suggesting that intelligence analysts were falling for a ruse was the widespread belief that the U.S. position at Midway was unimportant, especially in terms of supporting U.S. air and naval operations against the Japanese. The facilities it contained were rudimentary, it was too small to house more than a few thousand defenders, and it existed at the outer reaches of a long, and vulnerable, logistics chain. By contrast, Army officers charged with defending Oahu thought that the Japanese were coming to revisit the only important target in the area, Pearl Harbor. After all, moving the Japanese grid matrix south and east by one square would place the coordinates “AF” over Honolulu, broken desalinization plants notwithstanding. Even Admiral Chester Nimitz, the commander who had to act on the available intelligence, hedged his bets. He placed his carrier force in a position where it could defend not only Midway but also Pearl Harbor, the most important target in the general vicinity, if it turned out that naval intelligence had this time fallen victim not to denial but to Japanese deception.

Comparing intelligence outcomes prior to Pearl Harbor and Midway also may be a bit like comparing apples to oranges because Midway occurred during wartime while the run-up to Pearl Harbor occurred in peacetime, albeit at a moment when Japanese–American relations appeared, even to contemporary observers, to have reached a nadir.Footnote34 This is important because, ceteris paribus, it should be easier during wartime than during a crisis or peacetime to unravel an opponent’s gambit before it is too late. Wartime places matters on an altogether different footing, creating an urgency and attention to detail that is often absent from the more leisurely routines of peacetime. While wartime eliminates some collection methods—U.S. diplomats and military attachés no longer had access to Japanese officials and officers in Tokyo—combat itself can increase information flows as reconnaissance, surveillance, traffic analysis, captured documents, and prisoner interrogation reports can be examined for signals.Footnote35 During war, analysts also have a pretty good idea about who the opponent is, a good appreciation of why hostilities have commenced, and increasing fidelity in their understanding of how future attacks will take place (i.e., with ground, air, maritime, cyber, or space forces or some combination thereof). What really needs to be determined is where and when the next blow will fall. Moreover, in the specific case of Midway, the Japanese made the mistake of returning to the scene of the crime, so to speak. In other words, after Pearl Harbor, it was less likely that a surprise carrier raid would succeed against the Hawaiian Islands—the idea had lost some of its novelty. By May 1942, the general strategic setting in the Pacific clearly pointed to Oahu and the outpost of Midway as both the forward defensive line and the launching point for U.S. maritime operations against Japan.

Wartime also reduces the myriad disincentives officers and policymakers face when it comes to reacting to information about an impending enemy gambit. Responding to indications of an impending attack in peacetime could involve launching a preemptive attack, which implies initiating a war that one probably hoped to avoid while taking on the political onus for initiating hostilities. In other words, indications of a strategic surprise attack that would mark the outset of war are doubly shocking. Not only do they point to the possibility of attack, but they also point to the impending failure of existing national defense strategies, especially strategies of deterrence.Footnote36 According to Betts:

The main difference between prewar and intrawar decisions about responding to danger is that after the enemy has already ignited the conflict the defender no longer faces uncertainty about whether he might still avoid or postpone war by choosing accommodation, reassurance, or diplomatic stalling rather than military counteraction. If the enemy has not irrevocably decided to strike, potential costs to responding to warning with mobilization are economic (unnecessary expense), domestic political (unpopularity of war scares or disruption of jobs and families), diplomatic (intensification of tension), or strategic (possible provocation of an undecided but nervous enemy). None of these potential costs looms as large once the contenders have begun killing each other.Footnote37

The reduced cost of response has the net effect of lowering the overall quality and credibility needed to prompt policymakers to act. Dahl recognizes this “apples and oranges” problem, especially when he demonstrates how actionable intelligence often succeeds in thwarting terror plots.Footnote38 When it comes to preempting terrorists, officials risk little more than bad publicity or possibly compromising sources and methods if they happen to inadvertently arrest innocent people. More is at risk when it comes to international threats posed by competing powers.

The Pearl Harbor–Midway comparison also is a bit idiosyncratic because it is so extreme. The performance turned in by American officers and analysts in peacetime before Pearl Harbor was abysmal, while the wartime performance turned in by virtually the same officers before the Battle of Midway was outstanding. For instance, when one reads firsthand accounts of intelligence matters on the eve of Pearl Harbor, it was never clear to the participants who was reporting what information to whom, or who actually had the most sensitive intelligence, or exactly who was supposed to act on information from trusted, credible sources. By contrast, the “chain of custody” of the signals, analysis, and response prior to Midway was clear to the participants and was subjected to effective assessment at all stages of the intelligence process. A focus on the Pearl Harbor–Midway comparison also obscures the fact that others have provided timely and accurate warnings of what was about to happen that nearly met Dahl’s requirement of actionable intelligence (the Hovey memorandum), fully met the requirement (Allen’s warning of an Iraqi invasion), or might have prompted some sort of useful action (the Phoenix memo). After all, the intelligence systems prior to the 9/11 terror attacks and Pearl Harbor were both “blinking red.”

Missing the Real Insight

Although demonstrating that credible indicators were available to U.S. commanders in the run-up to the attempted Japanese invasion of Midway does not prove that information of similarly high quality was unavailable prior to instances of intelligence failure and surprise attack, Levite and Dahl’s work does offer crucial insights into two related factors that enable intelligence success. These factors are obvious but are generally overlooked by the intelligence studies literature because of its focus on the dependent variable of intelligence failure. First, accurate information, analysis, warnings, and various types of finished intelligence will have no discernible impact if they fail to reach decisionmakers in time for them to act on that intelligence. Timely warning, actionable or otherwise, must flow to decisionmakers for a response to occur. Second, decisionmakers who receive warning must realize that they are responsible for acting in response to that warning.

The presence of these two conditions is not sufficient to produce intelligence success or the avoidance of strategic surprise, but their absence is sufficient to produce intelligence failure. Put another way, these conditions are necessary for intelligence success. If policymakers fail to receive warning or fail to realize that they are in fact responsible for leading a response, “intelligence failure” will occur regardless of the accuracy of the information held by analysts or the quality and credibility of their estimates. Ironically, the work of both Levite and Dahl suggests that it is variability in the reception of and receptivity to warning, not the presence or absence of superior information, that helps to distinguish intelligence failure from intelligence success.

INTELLIGENCE FAILURES ARE INEVITABLE

In “Analysis, War, and Decision: Why Intelligence Failures Are Inevitable,” which was published in the journal World Politics in 1978, Betts laid out the second proposition of the paradox of intelligence failure, or, more precisely, the series of propositions that prompted this gloomy assessment.Footnote39 In offering this observation, Betts was not suggesting that every effort to produce an accurate and effective forecast was doomed to failure—intelligence success was quite possible, especially because the information needed to construct that accurate forecast was somewhere in the intelligence pipeline. Instead, he was suggesting that efforts to “fix” analysis or organizations in the aftermath of some perceived intelligence failure would eventually come up short because it is impossible to anticipate the nature of future challenges. According to Betts, “the lessons of hindsight do not guarantee improvement in foresight, and hypothetical solutions to failure only occasionally produce improvement in practice.”Footnote40 Actions intended to fix some problem or bolster a specific capability can have a counterproductive impact if future developments create different problems or require capabilities that remain underdeveloped because resources were devoted to address issues that led to some previous intelligence calamity. According to Betts:

The roots of failure lie in unresolvable trade-offs and dilemmas. Curing some pathologies with organizational reforms often creates new pathologies or resurrects old ones; perfecting intelligence production does not necessarily lead to perfecting intelligence consumption; making warning systems more sensitive reduces the risk of surprise but increases the number of false alarms, which in turn reduces sensitivity; the principles of optimal analytical procedure are in many respects incompatible with the imperatives of the decision process; avoiding intelligence failure requires the elimination of strategic preconceptions, but leaders cannot operate purposefully without some preconceptions. In devising measures to improve the intelligence process, policymakers are damned if they do and damned if they don’t.Footnote41

Given the inherent tradeoffs in both organizational and procedural reforms, it is only a happy coincidence when a response to a failure of intelligence manages to address future challenges or solve problems without sowing the seeds of new ones.
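The sensitivity tradeoff Betts describes has a simple arithmetic core. As a rough illustration (the numbers here are hypothetical, not Betts’s), assume that a genuine attack indicator is present in only one warning period in a thousand and that an alert system flags 90 percent of true indicators while also firing on 5 percent of innocuous periods. Bayes’s rule then gives the probability that any given alarm reflects a real attack:

% hypothetical base rate and detection rates, for illustration only
\[
P(\text{attack} \mid \text{alarm})
  = \frac{P(\text{alarm} \mid \text{attack})\,P(\text{attack})}
         {P(\text{alarm} \mid \text{attack})\,P(\text{attack}) + P(\text{alarm} \mid \text{no attack})\,P(\text{no attack})}
  = \frac{0.9 \times 0.001}{0.9 \times 0.001 + 0.05 \times 0.999} \approx 0.018
\]

Under these assumed numbers, roughly 98 percent of alarms are false, and tuning the system to catch still more true indicators only multiplies the false alarms that feed the “cry wolf” syndrome.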

A recent, and rather extreme, example of this phenomenon occurred when the key findings of the 2007 National Intelligence Estimate, Iran’s Nuclear Intentions and Capabilities, were made public. The estimate was a 150-page report containing 1,500 footnotes on the scope and status of Iran’s clandestine effort to defy the international community and to ignore its Non-Proliferation Treaty obligations. Participants in the crafting of this formal intelligence product have explained how they went to great lengths to correct past weaknesses in tradecraft centered on assessing the credibility and accuracy of data and in conveying levels of certainty about judgments to intelligence consumers.Footnote42 These problems were often identified as contributing to the incorrect assessment reflected in the title of the October 2002 National Intelligence Estimate, Iraq’s Continuing Programs for Weapons of Mass Destruction.Footnote43 With the availability of new information, it is increasingly apparent that the painstaking efforts made by analysts in building the 2007 National Intelligence Estimate succeeded in correcting these problems. They produced a nuanced assessment of the complex issues surrounding Iran’s clandestine nuclear activities, especially in terms of their judgment that Iran had suspended its nuclear weapons program. News of this key finding, however, prompted an eruption of partisan political acrimony as supporters of sanctions against Iran cried “politicization,” while those seeking a more accommodative approach congratulated analysts for ending their practice of pandering to power. In other words, getting it right and getting it wrong led to the same outcome—political acrimony, charges of politicization, and calls for additional intelligence reform.

Betts also notes that a recurring pattern emerges in the assessment of intelligence failures that prompts a recurring pattern in associated recommendations for reform:

The most frequently noted sources of breakdown in intelligence lie in the process of amassing data, communicating them to decision makers, and impressing the latter with the validity or relevance of the information.

… For this reason, official post mortems of intelligence blunders inevitably produce recommendations for reorganization and changes in operating norms.Footnote44

There is always a chance that reorganization can correct deficits in reporting channels and responsibilities, increase the ability of managers to monitor individual performance and output, and better connect the elements of a workforce that need to be better connected. Nevertheless, savvy bureaucrats often consider “reorganization” to be akin to rearranging the deck chairs on the Titanic, a knee-jerk reaction that can be easily executed by leaders who are desperate to act even though they neither understand the problems they are facing nor have any constructive ideas about how to solve them. By contrast, intelligence reform rarely focuses on improving intelligence tradecraft, empowering analysts, or increasing the flow of information and debate about ongoing estimates within and across organizations. As Douglas MacEachin has observed, this focus on organizational solutions at the expense of improved analysis can create a situation whereby old problems are simply sent “to live in new residences.”Footnote45

FIXING INTELLIGENCE

Although the logic behind Betts’s second proposition is compelling, it has not stopped scholars from driving deeper into analysis, warning, and the way officers and officials respond to warning to see if some inroads can be made into the “inevitability problem.” Unlike the critique of the first proposition, which is based largely on disagreements about metrics for assessing the clarity and quality of indicators, analysis, and warning, those who address the second proposition broaden the empirical and theoretical aperture of inquiry, focusing on the intelligence–policy nexus, the logic of the international setting, and the response to warning to reassess why specific intelligence failures occur.Footnote46 Those who address the second proposition do not reject it out of hand; instead, they suggest that it just might be possible to reduce meaningfully the likelihood of strategic surprise attacks and associated failures of intelligence. While these scholars admit that the availability of signals and the quality of analysis and warning available prior to an attack or incident can vary, they also assume, like Betts, that signals will be present in the intelligence pipeline prior to surprise and intelligence failure. These scholars focus on explaining why it is so hard to get policymakers to respond to warning.Footnote47

The Intelligence–Policy Nexus: Reassessing the Human Factor

One approach, adopted by Uri Bar-Joseph and Rose McDermott, is to depart from the rather “antiseptic” explanations of bureaucratic and personal behavior that animate much of intelligence studies. Bureaucrats in this traditional approach act in a way depicted by the second wave of the bureaucratic politics literature, epitomized by Graham Allison’s interpretation of organizational behavior during the Cuban Missile Crisis, Essence of Decision.Footnote48 Organizational affiliation influences the facets of an issue that draw bureaucrats’ attention, members of an agency tend to take on their organization’s interests as their own (e.g., what’s good for General Motors is good for America), bureaucrats defend their organizational domain from would-be competitors, and existing standard operating procedures dominate organizational responses. Although these tendencies and biases can produce suboptimal outcomes, they are depicted as a somewhat rational response to an organizational setting. By contrast, Bar-Joseph and McDermott adopt an approach more reminiscent of the first wave of the bureaucratic politics literature, which has experienced a revival of sorts with the publication of a series of interviews with members of the Harry S. Truman administration about the “debate” surrounding the decision to build the hydrogen bomb.Footnote49 In this approach, organizational conflict is visceral, petty, and nasty, as individuals settle old scores and pursue pet projects and other personal interests. Bureaucratic infighting becomes a proxy for the real issues at stake, stemming from wounded amour-propre, while sheer stupidity, personality disorders, and even physical ailments rear their ugly heads.Footnote50 Individuals’ preferences and personalities influence outcomes, especially their response to information that threatens their organizational and personal priorities.

As Bar-Joseph and McDermott note, in the cases of Operation Barbarossa, the Chinese intervention in the Korean War, and the 1973 Yom Kippur War, Soviet, U.S., and Israeli forces, respectively, suffered devastating surprise attacks despite the fact that senior officers, officials, and analysts received scores of increasingly ominous warnings, informal estimates, and finished intelligence products indicating what was about to unfold.Footnote51 Instead of having a logical impact on policy, these indicators were blocked, ignored, or explained away by some analysts and leaders, producing disaster. The source of this situation is found in individuals’ need for cognitive closure (maintaining certainty about policies selected and the likely course of future events), paranoia, and narcissism, which fuel a confirmation bias that prevents them from integrating new, discrepant information into their existing expectations and theories about the future. Before Operation Barbarossa, for example, the conspiratorially minded Joseph Stalin surrounded himself with sycophants whose modus operandi was “sniff out, suck up, survive.” Stalin suspected that provocateurs were behind increasingly dire warnings of impending attack. Under these circumstances, only the bravest officers and government officials would bring Stalin unpleasant information about Nazi preparations to invade the Soviet Union, and the “boss” generally responded to these indications and warnings with an order to shoot the reporting official as a traitor.

Things were not quite as draconian in General Douglas MacArthur’s Far East Command as United Nations (UN) forces marched toward the Yalu, but MacArthur too seemed highly suspicious of the motives of others across the U.S. government. MacArthur was surrounded by a devoted staff who wanted nothing more than to enable his strategic ambitions to achieve a quick and decisive victory on the Korean Peninsula. As a result, officers and analysts who should have transmitted warnings sugarcoated reporting that indicated that the People’s Liberation Army had moved across the Yalu in force and would ambush UN troops as they continued north. MacArthur and his staff failed to act on this intelligence, and U.S. units were consequently forced to undertake the longest retreat in U.S. military history.

In the case of the Yom Kippur War, the locus of failure rests more squarely with Israeli intelligence, especially with Eli Zeira, director of Israeli Military Intelligence. Zeira believed in the “concept,” the idea that without air superiority, Egypt would not attack Israel. Zeira believed so strongly in the concept that he felt no need to activate a clandestine penetration of the Egyptian telephone network, which, when activated, allowed the Israelis to listen in on discussions among Egyptian officials and officers. Despite overwhelming evidence to the contrary, Zeira did not think that the time had arrived to risk detection of the phone taps by activating them. Instead of telling Israeli officials of his decision, he simply told them that the system had detected nothing. After the war, Israeli officials were shocked to hear from the investigating Agranat Commission that the phone taps had detected no indication of attack because Zeira had never activated the system.

Personality disorders, a counterproductive “command climate,” and even stupidity can stymie an effective response to warning despite the presence of high-quality signals and analysis. As Bar-Joseph and McDermott note, “A warning that may convince one decision-maker that a threat is imminent might leave another still disbelieving. In this sense, the personalities of the decision-makers and their belief systems are as critical to the final outcome as the quality of warning.”Footnote52 Indeed, the real “miracle at Midway” might not have been the availability of accurate signals and quality intelligence, but the presence of a high-functioning staff and a commander who managed to act effectively on available warning, which happens to be the most important lesson revealed when Dahl and Levite “process traced” success. The insights provided by Bar-Joseph and McDermott are thus important—it is wrong for the field of intelligence studies to assume that analysts and policymakers possess at least a modicum of intelligence and professional competency and are free of mental pathology. If anything, their work makes a convincing case that more items should be added to the Pandora’s Box of problems that make intelligence failures inevitable.

The Logic of the Situation

A different avenue taken by scholars is to highlight the international setting that is likely to foster incentives for launching surprise attacks that can lead to failures of intelligence. In other words, by focusing mostly on the causes of intelligence failure, theorists had overlooked the dialectical international setting that produced incentives and perceptions leading one party to utilize surprise while leading another party to fall victim to it.Footnote53 Alerting analysts and policymakers to when they might become vulnerable to the gambits launched by weaker adversaries could thus provide a “diffuse” indicator that an emerging setting is increasing the possibility that surprise might occur. Moreover, within the analytical process and the intelligence–policy nexus, there are also signs that untoward and unanticipated events might be imminent—indicators that often emerge when the system “is blinking red.” Persistent discussions about the same unusual activities, which drift around among different observers housed in separate offices or agencies, are one sign that analysts are encountering developments that do not conform to dominant schema. Rumors that the opponent is contemplating fantastic or “harebrained” schemes and the emergence of “dissenters” who champion disturbing views or hold contrarian opinions about the “consensus view” are indications that it is time for analysts, intelligence managers, and policymakers to take matters especially seriously. Knowledge of these situations and indicators might serve to “inoculate” analysts and policymakers against surprise; they will be alerted to the fact that when these indicators are present, trouble is likely to follow and the time to act has arrived.

Admittedly, feelings of unease, rumors of bizarre schemes, and the presence of vocal malcontents who are critical of accepted wisdom are a weak basis for action, especially when one considers the exacting standards for warning set by Levite and Dahl. Further compounding this weakness is the response dilemma inherent in reacting to warnings. There are always doubts about even compelling signals ex ante—analysts can never be 100% certain that they know who is doing the acting, what is about to happen, and where, when, and why it is about to occur. Nevertheless, the costs of responding to warning are real, known, and quite tangible, a fact that often leads policymakers to wait and see what develops before they take actions that could bring about a situation that they would prefer to avoid. Surprise attacks, however, unfold on the narrowest of technical, tactical, and operational margins because they ipso facto rely on surprise to succeed. As a result, policymakers do not necessarily have to go “all in” when it comes to responding to somewhat imprecise and uncertain signals and analysis of what could transpire. Instead, they could adopt modest and relatively low-cost changes in alert postures, standard operating procedures, intelligence, surveillance, and reconnaissance activities, or even diplomatic activities and public diplomacy that might force an opponent to reassess their prospects of success, especially their prospects of gaining the element of surprise.

In the days leading up to the 9/11 terror attacks, the President’s Daily Brief lacked specific actionable intelligence about al-Qaeda’s scheme; it reported “rumors” of the organization’s interest in hijacking airliners. Would a modest observable change in airport security procedures—alerting screeners to the threat, increasing the presence of security personnel at checkpoints, announcing that federal agents would be carrying firearms on board commercial aircraft—have forced al-Qaeda to reassess their plans, buying time for law enforcement to roll up the plotters? By not painting the requirements, risks, and costs of response in such draconian terms, it just might be possible to increase competent policymakers’ receptivity to what will inevitably be less-than-certain or less-than-compelling signals, analysis, and warning of an opponent’s impending action.

CONCLUSION

Several observations emerge from this overview of the scholarship prompted by the two propositions that comprise Betts’s paradox of intelligence failure. One observation is that, while the first proposition—there will always be accurate signals (or even useful analysis and warnings) in the “pipeline”—constitutes a necessary point of departure for intelligence studies, it also has acted as a bit of a “red herring.” No one has objected to Levite and Dahl’s point that the signals available to analysts and decisionmakers before the Battle of Midway were far more accurate, detailed, and compelling than the signals available before the attack on Pearl Harbor. Nevertheless, suggesting that accurate signals are necessary for intelligence success does not demonstrate that they are sufficient to prevent intelligence failure. Most scholars still accept Betts’s proposition—that signals and analysis of admittedly varied quality and timeliness are available to policymakers before some strategic surprise and associated intelligence failure. In any event, it is time to move on; if signals (or even useful analysis and warnings) are a constant, they cannot explain the variations in outcomes between intelligence failure and intelligence success.

Another observation emerges as one compares the scholarly reaction to the paradox of intelligence failure in its entirety. On the one hand, both Levite and Dahl treat the issue of response almost as an exogenous factor in their explanation of intelligence success—they assume that decisionmakers will respond in a rational and effective way when presented with accurate and compelling signals, analysis, or warning. On the other hand, those who address the proposition concerning the inevitability of intelligence failure focus on the decisionmaker as the critical variable. Bar-Joseph and McDermott suggest that it is a mistake to assume that decisionmakers will respond in a rational and effective way to accurate and compelling signals, analysis, or warning. Others have suggested that the likelihood of an effective response to warning can be increased by helping decisionmakers recognize when it is time to be on the qui vive. Methods might also be adopted to minimize the tradeoffs inherent in responding to warning, making it less likely that decisionmakers will adopt a wait-and-see attitude. When combined, these two lines of effort suggest that receptivity to warning is the fulcrum on which the paradox of intelligence failure rests.

This points to a third observation concerning what would have to be achieved to tame the paradox of intelligence failure. To overcome the paradox, one would have to overcome the most common sources of failure noted by Betts: collecting data, communicating data and analysis to decisionmakers, and helping decisionmakers recognize the validity or relevance of the information provided. This is exactly the point illustrated by Levite and Dahl in their discussion of the Battle of Midway. It was not just the presence of high-quality signals in the intelligence pipeline that guaranteed success. Instead, it was the ability of the defenders of Midway to complete the entire process of collection, analysis, warning, and response that produced success. By contrast, as Bar-Joseph and McDermott suggest, this process can be sabotaged by leaders who project their own personal or policy desires, character flaws, or mental disorders onto those around them, while others have suggested that a general failure to recognize the urgency of a warning situation, and a general reluctance to take costly action in response to warning of uncertain accuracy, stymie effective response.

In all the cases of intelligence failure and success encountered in this article, systems were “blinking red” in the weeks, days, or even hours before the occurrence of some untoward event. Nevertheless, the presence of varying degrees of accurate and credible signals, analysis, and warning failed to produce an effective response or, in most cases, any response at all. The one exception to this pattern of failure was the Battle of Midway. In this lone instance, analysts and officers shared a common view of the general threat faced and the immediate urgency of the situation. They also shared a common understanding of who needed analysis and warning and who was responsible for acting on that warning. In other words, intelligence was used effectively at Midway because it benefited from effective and unbiased staff work that targeted the right individual for support and a decisionmaker who not only knew they were responsible for acting on warning, but who was also capable of using warning in a rational way to improve their chances of heading off disaster. The solution to the paradox of intelligence failure cannot be found in any one stage of the process of collection, analysis, warning, and response, but in efforts that guarantee that the entire process, despite less-than-perfect execution of key elements, is completed. After all, if warning that exists only in the minds of analysts is useless, then warning that reaches decisionmakers who are unsure of their role or are incapable or unwilling to act on it is simply a waste of resources. Intelligence failure is still inevitable, but those who consider the warning and response problem in its entirety can create a situation whereby intelligence failure and an associated surprise attack become less likely.
