David V. Gioe
It was a clear autumn day in Washington, DC, on October 27, 1941, when President Franklin Roosevelt delivered his Navy Day speech to the American people. Halloween was later that week, but it was Adolf Hitler and his Wehrmacht war machine that were scaring the administration. Roosevelt used his address to highlight the threat posed to the Western Hemisphere—America’s hemisphere—per the longstanding Monroe Doctrine. The Japanese surprise attack on Pearl Harbor was still six weeks away, and Americans were leery of getting involved again in Europe’s perennially bloody wars. Charles Lindbergh and the “America First” movement, which represented America’s isolationist current, objected to greater involvement. Roosevelt needed to make the case that the Nazi threat to America was real. He noted that, earlier that month, a German U-boat had attacked an American destroyer, the USS Kearny, causing eleven American combat fatalities. “America has been attacked,” Roosevelt declared. “The USS Kearny is not just a navy ship. She belongs to every man, woman and child in this nation. . . . Hitler’s torpedo was directed at every American.”
Roosevelt left out the minor detail that the Kearny was busy raining down depth charges on a German U-boat when she was torpedoed. In case the attack on the Kearny wasn’t enough to convince skeptical Americans of Hitler’s devious transatlantic designs, Roosevelt pressed his point with further evidence: “Hitler has often protested,” Roosevelt continued, “that his plans for conquest do not extend across the Atlantic Ocean. But his submarines and raiders prove otherwise. So does the entire design of his new world order,” Roosevelt stated ominously. “For example, I have in my possession a secret map made in Germany by Hitler’s government . . . of the new world order.”
“It is a map of South America and a part of Central America, as Hitler proposes to reorganize it . . . into five vassal states, bringing the whole continent under their domination . . . [including] our great lifeline—the Panama Canal. . . . This map,” Roosevelt thundered, “makes clear the Nazi design not only against South America but against the United States itself.”
In addition to the millions of Americans tuning in their radios to Roosevelt’s revelations of Nazi treachery, the Germans were listening too. They vociferously denied the authenticity of Roosevelt’s map, but then again, it was marked “Geheim” (secret), so of course they would disown it, wouldn’t they? German propaganda minister Joseph Goebbels rejected FDR’s “absurd accusations.” In his estimation, this was a “grand swindle” intended to “whip up American public opinion.” The problem was that Goebbels was right. The map was a forgery. He didn’t know who the real authors were, but British intelligence did—because it was they.
Operating out of the forty-fourth floor of New York’s Rockefeller Center, the vanilla-sounding British Security Coordination office had a remit that included, in part, getting America into the war. The Roosevelt administration was already reaching across the Atlantic with all sorts of civilian and military aid, but it wasn’t coming fast enough during the dark days of the Blitz, and each new initiative to support the British was met with howls of indignation from the isolationists in Congress. Prime Minister Winston Churchill believed that the Americans would eventually get around to doing the right thing; they just needed a prod in the right direction—a prod in which the ends justified the means.
British intelligence recalled their previous success at stoking America’s ire when, in February 1917, desperately seeking American entry into the Great War, they passed the Americans the infamous Zimmermann Telegram, albeit with a phony cover story to hide the fact that they were routinely breaking American diplomatic codes. The Zimmermann Telegram, intercepted and decrypted by British codebreakers, offered a secret deal in which the Germans promised to return New Mexico, Arizona and Texas to Mexico if the latter would declare war on the United States in the event that Washington declared war on Germany.
In that instance, the British artfully used the telegram, authored by German foreign minister Arthur Zimmermann, to overcome characteristic American concern about foreign entanglements. President Woodrow Wilson was in a pickle. Just a year earlier, in the election of 1916, correctly sensing the national mood of nonintervention, he had run and won on the slogan “He kept us out of war.” Now he felt that war might be inevitable, but how could he reverse himself? A week after Wilson received the telegram from the British, he authorized its publication in (or, in today’s vernacular, “leaked it to”) the media. It made the front page on March 1, 1917.
Notably, many Americans suspected the dark arts at play, assessing the telegram as a forgery. The wind was taken out of their conspiratorial sails when, two days later, none other than Arthur Zimmermann himself helpfully confirmed that his telegram was genuine. By the next month, America was at war. Not only did the British weaponize the explosive content of Zimmermann’s telegram, but Wilson used it to change tack as well, each to their own political ends.
The Zimmermann Telegram was but one of several factors, including German unrestricted submarine warfare, that led to American entry into World War I. The Americans tipped the balance in favor of the Entente powers, and Germany was forced to sign the punitive Treaty of Versailles in 1919, but such a peace could not last; exactly two decades later, the European powers were again at war.
The sequel to the Great War in Europe had been raging since 1939, the German Blitzkrieg seemed unstoppable, and Britain stood alone against it. By late 1941, America had made itself the “arsenal of democracy,” but, as the Kearny incident showed, American involvement actually went much further than that. In addition to Lend-Lease and similar arrangements, American warships and planes patrolled much of the Atlantic convoy route, guarding ships packed with millions of tons of American products—Britain’s tenuous lifeline for survival.
Still, the lost-tonnage projections for transatlantic shipping were unsustainable. In a war of matériel, Britain was going to lose, whereas America’s population and industrial potential were still largely untapped. The supplies sent by America were critical, but if Britain was going to do more than lose slowly, America needed to go all in. But where was the next Zimmermann Telegram to help this president lead his country to war? It seemed that, although Britain had affixed keys to every kite it had, lightning was not going to strike twice; but just maybe, this time, the British could bottle lightning for FDR.
Britain’s senior intelligence official in the United States, Sir William Stephenson, sat atop the British Security Coordination office and became fast friends with Roosevelt’s decorated proto-intelligence chief, William J. Donovan, then serving as coordinator of information, the forerunner of the wartime Office of Strategic Services, itself reconstituted in 1947 as the Central Intelligence Agency. Donovan and Stephenson were birds of a feather: self-made wealthy men, internationalist in outlook, and both combat heroes of World War I. Stephenson referred to the avuncular and paunchy Donovan as “Big Bill,” and Donovan affectionately labeled the smaller and trimmer Stephenson “Little Bill.” Years later, Donovan opined, “Stephenson taught us all we ever knew about foreign intelligence”—although perhaps some lessons were learned the hard way.
Despite the bonhomie between the “Bills,” Stephenson was using his friendship with Donovan to run unilateral propaganda operations against the isolationists in the American chattering classes. It was in this context that Little Bill handed the fake map (amongst other forgeries) to Big Bill, who presented it to FDR as a cat brings a mouse to its master, perhaps in a manner reminiscent of contemporary news media, not lingering over questions of authenticity because it was a scoop that Donovan had over his rivals in the military branches and J. Edgar Hoover at the FBI.
The State Department, on the other hand, assessed British “intelligence” relating to Latin America as forgeries, even complaining to the British Embassy about it. Assistant Secretary of State Adolf Berle, a man for whom the intelligence portfolio was a bothersome sideshow, was on the right track in his skepticism about the British intelligence that Donovan was feeding to Roosevelt. He had concerns regarding the reliability and veracity of the volumes of British intelligence that were finding their way to the Oval Office. Berle told his boss, Secretary of State Cordell Hull, “British intelligence has been very active in making things appear dangerous [in Latin America] . . . I think we have to be a little on our guard against false scares.”
Despite Berle’s suspicions, Roosevelt was not informed of the differing analytical lines within his bureaucracy. In fact, after the Germans cried foul, Roosevelt, responding to a question about the map’s authenticity, claimed the source was “undoubtedly reliable.” One scholar pronounced that “the most striking feature of the episode was the complicity of the President of the United States in perpetuating the fraud.” Another historian commented that British forgeries like the map were “truly a frontal assault on the rules of evidence.” Yet, like the Zimmermann Telegram twenty-four years earlier, the map served both the author’s and the recipient’s intended political purposes.
Roosevelt died, in all probability, ignorant of the map’s true provenance. But whether or not he, like Assistant Secretary Berle in Foggy Bottom, personally harbored any suspicions about the map’s authenticity, on that clear October day in 1941 Roosevelt needed it to be true, and that was what mattered. Thus, although the British put forward the fake news, Roosevelt was a willing political vessel for it. Several commentators have observed that fake news is not a fraud perpetrated on the unsuspecting, but rather willful belief. A shrewd political operator, Roosevelt was no novice to narrative shaping, but he was likely willing to suspend disbelief for his policy goals. Indeed, the mud of deception often slides into self-deception.
One commentator asserted that the purpose of fake news “is not to pose an alternative truth . . . but to destroy truth altogether, to set us adrift in a world of belief without facts, a world where there is no defense against lies.” Actually, the purpose of fake news isn’t to destroy truth; it is to manipulate, to weaponize information, at times made out of whole cloth, to achieve political or societal goals. America is no more “post-truth” than it is “post-gravity”; it’s just that the terrible repercussions will take longer to drop. Information alchemy is about spinning straw into golden political outcomes. Several commentators have suggested that, during the 2016 presidential election, Russian president Vladimir Putin sought to engender a general crisis of civic confidence in the American electoral system. That is a nice byproduct from his point of view, but even he knows full well that he can’t destroy American democracy—he just wanted to manipulate it toward his own ends.
Likewise, the saga of the fake map wasn’t a British assault on truth as such; it wasn’t intended to envelop the American people in an epistemological fog in which it was unclear who the aggressors in Europe were. The British needed a political—and, by extension, military—outcome, and they assessed that the best way to achieve it was through bespoke disinformation.
In today’s deluge of information and disinformation, enabled in part by social media as news-propagation outlets, the solution most often proffered is to “consider the source” as a way to separate wheat from chaff. Media outlets are trying to outcompete each other to earn their reputational halo. But, in the case of the fake map, Little Bill was a usually reliable source, and, if the British couldn’t be trusted, who could be? Indeed, the fall comes hardest when we are betrayed by trusted friends and those we admire. CIA’s own webpage homage to Stephenson is notably silent on the specifics of his greatest deception.
CIA has matured immeasurably from the heady and freewheeling days of the OSS, partly through the progression of intelligence officers from glorious amateurs to seasoned professionals, and partly in response to lessons learned from mistakes. Professional intelligence analysts are put through a rigorous analytical training pipeline that includes how to structure analysis, how to weigh sources, and how to consider competing hypotheses. They are taught that not every analytical conclusion is equally valid, and that nuances matter. They are taught to figuratively interrogate sources, to consider a source’s purpose in providing information, and to ask who the intended audience was. On the operational side, most raw intelligence generated by CIA’s case officers bears a health warning, a sort of caveat emptor, reminding analysts of what they should already know: “The source of the following information knew their remarks could reach the U.S. Government and may have intended to influence as well as inform.”
In fact, many of the tools that intelligence analysts use every day are borrowed from the practice of history, with critical thinking and a skeptical mind at the top of the list. The analytical cadre of Donovan’s nascent intelligence bureaucracy was staffed with the best minds from leading universities, raising the question of whether Donovan, in his haste to please his intelligence consumer in chief and scoop his rivals, even paused for any analytical scrutiny of what should have been treated as raw liaison intelligence.
Not everyone needs to be professionally trained as an intelligence officer or historian to wade through sources, but Hugh Trevor-Roper was both. Applying his craft to primary sources, he listed three questions that should be asked of every document: Is it genuine? Was the author in a position to know what he was writing about? And why does this document exist? Answers to these questions are the handmaidens of trusting information and halting the malign influence of fake news. Perhaps, before passing the map to Roosevelt, Donovan should have heeded the wise counsel of a different British subject, the historian E. H. Carr, who advised: “interrogate documents and . . . display a due skepticism as regards their writer’s motives.” Indeed, what intelligence analysts have in common with historians is that the best of the bunch are skeptics.
One practical way that skepticism ought to manifest itself in considering the source was offered by the historian and strategist B. H. Liddell Hart: “On every occasion that a particular recommendation is made, ask yourself first in what way the author’s career may be affected.” Or, as the Romans would have inquired, “Cui bono?” Who benefits? Maybe this level of skepticism sounds paranoid, but as the aphorism goes, you’re only paranoid if there is no plot. Or, applied to the twenty-first-century information wars, if there is no deception.
While considering the source is necessary, it is not sufficient—it’s a shortcut that too often turns into a handicap. Fact-based and objective reporting and analysis are surely the gold standard, but information consumers also have a role, even a civic obligation, to take some responsibility for what they allow themselves to accept as truth. It is this shortcut, crossed over into handicap, that manifests in demands that Facebook or Twitter do a better job of curating information on their platforms. Such demands elide individuals’ responsibility for skepticism and critical thought in the evaluation of evidence and argument. For the same reason that diet pills don’t work, it’s just not that easy; seeing results is going to take some discipline. Social-media sites, amongst others, are appropriately required to weed out extremist or illegal content, but filtering information is a more challenging feat. It would be convenient if they could run an algorithm to block bots and trolls, but disingenuous information, and especially fraudulent analysis of facts, would still remain. There is no Internet filter or setting that can remove conspiracy theory from the digital public square. Moreover, that might not be desirable in any case. It may be worth considering whether technological convenience, rapidly morphing into dependence past the point of no return, has a causal relationship to America’s contemporary intellectual helplessness.
Perhaps technology companies will develop a genius algorithm to filter out Russian bots and disable some troll accounts, but this will not stop overly credulous people from retweeting, sharing and reposting “news” that bears as much resemblance to reality as Cheez Whiz does to cheese. Despite significant strides, artificial intelligence remains ineffective against intellectually dishonest analysis, non sequitur conclusions and ideological spin. It is therefore dubious to hope that social-media sites will become guardian curators of fact-based knowledge and objective journalism. But there is no need to rely on technology companies to solve the problem of fake news. The do-it-yourself tools are readily available.
How does one begin to learn to discern fake news? By rediscovering the broad civic applicability of the historical method. It starts with modifying the national epistemological approach to acquiring knowledge; applied across the population of the United States, the impact could be profound.
Quite when America started deviating from critical thinking is unclear, but a test of American college students, the Collegiate Learning Assessment Plus (CLA+), shows that, in over half of the universities studied, there is no increase in critical-thinking skills over a four-year degree. The reasons for this are far from clear, but the pursuit of knowledge has become more argumentative, opinion-based and adversarial than illuminating. Research papers are reminiscent of watching a prosecutor lay out a criminal case on Law & Order.
The CLA+ findings track with an informal survey of professors’ experience at over a dozen American (and some international) universities. In the main, here is how a research paper usually unfolds: students set out a thesis about which they know very little at the outset, but about which they already seem to have well-developed, even passionate, opinions—as if they have skin in the game, as if the thesis were personal and deeply held. They spend the rest of the paper proving the validity of these opinions, like a court case, beyond a reasonable doubt. They comb through material, hunting for those key nuggets of evidence that support their thesis and ignoring equally important discoveries that don’t support their narrative. In the worst cases, logical fallacies are waved away because a conclusion “feels right” or “should be true.” Once enough similarly suggestive nuggets are accumulated, they are listed like items entered into evidence, often devoid of argumentation or theoretical framework. Moving on to their conclusions, students restate their strongest bits of evidence and pronounce their thesis proved; case closed. Rediscovering the historical method, and teaching the difference between argument and assertion, offers promise.
The starting point is to have a research question in mind. It is not a thesis at the outset; it is a question to be answered, ideally with bias explicitly stripped out of it. Working on the research question itself takes a great deal of time: phrasing and rephrasing, testing and reformulating it for just the right construction. The net difference may be only a carefully excised word, but the effect on the rest of the project can be significant. It might be the difference between “When did Saddam Hussein restart his WMD program?” and “Did Saddam Hussein restart his WMD program?” One is an important question for national security. The other contains a presupposition that led a country to war. Likewise, “Why is Kim Jong-un an irrational actor?” is a separate question from “Is Kim Jong-un an irrational actor?” American policy toward an international pariah with nuclear weapons hinges on the answer.
Once the research question is established, a method of inquiry is needed: a process by which the question might be answered. Research is a voyage of intellectual discovery in which unexpected information is not unwelcome, because there is, as yet, no thesis to prove. Most questions, of course, don’t have easy answers, and there are usually good arguments and reliable sources on both sides. The important thing is to grapple with all of the information and not cherry-pick supporting evidence or selectively exclude contradictory evidence. The historian J. H. Hexter argued that, rather than seeking evidence to bolster an initial thesis, one should actively seek out evidence that challenges or even disproves it. This is the heart of research and analysis, and it again calls on the craft of the historian to weigh the credibility of sources, to consider the original purpose of documents, and to test whether a conclusion makes sense in light of other knowledge.
Finally, before a thesis is published as news or breathlessly retweeted, it would be better to pause, consider on what basis the arguments might be criticized and, in light of these weaknesses, shore up analytical flanks. Intellectual “stress testing” is as important to public discourse as financial stress testing is to banking, yet it seems to be as much a lost tradition on college campuses as in American politics.
If the method just outlined seems like a foreign land to students, information consumers, politicians and an uncomfortable plurality of media outlets, then what went wrong? Why can’t America reliably separate fact, falsehood, opinion and reasoned analysis? As the CLA+ results suggest, the American educational system bears some responsibility for not exposing students to critical thinking—or at least not demanding some mastery of it before awarding a degree. But what happened, at a national level, to skepticism of claims, to questions without baked-in bias, to critical thinking itself?
It seems that in polarized America, political and ideological precommitments have superseded a skeptical mindset and even the desire, if not the capacity, for critical thought, leaving America frightfully vulnerable to fake news. Consider a scenario in which, upon proper methodological inspection, one discovers new information or arguments that expose an error in a personal stance. Would one swallow one’s pride and update one’s views, especially on social media—the platform of personal record? What would one’s political tribe say after such an admission? Would they, following the scientific method, attempt to replicate the experiment to verify the results and update their own thinking? Or would there instead be a feeling of political or social betrayal? Professor Jonathan Haslam of the Institute for Advanced Study in Princeton considered it a sign of intellectual immaturity to read only those thinkers who reinforce preexisting beliefs. If that is the case, is American society intellectually regressing in a closed echo chamber of self-reinforcement? In an age of fake news, whether information is true matters less than whether it is useful for supporting one worldview or discounting another. Thus, the utility of information is now more important than its veracity.
Professors don’t help matters when their syllabi require students to use only certain pre-vetted source material for research projects. These well-meaning professors are trying to help students avoid relying on fake news or highly biased books, journal articles and websites, but the practice actually harms students because it doesn’t require them to critically evaluate sources or material. Instead, it is the equivalent of intellectual training wheels for students well past their primary education. Students with helicopter professors learn that they need only color inside the safe lines and they can’t go too far astray. Yet by circumscribing the known world to pre-vetted material, our universities are failing to live up to their mandate to prepare students for the real world, where there won’t be anyone to screen what information these newly minted alumni are consuming. It would be better to let students make—and then learn from—their mistakes in the structured halfway house of university life than to protect them until graduation and then push them out of the nest into a world of fake news and disinformation of which they have had no previous experience. This increasingly widespread benevolence in university courses probably also contributes directly to the increasing helplessness, and thus vulnerability, of post-university civic life.
This is more than one elbow-patched curmudgeon’s complaint about American education and society; suggested here is a timeless strategy for defense against fake news. It is not foreordained that America is doomed to a virtual future of reflexive retweeting and conspiracy theories parading as news. Encouraging a different approach to discovering knowledge is part of the solution for an overly credulous population. As was demonstrated during the 2016 election, Americans seem particularly vulnerable to information war, and it appears that Putin’s minions will be back again in 2020 and beyond. Yet rediscovering the tools of the historian—skepticism and critical thinking—can help develop a more resilient national character as a key pillar of future American security.
David V. Gioe is an assistant professor of history at the U.S. Military Academy at West Point, where he also serves as the History Fellow for the Army Cyber Institute. Before coming to academia, he served as an intelligence officer in both analytical and operational capacities. The analysis expressed here is his own and does not represent any government agency or department.