Angus Fletcher and Thomas Gaines
It’s an easy morning outside Washington, DC. But we’re making things hard on an Army student.
“Good plan,” we say. “Now give us another.”
The student’s brow furrows. What he’s wondering is: Why would I come up with another plan when my first plan is good? But he’s a dutiful soldier, so he tries to comply. And it’s there that he hits his real mental block: How do I come up with another plan when my first plan is good? After all, if nothing is wrong with my first plan, then what could be productively changed?
That the student would think this way is pure logic. Logic’s core teaching is that there’s one optimal decision, one error-free plan. If that plan has been identified already, it’s thus not only pointless but impossible to come up with a smart alternative. Yet is logic right about this? Is there always one ideal course of action?
To tease out the answer, let’s start by being precise about logic. Logic has many colloquial meanings, but strictly speaking, it’s the formal system of syllogistic induction and deduction defined by Aristotle in his fourth-century BC masterwork Organon; practiced by philosophers from Thomas Aquinas to Immanuel Kant to George Boole to William Stanley Jevons to Gottlob Frege to Bertrand Russell; taught across the globe as data-driven decision-making, evidence-based reasoning, and critical thinking; hardwired into the computer brain (the arithmetic logic unit, or ALU) to generate mathematical spreadsheets, machine-learning protocols, and fact-crunching algorithms; and drilled into the twenty-first-century US military through PowerPoint slides and standard operating procedures.
Logic has achieved this ubiquity in smart human systems because, as Aristotle proved and modern computer science has confirmed, it’s a potent intellectual aid. Given enough data, logic is always correct. Given a timeless rule, logic can execute perfectly. Given total environmental supremacy, logic can maximize efficiency.
Yet these powers don’t make logic omnipotent. In fact, quite the opposite. They make logic very fragile. Why? Well, in life, there is never enough data, the rules are always evolving, and logic can never completely impose its will. That’s because life is biology, and biology is evolution by natural selection. It’s a contest that oozes fog and friction. It’s a clash in which each competitor is constantly striving to out-innovate the others. It’s a volatile, uncertain domain that runs on chaos, emotion, and creativity. It is, in other words, war.
Which is why, as Carl von Clausewitz observed two centuries ago, life is continually breaking logic. And why, as Napoleon noted, “Faire son thème en deux façons”—you always need a second option. Your first plan can be optimal. Hell, your first plan can be mathematically ideal. But life doesn’t care. Life will shatter it. And then you will need another plan. Or you will die.
That’s the hard news. Here’s the easy: Your brain can invent another plan. And it can do so without difficulty. All you need to do is hit pause (temporarily) on logic.
Pausing Logic
The animal brain was born, hundreds of millions of years ago, into a hazy world of emergent threats and opportunities. To survive those threats and leverage those opportunities, the animal brain evolved a special mental mechanism: narrative.
Narrative is a nonlogical mode of intelligence that operates by connecting causes to effects, or, more technically, by positing causal relationships between physical actors and actions. Narrative speculates on the origins of events and predicts the outcomes of maneuvers. It strings together doings and doers (animate or otherwise) into long, branching stories. And it does so not via ironclad laws of deduction but via flexible possibilities for motion.
This flexible method means that narrative is often wrong. Unlike logic, which computes what must be, narrative surmises what could happen. And what could happen isn’t what will happen. It’s just one possibility among many, leading overconfident narrative thinkers into the potentially fatal error of conflating a plausible hypothesis with a certain truth.
But even though narrative is frequently incorrect, it’s still an extraordinarily useful mental tool. That’s because narrative can operate in low-data (and even no-data) environments. In such environments, narrative allows our brain to guess what could work, or in other words, to make a tentative plan. And then narrative goes further. It allows our brain to predict what will happen if our tentative plan starts to work. If the prediction bears out, our brain leans into the plan. If not, our brain switches to another tentative plan and ventures again.
Narrative, in other words, equips our brain to run the scientific method of practical prediction and experiment, making educated guesses that we refine by testing them in action.
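To make that loop concrete, here is a minimal sketch in Python (purely illustrative; the function names are hypothetical placeholders for whatever a real planner, predictor, and observer would supply):

```python
# Illustrative sketch of the guess-predict-test-revise loop described above.
# Every name here (propose_plan, predict, act, observe) is a hypothetical
# placeholder for whatever a real planner and environment would provide.

def narrative_loop(propose_plan, predict, act, observe, max_attempts=10):
    """Guess a tentative plan, predict its early effects, test against what
    actually happens, and switch to another plan whenever the prediction fails."""
    for _ in range(max_attempts):
        plan = propose_plan()        # guess what *could* work (low- or no-data)
        expected = predict(plan)     # what we should see if the plan is working
        act(plan)                    # venture: try the plan in the world
        observed = observe()         # check the early returns
        if observed == expected:     # prediction bears out: lean into this plan
            return plan
        # prediction failed: discard the plan and venture again with another
    return None                      # nothing survived contact; keep improvising
```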
This method enabled our brain to survive—indeed, thrive—in the unstable, unknowable ecosystems in which early humans existed. And it remains human intelligence’s root source. That source is not logic, because the human brain, unlike computer AI, is not especially logical (if it were, math and critical thinking wouldn’t be so hard; we’d all be born statisticians who always acted free of bias). The human brain is, however, innately adept at narrative. Children don’t have to be taught how to imagine new stories; their brains do it naturally from birth, with a speed and flexibility beyond any other species. So it is that we humans can imagine pioneering technologies, plot unprecedented futures, and invent original strategies. So it is that we have concocted plans that have conquered everything on earth (except each other). And so it is that when a good plan fails, we can always craft another.
Why then is our Army student having such a hard time? Why is his brain not doing what nature has equipped it to do? Because the student has been trained out of his instincts. He has been drilled so hard in logic that his narrative powers have atrophied—and even when he begins to flex them back to life, he no longer trusts them. He sees them, from the perspective of critical thinking and statistical data, as naive guesswork. Which makes him embarrassed to voice his narrative speculations in public.
Our first step in the classroom is therefore to restore the student’s confidence in his biology. It’s to encourage him to reactivate the narrative machinery that makes up most of his intelligent neuroanatomy. And it’s to get him to exercise that machinery, growing its potency through use, like an arm muscle strengthened by pull-ups.
At which point, we reach the second step: going beyond biology into artificially enhancing the student’s cognitive performance. Because as recent work in neuroscience and narrative theory is demonstrating, we can do more than activate the human brain’s latent powers of narrative intelligence. We can improve them.
Improving Narrative Intelligence
Because narrative is nonlogical, and because logic has traditionally been viewed by the US military (and the American educational system) as the only trainable form of intelligence, there is currently no broadly implemented curriculum for improving narrative intelligence. That curriculum, however, exists. It has been successfully piloted in award-winning undergraduate and graduate coursework developed at Ohio State’s Project Narrative, the world’s leading academic institute for narrative theory. And although the curriculum is intrinsically nonlogical, it is based upon a scientific method, rooted in neuroscience and evolutionary biology, that augments the brain’s cognitive performance in empirically measurable ways.
This curriculum, we believe, can have significant applications for the US military. To illustrate its potential range and utility, we’ll outline three examples from our current research, conducted with Army teams from the Command and General Staff College (led by Dr. Richard McConnell) and the special operations community.
The first example is a suite of new techniques for boosting creative thinking. Creative thinking is the neural tool that our brain uses to invent original tactics, strategies, and plans of action. And at present, almost all the training in creative thinking provided by the US military (and also by US universities and corporations) is rooted in logic. It emphasizes processes such as divergent thinking and design (both of which computers can do far better than humans). And it neglects narrative (which computers cannot perform at all).
To add that narrative component, empowering our neurons to devise more original strategies and action plans, we need to feed our brain the opposite of spreadsheet data. We need to feed it exceptional information. Exceptional information is the exception to the rule, the statistical anomaly, the rogue datapoint that AI regresses to the mean. In life, that n = 1 is the first indication of an emergent threat or opportunity. It’s a sign that the environment is changing or that a novel actor has appeared. It’s a prompt to start evolving our behavior.
For all these reasons, our brain evolved to treat exceptional information as critical to our adaptive performance. But logic—and modern education—has trained our brain to filter it out. After all, in the mostly stable and knowledge-rich environments of advanced human civilizations, exceptional information is usually just random noise. It’s not a signal of potential change because the system isn’t changing. Better, then, to just ignore it, which is what the dutiful military brain does. Most of us register less than one percent of the exceptional information in our environment—and high achievers at standardized tests and regular operations typically notice even less.
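To see the filtering in code, here is a small illustrative sketch (hypothetical readings, plain Python): a mean-based summary dilutes the one rogue datapoint into the average, while an exception-first pass surfaces it as the thing worth a second look.

```python
# Illustrative only: hypothetical readings with one rogue datapoint (the n = 1).
readings = [10.1, 9.8, 10.0, 10.2, 9.9, 10.1, 37.4, 10.0]

# The "logical" summary: regress everything toward the mean.
# The anomaly barely shifts the average, so it effectively vanishes from the report.
mean = sum(readings) / len(readings)
print(f"mean = {mean:.1f}")          # ~13.4; the 37.4 is diluted, not examined

# The "exceptional information" pass: flag whatever sits far from the rest,
# because in a changing environment that outlier may be the signal.
spread = (sum((x - mean) ** 2 for x in readings) / len(readings)) ** 0.5
exceptions = [x for x in readings if abs(x - mean) > 2 * spread]
print(f"exceptions = {exceptions}")  # [37.4] -- the prompt to start adapting
```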
To act creatively, formulating the unconventional actions necessary to thrive in the volatile uncertainty of contested spaces, we have to reverse this civilized habit. We have to train ourselves to focus on what is different about similar objects, rather than abstracting them into general types. We have to practice identifying changes rather than enforcing routines. We have to, in short, un-logic our thinking. Which is what we’re working with the Army student to do. And while the work is not easy, it is productive. The more that the student is able to detect exceptional information, the more he’s able to make another plan—and keep making more plans, adapting fluidly as the situation evolves. As one of the student’s regular instructors observes in response to our training: “Creative thinking training is absolutely critical in enabling organizations to evolve in an increasingly complex world that requires precise solutions.”
The second example is communication. Communication in the military is currently a paradox: it has been made precisely regimented yet remains gallingly inefficient. Perfectly lucid orders are dispatched and then unpredictably misunderstood, leading to confusion and disorganization. This paradox reflects the fact that the human brain has evolved to think primarily in narrative (see above). When a human brain is given a logical instruction, the instruction therefore does not seamlessly compute. Instead, it collides with the brain’s nonlogical nuts and bolts, triggering sputters and misfires.
To solve this communication glitch, the US military has tried to make human brains more logical, and in stable domains, with consistent psychological reinforcement, that approach can work. But in fast-changing environments, it invites error and muddle. In such environments, the most effective way to improve communication is instead to work with the brain’s inbuilt narrative cogs, which we can do via two simple reforms:
1. Replace logical definitions with narrative actions. That is, rather than abstractly saying what something is, military orders should supply a concrete example of what to do. For example, when we’re asking a student to come up with a different plan, we shouldn’t say: “The fourth component of developing a course of action during the military decision-making process is distinguishability.” Instead, we should say: “Imagine that your opponent has blown up every shred of your original strategy. Explain what you do now.”
2. Limit information. Current military communications, whether in email or PowerPoint, are awash in data. Data is great for logic, which is why computers get smarter the more information they have. But human brains, because they’re primarily narrative, struggle to handle more than three datapoints concurrently. If a fourth datapoint is added, the brain’s grip on the other three will generally erode. An effective way to improve communication is therefore to accept that less is more. Chop down PowerPoints to three slides; distill instructions to a trio of commands that generate all essential action. Be incisive instead of comprehensive. Otherwise, your efforts at clarity will prove counterproductive.
The third example from our research is the innovation of human-AI partnerships. Human-AI partnerships are currently optimized for logical decision-making, not narrative cognition. They maximize AI performance without also seeking to maximize human mental performance, leaving the partnership less than the sum of its parts.
To address this shortcoming, we’re working to pilot a new approach to human-AI partnerships with noted tech entrepreneur Erik Larson and teams across the US military. That approach employs a two-part narrative protocol to:
1. Train up the narrative skills of AI’s human operators.
2. Program AI to (a) identify when the data necessary for computation does not exist and (b) respond by shifting authority to its human operators.
This protocol optimizes the intelligent performance of human-AI systems by maxing out the distinct capabilities of both the human brain and the computer ALU. When AI recognizes that a situation is too low-data for logic to operate, it transfers power to an operator who is trained to deploy narrative to innovate and maintain initiative, leveraging the human brain’s core cognitive strength (adaptability in volatile and uncertain environments) to counter AI’s core cognitive weaknesses (fragility under those same conditions).
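As a rough sketch of how part (b) of that protocol might behave (hypothetical thresholds and function names, not any fielded system), the handoff logic could look something like this:

```python
# Rough sketch of "shift authority when the data runs out." The thresholds,
# names, and confidence measure are hypothetical placeholders.
from dataclasses import dataclass

@dataclass
class Decision:
    source: str   # "AI" or "human"
    action: str

MIN_SAMPLES = 30        # below this, the statistics are not worth trusting
MIN_CONFIDENCE = 0.8    # below this, the model should not be allowed to act alone

def decide(samples, model_confidence, ai_recommendation, human_operator):
    """Let the AI act only when it has enough data and confidence;
    otherwise hand authority to the narrative-trained human operator."""
    if len(samples) < MIN_SAMPLES or model_confidence < MIN_CONFIDENCE:
        # Part (b): the situation sits outside what the data supports,
        # so power transfers to the human.
        return Decision(source="human", action=human_operator(samples))
    return Decision(source="AI", action=ai_recommendation)

# Example: only three observations, so the call goes to the human operator.
print(decide(samples=[1, 2, 3],
             model_confidence=0.95,
             ai_recommendation="hold position",
             human_operator=lambda s: "improvise a new plan"))
```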
These three examples of narrative training aren’t procedurally difficult to implement. But they’re challenging, culturally and emotionally, because they jar with the logical habits instilled by modern military education. To break through the discomfort of the new, it can therefore help to keep in mind two overarching points. First, although narrative cognition is nonlogical, it can have inductively quantifiable outcomes: Dr. Kenneth Long, associate professor of logistics at the Command and General Staff College, has estimated that an emphasis on exceptional information and other narrative processes has the potential to save the US military $30 billion annually in measurable efficiencies. Second, a commitment to narrative does not equate to a rejection of logic. In fact, quite the opposite. Narrative thinking is high gain but also high risk, so it’s best reserved for situations when the existing rules are not working or when data is too sparse for rational computation. The rest of the time, the default of intelligent organizations should be logic.
For modern militaries, the smartest overall strategy is thus to treat logic and narrative as complementary forms of intelligence. Logic is extremely useful in the predictable, information-thick environments outside of combat. But when harmony breaks down, escalating volatility and uncertainty, the most effective route to a new armistice is to engage our brain’s narrative hardware. Which is why, even during peacetime, it’s a good plan to pause our logical standard operating procedures to train creative thinking, practice battlefield communication, and promote innovation.
So that when life’s primordial chaos returns, the hard can go a little more easy.