September 1, 2016
Psibernetix, it seems, has built an artificial intelligence smarter than a fighter pilot. As I mentioned here at the beginning of the month, the company hatched at the University of Cincinnati has developed software for a Raspberry Pi machine that has defeated at least one retired USAF fighter pilot in simulated combat. Yes, there’s something inherently scary about that. But before the RAAF and every other air force get too excited about drones that could turn on us, we might consider what “inelegant messes” dwell at the heart of the software suites that enable their capabilities. Samuel Arbesman, scientist-in-residence at Lux Capital and a senior fellow at the University of Colorado, argues in his new book Overcomplicated: Technology at the Limits of Comprehension (Current, 2016) that engineers’ endlessly adding to software without understanding the underlying code is “making the world indecipherable.” The question is what can be done about it.
As Dan Goodin recently wrote for Ars Technica, this is true even in cyber. The malware platform GrayFish, “the crowning achievement of the Equation Group” at the NSA, “is so complex that Kaspersky researchers still understand only a fraction of its capabilities and inner workings.” Think about that—Eugene Kaspersky’s people, who make up one of the biggest cybersecurity outfits on the planet, have the code—all the code—and they still can’t figure out what it does. That won’t always be the case for stolen cyber weapons—and that’s a whole other talk show—but the failure is ominous.
Inquiry in the field of software complexity is difficult, so Arbesman's book is in places a not-fully-satisfying collection of anecdotes, interspersed with occasional references to promising research. He notes that Mark Flood and Oliver Goodenough of the US Treasury’s Office of Financial Research have written of how contracts are like code: representing them as automata, they argue, would render legal analysis rigorous. Maybe, but as Keith Crocker and Kenneth Reynolds wrote back in 1993, aren’t contracts sometimes efficiently incomplete? At times, it’s actually better to agree to figure stuff out later, as the need arises. Their case was “An Empirical Analysis of Air Force Engine Procurement” (RAND Journal of Economics, vol. 24, no. 1), so the argument may still have some relevance to defense contracting.
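The idea can be made concrete. Below is a minimal sketch, in Python, of a contract rendered as a finite-state machine. The state and event names are hypothetical illustrations of mine, not drawn from Flood and Goodenough’s work, and the fallback to renegotiation is meant to gesture at Crocker and Reynolds’s point about efficient incompleteness.

```python
# A toy contract rendered as a finite-state machine, in the spirit of Flood and
# Goodenough's suggestion. The states, events, and transitions are hypothetical
# illustrations, not drawn from their paper.

CONTRACT_FSM = {
    ("signed", "delivery_received"):    "performance_review",
    ("signed", "deadline_missed"):      "breach",
    ("performance_review", "accepted"): "paid",
    ("performance_review", "rejected"): "cure_period",
    ("cure_period", "redelivered"):     "performance_review",
    ("cure_period", "deadline_missed"): "breach",
}

def step(state: str, event: str) -> str:
    """Advance the contract to its next state for a given event."""
    # Where the drafters wrote no rule, the contract is silent: the parties
    # must work it out later, which is Crocker and Reynolds's "efficient
    # incompleteness."
    return CONTRACT_FSM.get((state, event), "renegotiation")

# A late-but-cured delivery, traced event by event.
state = "signed"
for event in ("delivery_received", "rejected", "redelivered", "accepted"):
    state = step(state, event)
    print(f"{event} -> {state}")
# delivery_received -> performance_review
# rejected -> cure_period
# redelivered -> performance_review
# accepted -> paid

# An event the drafters never anticipated falls through to renegotiation.
print(step("paid", "warranty_claim"))  # -> renegotiation
```

Once a contract is in this form, questions like “can we ever reach breach without passing through a cure period?” become mechanical reachability checks, which is roughly the sort of rigor Flood and Goodenough have in mind.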
That points to the human element in these messes. Kluges are everywhere, but they’re not an unalloyed good. Grafting TurboTax atop the disastrous US tax code cut the cost of compliance, but did little to improve the over-complex set of incentives and penalties that even the accountants can’t fully grasp. As my colleague Steve Grundman once wrote, citing Irving Janis and Leon Mann’s Decision Making (Free Press, 1977), efforts to induce intricate resource-allocation planning fail because “people simply do not have the wits to maximize.” Their subtitle, A Psychological Analysis of Conflict, Choice, and Commitment, says much about the problem.
MIT PhD student and USAF officer Christopher Berardi recently described the military’s latest software projects to me as systems “beyond human comprehension.” They’re so big and complex that they can’t be reliably secured, but only patched after a problem emerges. I think that he senses a parallel in organizational problems. In his essay on “The Importance of Failure and the Incomplete Leader,” he argues that
The complete leader must have the intellectual capacity to make sense of unfathomably complex issues, the imaginative powers to craft a vision of the future that generates enthusiasm, the operational know-how to translate strategy into concrete plans, [and] the interpersonal skills to foster commitment to undertakings that could cost people’s jobs should they fail.
This person, he goes on to observe, does not exist. All leaders are incomplete. Stanley McChrystal argued similarly in Team of Teams: New Rules of Engagement for a Complex World (Portfolio, 2015) that big data and quantum computing and everything else will not save us. In a complex—not merely complicated—world, “adaptability, not efficiency, must become our central competency” (p. 20). Humans do adaptability surprisingly well, at least in comparison to most robots I’ve yet met. Artificial intelligences are similarly incomplete, so managing their failings will increasingly require artistry. As the panelists of the Defense Science Board wrote in their 2015 Summer Study, Autonomy, there’s a practical application to how those managing military-technological development programs handle the ambiguity inherent in over-complication:
DoD’s strong separation between developmental testing and operational testing is in conflict with the best known methods for managing the development of such software. Operators will have to change their mindset from expecting weapon systems that “just work” out of the box to systems that require their time and effort into shaping their ever-evolving instantiation, but will ultimately be better customized to their mission, style, and behaviors. [p. 30]
For a new paradigm, they recommended “build, test, change, modify, test, change…” [p. 33]. They might have added rinse, repeat for good measure. For whatever comes after JSF Block 3F may be not merely uber-complex, but hideously complex. In his review of Arbesman’s book in the Wall Street Journal, Amir Alexander termed the problem “the rise of the kluges.” Fear not Skynet, then, because it couldn’t figure itself out. Just fear for schedule and budget.
James Hasík is a senior fellow at the Brent Scowcroft Center on International Security, where this first appeared.