
9 January 2015

The Military’s New Year’s Resolution for Artificial Intelligence

The memo calls for a new study that would “identify the science, engineering, and policy problems that must be solved to permit greater operational use of autonomy across all war-fighting domains… Emphasis will be given to exploration of the bounds – both technological and social – that limit the use of autonomy across a wide range of military operations. The study will ask questions such as: What activities cannot today be performed autonomously? When is human intervention required? What limits the use of autonomy? How might we overcome those limits and expand the use of autonomy in the near term as well as over the next 2 decades?”

A Defense Department official very close to the effort framed the request more simply. “We want a real roadmap for autonomy,” he told Defense One. What does that mean, and how would a “real roadmap” influence decision-making in the years ahead? One outcome of the Defense Science Board 2015 Summer Study on Autonomy, assuming the results are eventually made public, is that the report’s findings could refute or confirm some of our worst fears about the future of artificial intelligence.

2014: The Year the Smart People Freaked Out About AI 

In the event that robots one day attempt to destroy humanity, 2014 will be remembered as the year that two of technology’s great geek heroes, Elon Musk and Stephen Hawking, predicted it would happen. And if that never comes to pass, 2014 will go down as the year two of the world’s smartest people had a media panic attack about robots for no reason. 

In August, Musk tweeted that artificial intelligence could be more dangerous than nuclear weapons and, in October, likened it to “summoning a demon.” Hawking, meanwhile, told the BBC in December that humanistic artificial intelligence could “spell the end of the human race.” The context for the claim was a discussion of the AI aid that helps Hawking speak despite the theoretical physicist’s crippling ALS.
The statements surprised many, as they seemed to rise from thin air. After all, 2014 was not a year in which artificial intelligence killed anyone or even really made headlines. A few thousand more people encountered Siri, Apple’s AI administrative assistant for the iPhone, and, despite improvements, found the experience frustrating and disappointing. (It’s no wonder that fewer than 15 percent of iPhone owners have ever even used Siri.) IBM searched for new applications for Watson beyond winning quiz shows. Computers continued to beat humans at chess and continued to not understand chess in any remotely human way: not why we play, not why we sometimes quit, not the significance of chess in Ingmar Bergman’s masterpiece The Seventh Seal, nada. When a computer finally passed the Turing Test, a commonly cited measure for strong artificial intelligence, the response from many in the technology community, after some gleeful reposting, was to reject the Turing Test as a useful metric for measuring humanistic AI.

The route to a humanistic artificial brain is as murky as ever. Inventor and Google director of engineering Ray Kurzweil has suggested that it will be possible only after humanity creates a map of the human brain accurate to the sub-cellular level, a prize that seems far off.

Elon Musk’s freakout was prompted not by any technological breakthrough but by philosopher Nick Bostrom’s book Superintelligence (Oxford, 2014).

It’s a remarkable read for many reasons, but principally because it offers a deep exploration of a threat for which there is no precedent or any real-world example in the present day. It is a text of brilliant speculation rather than observation. Here’s how Bostrom describes, in chapter six, the rise of a malevolent super-intelligence evolving from a limited AI program, somewhat like Siri, but one capable of recursively improving itself.

Now when the AI improves itself, it improves the thing that does the improving. An intelligence explosion results— a rapid cascade of recursive self-improvement cycles causing the AI’s capability to soar. (We can thus think of this phase as the takeoff that occurs just after the AI reaches the crossover point, assuming the intelligence gain during this part of the takeoff is explosive and driven by the application of the AI’s own optimization power.) The AI develops the intelligence amplification superpower. This superpower enables the AI to develop all the other superpowers detailed in Table 8. At the end of the recursive self-improvement phase, the system is strongly super-intelligent. 

The book carries on like this. It reads almost like an Icelandic saga, but rather than filling in the gaps of history with imaginative tales of heroic exploits, it offers a myth of the future, one told not in verse but in the language of an instruction manual. It presents a logical argument for the inevitability of super-intelligence but no proof, nor any clear evidence that it has happened or will happen, because, of course, none exists.
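To make the dynamic Bostrom is describing a little more concrete, here is a toy numerical sketch in Python. It is not taken from the book and proves nothing about real AI: the capability score, the crossover threshold and the growth rates are all arbitrary assumptions, chosen only to show how improvement that feeds back on itself can look gradual right up until it doesn’t.

# Toy sketch (not from Bostrom's book): a crude illustration of the
# "intelligence explosion" idea, in which the rate of improvement depends on
# the system's current capability. Every number here is an arbitrary assumption.

def simulate_takeoff(capability=1.0, crossover=10.0, human_rate=0.5, steps=30):
    """Advance a toy 'capability' score through time.

    Below the hypothetical crossover point, gains arrive from outside at a
    fixed, human-driven rate. Above it, gains are proportional to the
    system's own capability, so improvement compounds on itself.
    """
    history = [capability]
    for _ in range(steps):
        if capability < crossover:
            capability += human_rate      # slow, externally driven progress
        else:
            capability *= 1.5             # self-improvement: each gain speeds the next
        history.append(capability)
    return history

if __name__ == "__main__":
    for step, level in enumerate(simulate_takeoff()):
        print(f"step {step:2d}: capability {level:,.1f}")

Run as written, the score crawls upward for the first eighteen steps and then grows more than a hundredfold over the final twelve, which is roughly the shape of the “takeoff” the passage above describes.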

In response to this year’s AI panic, Rodney Brooks, the roboticist behind not only the popular Roomba robot vacuum cleaner but also the PackBot bomb-disposal robot, was quick to rebut the notion of malevolent AI as any sort of serious threat.

Brooks is an experimental roboticist, sometimes called a “scruffy”: someone willing to take every manner of device, sensor, and computer program and apply it to the goal of achieving a slightly better result. In talks, he will frequently point out that the first Stanford self-driving vehicle, a cart, took six hours to traverse a mere 20 meters. It was through a great deal of slow, painful and incremental research that, in 2005, researchers from Stanford were able to reveal a car that could travel 132 miles in about the same amount of time. Brooks is well aware that artificial intelligence lends itself to terrifying caricature. For Brooks, decades of difficult experimentation inform an outlook that is very different from that of Musk, Bostrom or even Hawking.

“I think it is a mistake to be worrying about us developing malevolent AI anytime in the next few hundred years,” he recently wrote on the blog of his newest company, Rethink Robotics. “I think the worry stems from a fundamental error in not distinguishing the difference between the very real recent advances in a particular aspect of AI, and the enormity and complexity of building sentient volitional intelligence.” 

Let’s take Brooks’s position that a “sentient volitional intelligence” – which in plain English means something like human thinking and will – is impossible in the near term. Does artificial intelligence still pose any sort of actual threat to humanity? Bioethicist Wendell Wallach says yes.

In his book Moral Machines, Wallach, with co-author Colin Allen, argues convincingly that a robotic intelligence need not be “super” or even particularly smart in order to be extremely dangerous. It needs only the authority – autonomy, if you will – to make extremely important, life-or-death decisions.

“Within the next few years, we predict there will be a catastrophic incident brought about by a computer system making a decision independent of human oversight,” the authors write. “Already, in October 2007, a semiautonomous robotic cannon deployed by the South African army malfunctioned, killing 9 soldiers and wounding others… although early reports conflicted about whether it was a software or hardware malfunction. The potential for an even bigger disaster will increase as such machines become more fully autonomous.” 

Wallach, unlike Bostrom, does not look toward a future where humanity is locked in conflict with Skynet. In his telling, machines, software and robotic systems cause loss of life not because they have developed a will of their own but because they lack one, because they are incredibly stupid or poorly designed, or both. But it is a future where humanity has outsourced more and more key decisions to machines that are not nearly as intelligent as people.

The distinction between malevolent AI and AI that is merely dumb and dangerous is important, because while there is no clear evidence that super-intelligence is even possible, humans are leaving ever more important decisions in the hands of software. The robotic takeover of the human decision space is incremental and inevitable, and it proceeds not at the insistence of the robots but at ours.

Nowhere is this more obvious than in the United States military, the institution that effectively created the first random-access-memory electronic computer and everything that has followed from it, including modern robotics. Faced with rising staffing costs, a public increasingly averse to casualties and a growing number of commitments and crises to contend with, the military is pushing research into artificial intelligence – into autonomy – that touches everything from flying jets to administering healthcare.

Consider the automatic piloting features of the current version of the F-35, the military’s Joint Strike Fighter and the most expensive aircraft in history, in part because it’s loaded with a lot of sophisticated software designed to take over more and more of the human pilot’s responsibilities. In November, the Navy ran it through a battery of tests. While the Pentagon hasn’t released data on those tests yet, pilots who took part in an exercise to land an F-35 on the deck of an aircraft carrier reviewed the experience positively. “It makes landing on the boat a routine task,” Cmdr. Tony “Brick” Wilson told U-T San Diego writer Jeanette Steele.

Earlier this year, the Defense Advanced Research Projects Agency, or DARPA, put out a proposal for a system called the Aircrew Labor In-Cockpit Automation System to effectively automate most of the piloting of an aircraft and “reduce pilot workload,” according to the agency. Even those planes that are piloted are becoming less so.

Then, of course, there are unmanned systems, which usually require a team of at least two people. But that’s rapidly changing. The high-tech, largely classified RQ-180, developed by Northrop Grumman, will show off new, more autonomous features, in addition to stealth capabilities unprecedented in a UAV, when it becomes operational. It’s currently in testing.

“The next generation of UAVs will need to be much more capable—faster, with greater autonomy in case communication links are disrupted, and stealthier so they are more difficult for an adversary to detect,” defense analyst Phil Finnegan of the Teal Group told Popular Science writer Eric Adams.

Perhaps the most important factor contributing to far more autonomous military machines is cyber-vulnerability. Any machine that must remain in constant communication with an operator—even when that communication is encrypted—is more hackable than a system that doesn’t require constant contact to perform basic functions. A number of high-profile cyber-breaches made that very obvious in 2014. During the Black Hat conference in Las Vegas, the firm IOActive demonstrated, live, that backdoors had compromised a number of key pieces of common military communications equipment. Even encrypted data can still give away information about the sender or receiver that could be important or exploitable.

“If you can’t ensure stable connectivity it makes the push for more advanced robotics more difficult to imagine unless you take letting the robots think for themselves more seriously…because the solution to some of those issues could be autonomy,” Michael Horowitz, an associate professor of political science at the University of Pennsylvania, remarked at the Defense One Summit in November. 

It’s no coincidence that 2014 saw several key Defense Department announcements regarding the electromagnetic spectrum, in particular the release of a much-anticipated spectrum strategy framework. What’s becoming increasingly clear is that waning U.S. dominance in the space where electronic communication happens makes communication-dependent systems more vulnerable.

“Communication with drones can be jammed… that creates a push for more autonomy in the weapon,” futurist and technologist Ramez Naam said during the Defense One Summit. “We will see a vast increase in how many of our weapons will be automated in some way.”

Greater autonomy does not necessarily mean the ability to shoot at (robotic) will with no human issuing the actual command. When you talk to drone and robotics experts inside the Pentagon about the prospect of killer robots, they’ll often roll their eyes and insist that the Defense Department has no plans to automate the delivery of what is often, euphemistically, referred to as “lethal effects.” 

It’s an attitude enshrined in current policy, a 2012 Defense Department directive that, as Defense One has observed previously, expressly prohibits the creation or use of unmanned systems to “select and engage individual targets or specific target groups that have not been previously selected by an authorized human operator.” The directive was signed by Ashton Carter, the current nominee for secretary of defense, who was serving as deputy secretary of defense at the time.

But a directive is very different from a law. There’s no real reason why a future defense secretary – or the same man, in Carter’s likely case – couldn’t issue a counter-directive in the face of a new set of circumstances. And within the wording of the current directive, there’s a lot of room. Left open is the question of how or when to “engage” a target or “group.”

Will we reach a point where it makes more sense to endow unmanned systems with the authority to select their own targets? What about the ability to simply suggest a target to a human operator who is under-rested, overburdened and possibly overseeing several drones at once?

Of course, the United States military isn’t the only player building autonomous robotic systems, whether weapons or consumer devices.

“Even though today’s unmanned systems are ‘dumb’ in comparison to a human counterpart, strides are being made quickly to incorporate more automation at a faster pace than we’ve seen before,” Paul Bello, director of the cognitive science program at the Office of Naval Research, told Defense One in May. “For example, Google’s self-driving cars are legal and in use in several states at this point. As researchers, we are playing catch-up trying to figure out the ethical and legal implications. We do not want to be caught similarly flat-footed in any kind of military domain where lives are at stake.”

In conversation with Defense One, the Pentagon official reiterated that point: regardless of what the military does or does not build, the national security community has a big interest in understanding the possibilities and limitations of AI, especially as those will be tested by other nations, by corporations and by hobbyists. “You are absolutely right, it’s a concern,” he said.

What level of vigilance should the rest of us adopt toward ever-smarter robots? It’s a question we’ll be asking well beyond 2015, but in the coming year, within the guarded walls of the Pentagon, a real answer will begin to take form.
