AUGUST 9, 2015
Next year’s Cyber Grand Challenge event will pit humans against machines in a grand hacking war. DEF CON’s war gamers like their chances.
Patrick Tucker is technology editor for Defense One. He’s also the author of The Naked Future: What Happens in a World That Anticipates Your Every Move? (Current, 2014). Previously, Tucker was deputy editor for The Futurist for nine years.
LAS VEGAS, Nev.— Every year, thousands of information-security specialists, computer scientists, and a few mohawked geeks who proudly wear the moniker of hacker gather here for a very particular digital war game: the DEF CON capture-the-flag, or CTF, competition. To win, you have to find weaknesses in other teams’ defenses, steal their data flags, and protect your own.
But next year, it won’t just be humans squaring off. In addition to the regular DEF CON CTF event, the 2016 meeting will pit seven teams’ robotic hackers against each other in an AI capture-the-flag contest. Then humans will take on the robots.
The robot-vs.-robot battle is part of the Defense Advanced Research Projects Agency’s Cyber Grand Challenge series of competitions. (DARPA is not involved with the robots-vs.-humans competition, although some teams may participate in both, agency spokesman Jared Adams said.)
The arrival of an AI system that can outflank humans in breaching security and protecting data in a dynamic game environment would be a force multiplier for defensive cyber security and even offensive cyber warfare. But will war in a machine environment necessarily favor the machines? Not according to many of the hackers at this year’s DEF CON. Everyone who talked to Defense One about next year’s competition was confident that it would be years before a robot team could beat human hackers at their own game.
Cyber Grand Challenge program manager Michael Walker laid out why capture-the-flag is a better test for artificial intelligence than many other game scenarios, like chess or checkers. “You have to do binary reverse engineering the entire time,” he said, referring to the practice of dissecting and reconstructing program files. “The only way to figure out how the software works is to reverse…and do it as fast as you can while your opponents are trying to do the same to you,” Walker said. “To even explore the state space, I have to be able to synthesize logic.”
Robot hackers also have to be able to exhibit some very humanistic behaviors — skepticism, creativity, and even the ability to bluff — gray areas that get machines into trouble in games that aren’t perfectly straightforward. It’s one reason why computers that can dominate at chess get into trouble when the game requires what might be called instinct, like poker. “If machines can’t win at Go, can’t win at poker, do they have a chance at all? That’s exactly what we’re talking about,” Walker said. (For more on the Cyber Grand Challenge, check out his Reddit AMA session from last year, or his appearance on “60 Minutes”.)
But if one of the seven robot teams wins, will it signal the end of the era of human hacking in the same way that self-driving cars foretell the end of human driving? Well, not quite. The Cyber Grand Challenge won’t be the free-for-all that is the regular CTF. It will take place within DARPA’s DECREE operating system, released as open source last year. DECREE has seven system call types, or syscalls: the channels a program uses to talk to the operating system’s input/output manager. In the context of information security, syscalls are tools you can use for attack. Because the DARPA CTF will be limited to seven syscalls, it will be a rather tamer version of the regular DEF CON CTF, in which teams working in an x86 environment might use 200 syscalls.
All of this means the contest will be more of a boxing match and less of a street brawl.
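To make the syscall constraint concrete, here is a minimal sketch, written in ordinary Linux C rather than DECREE code, of what a syscall looks like from a program’s point of view: a request for the kernel to do I/O on the program’s behalf. The names and numbers below are standard Linux, used purely as an illustration; DECREE’s own seven calls are documented in DARPA’s open-source release.

    /* Illustration only: a standard Linux/x86-64 program invoking the
     * "write" syscall directly. Every such entry point into the kernel
     * is potential attack surface; DECREE exposes seven of them, a
     * stock x86 Linux system a few hundred. */
    #define _GNU_SOURCE
    #include <unistd.h>
    #include <sys/syscall.h>

    int main(void) {
        const char msg[] = "hello from a raw syscall\n";
        /* SYS_write asks the kernel's I/O manager to copy these bytes
         * to file descriptor 1 (standard output). */
        syscall(SYS_write, 1, msg, sizeof(msg) - 1);
        return 0;
    }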
So do the hackers think a robot is going to beat them? “Absolutely not,” said one, who declined to be named but is a self-described hacker who was providing technical support to the DEF CON CTF this year. “There are classes of challenges that will always be outside of the capabilities of machines,” he said. “CGC is primarily focused on memory corruption vulnerabilities. That doesn’t include classes of bugs that are logic errors which are ridiculously difficult to detect autonomously. Like, how do you tell if something is intentional behavior, a back door, or a programming mistake?”
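The distinction he is drawing is easier to see side by side. The contrived C sketch below is illustrative only, not CGC code: the first function contains the kind of memory-corruption bug automated tools are reasonably good at finding, because it crashes; the second never crashes, and only a human reading the code against its intended behavior can say whether the hard-coded bypass is a feature, a mistake, or a back door.

    #include <stdio.h>
    #include <string.h>

    /* Memory corruption: a fixed-size buffer and an unchecked copy.
     * Feeding it a long enough input produces a crash that automated
     * fuzzing and analysis can detect. */
    void greet(const char *name) {
        char buf[16];
        strcpy(buf, name);          /* overflows if name exceeds 15 bytes */
        printf("hello, %s\n", buf);
    }

    /* Logic error or back door? The check "looks" intentional and never
     * crashes, so nothing flags it automatically; a hypothetical bypass
     * like this has to be judged against what the program was supposed
     * to do. */
    int check_password(const char *user, const char *pass) {
        if (strcmp(user, "maint") == 0)   /* hard-coded bypass */
            return 1;
        return strcmp(pass, "expected-secret") == 0;
    }

    int main(void) {
        greet("world");
        printf("%d\n", check_password("maint", "anything"));
        return 0;
    }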
Ryan Grandgenett, an information assurance researcher at the University of Nebraska, agreed that humans would probably beat out machines for the foreseeable future. “I know that Google has made some pretty big advancements in chatbots that look like humans, but I don’t know about something this complex,” he said.
Added Cmdr. Michael Bilzor, an instructor at the United States Naval Academy: “Finding exploits is so much an art form right now, particularly because of the large space of operating systems.”
Not everyone was quite so pessimistic about the machine teams’ chances. One observer, who asked to be identified only as someone who had worked in a security operations center for a large university, said that he was impressed by the DARPA talk and estimated that a machine would beat a human seven to ten years from now. “If capture-the-flag is a number of flags in a time limit, a computer is going to have an advantage,” he said.
And Bilzor said the terms of the fight mean that it’s no real contest at all. After all, in an actual battle setting, no hacker would limit the types of strikes or holds (syscalls) that they could use. “The only way to get the automated systems to play is to constrain the problem, which they’ve done,” he said. “If you’re talking about full-spectrum vulnerability identification and exploit generation on any architecture, using any operation base and any syscall set? You’re probably talking at least a decade, in my opinion.”
All trash talk aside, the DEF CON attendees were broadly appreciative of the DARPA effort and all the new open-source tools, like DECREE, that the agency has released for it. Overall, it’s already been a PR win for the agency, unlike the recent DARPA Robotics Challenge, which produced, primarily, laugh reels of robots falling down.
The hackers just don’t think you can automate exploit finding in a way that will threaten their livelihoods any time soon. Hear that, robots? The gauntlet has been thrown.