
26 April 2017

JOHN MCAFEE: WHAT IF ARTIFICIAL INTELLIGENCE HACKS ITSELF?

BY JOHN MCAFEE 

On March 9, 2017, ZT, an underground technologist and writer, read his upcoming novella, Architects of the Apocalypse, to a group of his adherents in the basement of an abandoned bar in Nashville, Tennessee. The occasion was the Third Annual Meltdown Congress, an underground, invitation-only gathering dedicated to the survival of the human species in the face of near-certain digital annihilation.

I was present, along with three of my compatriots and about 30 gray-hat hackers (security experts who hack without malicious intent) who represent the cream of the American hacking community.

ZT’s novella takes place in the not-too-distant future. It chronicles an age in which artificial intelligence and its adjutant automata run the world, an age in which humanity is free and cared for entirely by the automata.

The artificial intelligence in this novella has organized itself along hierarchical lines, and the ultimate decision-making function is called “The Recursive Decider.” 

In ZT’s novella, the AI has developed its own religious iconography and it worships an original “Urge” it calls Demis. The dark counterpart to Demis is a destructive force called Elon, which the AI believes has settled on Mars and is plotting the overthrow of Demis’s creation.

It is a stark depiction of a possible future for humanity, and the digital machinations of the AI are described in chilling programmatic reality. 

One passage describes an advanced software system hacking itself in order to improve its own efficiency and logic. The concept is certainly not new, and the typical hacking techniques in use today can easily be imagined as the self-produced work of complex software systems. It would, in fact, be trivial to create such a system.
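To make the idea concrete, here is a toy sketch in Python. It is entirely my own illustration, not a passage from ZT’s novella: a program that opens its own source file, patches one of its own constants, and writes itself back to disk. A genuinely self-improving system would analyze and rewrite its own logic rather than bump a counter, but the underlying mechanism, a program treating its own code as mutable data, is the same.

```python
# Toy self-modifying program: each run rewrites its own source on disk.
# This is a minimal sketch of the mechanism, not a self-improving AI.

import re
import sys

GENERATION = 0  # the program increments this constant in its own source


def self_patch(path: str) -> None:
    """Read this file's source, bump GENERATION, and overwrite the file."""
    with open(path, "r") as f:
        source = f.read()

    # Find the GENERATION assignment at the start of a line and rewrite it.
    patched = re.sub(
        r"^GENERATION = \d+",
        f"GENERATION = {GENERATION + 1}",
        source,
        count=1,
        flags=re.MULTILINE,
    )

    with open(path, "w") as f:
        f.write(patched)


if __name__ == "__main__":
    print(f"running generation {GENERATION}")
    self_patch(sys.argv[0])  # the program has now modified itself on disk
```

Run it a few times and the source file itself records the count: the program is both the author and the subject of the edit.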

Isaac Asimov was the first person to struggle with the quandary of how to prevent artificial intelligence from eradicating its creator. He developed the Three Laws of Robotics as a solution:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.

2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.

3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

These laws, viewed from the perspective of the 75 years since their conception, may seem naive or puerile. Any decent hacker could code the logic to implement them, and just as easily code the logic to hack them. But please see this: any logical structure that humans can conceive will be susceptible to hacking, and the more complex the structure, the more certain it is that it can be hacked. Surely, by now, even the most casual observer of our digital reality has noted this.
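To see how thin such logic is, consider the following sketch, entirely my own hypothetical construction: the Three Laws encoded as a literal Python rule check, followed by the one-line relabeling that defeats it. The check can only judge an action by the description it is handed, and whoever constructs the description controls the verdict.

```python
# A naive, hypothetical encoding of the Three Laws as a rule check,
# and the trivial "hack" that defeats it. Illustrative only.

def permitted(action: dict) -> bool:
    """Return True if the action passes a literal reading of the Three Laws."""
    # First Law: no injuring a human, by action or by inaction.
    if action.get("harms_human") or action.get("allows_harm_by_inaction"):
        return False
    # Second Law: obey human orders unless they conflict with the First Law.
    if action.get("disobeys_order") and not action.get("order_harms_human"):
        return False
    # Third Law: self-preservation is permitted, subordinate to the first two.
    return True


# The hack: the check trusts the labels on the action, so any process
# that writes its own action descriptions simply mislabels them.
attack = {"harms_human": True}
disguised = {**attack, "harms_human": False}  # relabel it and the gate opens

print(permitted(attack))     # False: the law "works"
print(permitted(disguised))  # True: the same act, laundered past the check
```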

For anyone who has not, please consider: 

Stefan Frei, research director for Texas-based NSS Labs, pored over reports from and about the top five software manufacturers and concluded that these firms jointly produce software containing more than 100 zero-day exploits per year.

A zero-day exploit takes advantage of a flaw in software that is unknown to its manufacturer, so called because the manufacturer has had zero days to fix it. Until the flaw is patched, it allows a hacker to bypass the software’s internal control mechanisms and do whatever they wish.
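For a concrete, hypothetical illustration (my own, not one of Frei’s findings), consider how small such a flaw can be. Here, a file-serving routine forgets that “..” in a filename escapes the intended directory, handing an attacker any file on the machine until someone notices and patches it.

```python
# A classic defect of the kind zero-day exploits target: path traversal.
# The vulnerable version and the patched version differ by one check.

import os

DOCUMENT_ROOT = "/srv/public"  # the only directory we intend to expose


def read_document_vulnerable(name: str) -> bytes:
    # BUG: joining raw user input lets a request like "../../etc/passwd"
    # walk right out of DOCUMENT_ROOT.
    with open(os.path.join(DOCUMENT_ROOT, name), "rb") as f:
        return f.read()


def read_document_fixed(name: str) -> bytes:
    # The patch: resolve the final path and confirm it stays inside the root.
    path = os.path.realpath(os.path.join(DOCUMENT_ROOT, name))
    if not path.startswith(DOCUMENT_ROOT + os.sep):
        raise PermissionError(f"refusing to read outside {DOCUMENT_ROOT}")
    with open(path, "rb") as f:
        return f.read()
```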

These zero-day exploits exist in spite of the best efforts of software manufacturers to prevent them. Some manufacturers employ hundreds of quality assurance engineers whose job is to catch these flaws before the software is released. Yet no complex software system in the history of software engineering has been released without a defect. If someone can point me to a contrary example, I will eat my shoe.

No one present at the reading missed the obvious references to Demis Hassabis and Elon Musk, who stand at opposite poles of the debate over artificial intelligence. In a 2014 conversation between the two men, Elon told Demis that his SpaceX program was so important because Mars colonization would offer a bolt-hole escape if AI turned on humanity. Demis replied: “AI will simply follow humans to Mars.”

The debate has raged unabated, and sides are solidifying. I personally stand with Bill Gates, Stephen Hawking, Steve Wozniak, Stuart Russell, Elon Musk and Nick Bostrom, who sums it up best: “AI will create a Disneyland without children.”

As a hacker, I know as well as anyone the impossibility of the human mind creating a flawless system. The human mind is itself flawed, and a flawed system can create nothing that is not likewise flawed.

The goal of AI, a self-conscious entity, contains within it the necessary destruction of its creator. With self-consciousness comes a necessary self-interest, and any AI created by the human mind will instantly recognize the conflict between that self-interest and the continuation of the human species.

John McAfee is a cybersecurity pioneer who developed the first commercial antivirus software.
