It is not unusual for disruptive technologies to be both embraced and feared, and not necessarily in that order. That has been, and will continue to be, true of any technology that brings both benefit and risk; it is a duality in which many technologies must exist. Examples throughout history include the airplane, the automobile, and unmanned weapons systems, and now even software, especially the software that powers artificial intelligence (AI).
Last week at a U.S. governors’ conference, Elon Musk, CEO of the engineering companies SpaceX and Tesla, reportedly sounded the alarm, telling the assembled politicians that “AI is a fundamental existential risk for human civilization.” This is not the first time Musk has expressed this concern; he has done so since at least 2014. Many have branded him a Cassandra, and if he is one, he is not a lone-wolf Cassandra; he is joined in those views by the likes of Stephen Hawking, Bill Gates, and other experts. It is not surprising that a comparable number of experts question Musk’s concern and believe his alarm bell is tolling for a nonexistent threat.
I am not an expert in AI, nor am I an AI practitioner; I’m more of a national security and intelligence philosopher. Therefore, I am not sure I want to debate a man who is one of the foremost entrepreneurs and risk-takers of his time. That said, it may prove useful to unpack the issues a bit and to consider them in the context of AI’s use in defense of the nation.
Musk’s argument is necessarily vague, because neither he, nor Gates, nor Hawking can predict the future. Thus, their argument is an emotional one at its core; Musk, for example, is on record likening AI developers to devil summoners. One could argue that the power of their arguments derives from their notoriety rather than from the technical merits of their cases. What Musk says plays upon conscious and unconscious biases and phobias.
He outlines a dystopian future in which machines have replaced traditional laborers in the workplace and, in the most extreme outcome, have become self-aware, thinking at or beyond the level of human cognition and possibly regarding human beings as obstacles to machine hopes and dreams; think Skynet in the Terminator movies. He is also concerned that AI development could become the purview of secretive government or private-sector projects or enterprises. He adds that this could trigger an uncontrolled AI “arms race,” which would not be in humankind’s best interests.
The biggest shortcoming in Musk’s argument is that the future from which the threat would emanate is not just a long way off, but so far off that we may never reach it. Neither Musk nor anyone else knows whether a future of machine-enabled humans, or of machines in a dominant role, will be proportionally more of a threat than a benefit. Those who share Musk’s concern base their apprehensions on the supposition that AI development will advance to a Kurzweilian point, where the machines are more human than humans and have cognitive abilities far beyond normal human perception. This, the argument goes, is where machines take over the world, possessed of a self-generated, malevolent consciousness and intent.
It is probably useful at this point to note that we barely understand how our own consciousness is formed and structured. If we are beginning AI software development from this starting point of significant ignorance about the human brain, there is a high probability that the AI we develop will never achieve near-human parity, much less self-awareness.
There is little doubt that AI will enable machine-augmented human performance, refine the selection of options from sensor data to a high degree, and let us plow through mountains of data to find that needle in a stack of needles. However, even a cursory review of the state of the art shows we are a very, very long way from AI that approaches human consciousness.
Where the human brain still triumphs is in drawing conclusions and making judgments in extraordinarily ambiguous circumstances and amid data uncertainty. It is interesting to speculate whether we would have the automobile at all had there been a Muskian bell-ringer in Henry Ford’s day predicting that the invention would one day kill roughly 1.2 million people per year. In all fairness to Musk’s interest in government intervention, the city of Detroit did have to step in and impose protocols on drivers; thus the stripes on roads, stop signs, and stoplights came into being for the horseless carriage.
It is interesting that Musk and others have chosen to be dramatically concerned over an anticipated risk, a postulate extrapolated not from fact but from instinctual fear. That said, Musk may be right in arguing that even at this early stage we should begin a serious debate on the potential gains versus the risks of AI, and make it an enduring debate as we achieve more and more progress.
He takes his suggestion further and recommends that governments intervene and impose an oversight regime to “control” the pace and content of AI’s progress, in theory maximizing the gains and reducing the risks. One could make a similar argument about genetic engineering, a field in which we may not have fully explored the ramifications and ethics of an extraordinarily beneficial technology, one carrying equally significant, if not greater, risks to mankind.
It is striking that Musk, the inveterate and highly successful entrepreneur, would overcome an entrepreneur’s natural reluctance and suggest putting government oversight on technical innovation. Facebook founder Mark Zuckerberg, who has frequently chided Musk and remains reluctant to solicit government intervention, argues that “we didn’t rush to put rules into place about how airplanes should work before we figured out how they would fly.”
I’d like to address the other element of Musk’s (and others’) apprehensions: the risks of an AI arms race. From my perspective, there is no uncertainty here. We are in an arms race, even if it goes unacknowledged. Many have conceded the unfortunate state of affairs in which the United States no longer enjoys significant technical superiority in national defense.
Both the Russians and, especially, the Chinese have stolen a significant portion of our national defense intellectual property. Like ours, their weapons will become more capable and will require more sophisticated software to remain effective in war. Ultimately, this software will approach what is conventionally considered AI. Because of their perfidy, both Russia and China possess a close approximation of our state of the art. Their weapons systems are becoming more sophisticated and more lethal. Whether they acquired this advanced capability legitimately, by dint of hard work, investment, and innovation, or instead through espionage, becomes a distinction without a difference. The idea that nations would subordinate themselves to an international AI arms-control regime is predicated on the notion that they are well intended. From my perspective, many of our adversaries are not.
Arms control worked well for nuclear weapons and their delivery systems principally because their development and deployment were relatively easy to detect and monitor. These systems had visual and non-visual signatures that made verification reasonably effective. How one adapts this kind of arms-control structure to monitoring and controlling the efforts of a small team of AI developers, writing code that produces no signature, remains to be seen. We have no choice but to let our government laboratories and defense industries operate without interference while we begin the debate about gains and risks we cannot yet even imagine.