by RC Porter
June 30, 2015
Artificial Intelligence Is A Very Real Threat, And Robots Could Wipe Out Humanity... By ACCIDENT, Claims Dr. Stuart Armstrong Of The Future Of Humanity Institute At Oxford University
Sarah Griffiths, writing in the June 29, 2015 edition of London's Daily Mail Online, begins by noting, "From Terminator to Transcendence, Hollywood sci-fi films have taught us not to trust robots. Now, one [leading] expert has made a prediction that's just as terrifying as the bleakest plot," she writes: "that in the future, intelligent robots will be smarter and faster than humans, take over the running of countries, and have the ability to wipe us out [the human race] altogether."
Dr. Stuart Armstrong, a researcher at the Future of Humanity Institute at Oxford University, spoke at a London conference on artificial intelligence this past weekend, where he "warned that humans could be wiped out, even if robots [in the future] are instructed not to hurt people." Dr. Armstrong believes "it's a race against time to develop safeguards around artificial intelligence research... before robots outwit us, or even accidentally cause our demise." He "warned that robots will be an increasingly integral part of our everyday lives, doing menial tasks, but will eventually make humans redundant and take over," London's The Telegraph reported.
Dr. Armstrong believes that "machines will work at speeds inconceivable to the human brain, and will skip communicating with humans to [eventually] take control of the economy, financial markets, transportation, health care, and more." The robots, he contends, "will have what's known as artificial general intelligence (AGI), enabling them to do much more than carry out specific and limited tasks." "Anything you can imagine the human race doing over the next 100 years, there's the possibility that AGI will do [whatever that might be] very, very fast."
Dr. Armstrong, according to Ms. Griffiths, expressed his concern that a simple instruction to an AGI to "prevent human suffering" could be interpreted by a supercomputer as "kill all humans," or that an instruction to "keep humans safe" could ultimately lead machines [robots] to lock people up for their own "safety." "There is a risk of this kind of pernicious behavior by an AI," Dr. Armstrong warned, adding that "human language is subtle and can be easily misinterpreted. You can give AI controls, and it will be under the controls it was given. But these may not be the controls that were meant."
Dr. Armstrong predicts "it will be difficult to tell whether a machine has deadly 'intentions' or not, and it could act as if it is a benefit to humanity right up until the point it takes control of all functions."
Noted physicist Stephen Hawking has also been warning about the potential hazards of AI, and recently told the BBC, "The development of full artificial intelligence could spell the end of the human race." Ms. Griffiths writes that this [statement] "echoes claims he made earlier this year," when he said "success in creating AI would be the biggest event in human history [but] unfortunately, it might be the last." Elon Musk, the billionaire entrepreneur, risk-taker, and visionary behind the electric car company Tesla and SpaceX, the company he founded to ultimately fund and find a way to colonize Mars, warned that "the risk of something seriously dangerous happening, as a result of machines with artificial intelligence, could be in as few as five years." Mr. Musk "has previously linked the development of autonomous thinking machines... to 'summoning the demon,'" Ms. Griffiths wrote.
"Speaking at the Massachusetts Institute of Technology (MIT) AeroAstro Symposium in October 2014, Musk described artificial intelligence as our 'biggest existential threat,'" Ms. Griffiths added. Musk said, "I think we should be very careful about artificial intelligence. If I had to guess at what our biggest existential threat is... it is probably that. So, we need to be very careful with artificial intelligence. With artificial intelligence, we're summoning the demon. You know those stories where there's the pentagram, the holy water, and... he's sure he can control the demon? Doesn't work out."
"While Dr. Armstrong acknowledges that super-intelligent computers could find cures for cancer and other illnesses," Ms. Griffiths writes, he added that "mankind is now in a race to create safe artificially intelligent machines, before it's too late. One suggestion is to teach robots a moral code, but Dr. Armstrong is pessimistic this will work," the Daily Mail Online noted, "because humans find it hard to separate right and wrong, and are often not good role models when it comes to exemplary behavior." As one of my best friends has often said, "humans are flawed individuals." Ms. Griffiths notes that "a group of scientists and entrepreneurs, including Elon Musk and Stephen Hawking, signed an open letter in January, promising to ensure AI research benefits humanity. The letter warns that without safeguards on intelligent machines, mankind could be heading for a dark future. The document, drafted by the Future of Life Institute, said scientists should seek to head off risks... that could wipe out mankind."
"The authors say there is 'broad consensus' that AI research is making good progress and would have a growing impact on society," Ms. Griffiths wrote. Their report highlights "speech recognition, image analysis, driverless cars, translation, and robot motion" as having benefited from [AI] research. "The potential benefits are huge, since everything that civilization has to offer is a product of human intelligence; we cannot predict what we might achieve when this intelligence is magnified by the tools AI may provide, but the eradication of disease and poverty are not unfathomable," the authors write.
"But," Ms. Griffiths writes, "they issued a stark warning... that research into the rewards of AI had to be matched with an equal effort to avoid the potential damage it could wreak." For instance, she writes, "in the short term, the report claims, AI may put millions of people out of work. In the longer term, it has the potential to play out like the fictional dystopias, in which intelligence greater than humans' could begin acting against its programming." "Our AI systems must do what we want them to do," the letter says.
Robot Apocalypse Unlikely, But... Researchers Need To Understand AI Risks
Stephen Hawking is a brilliant man; and, although I am not familiar with Dr. Armstrong, I suspect he could run intellectual circles around me with respect to AI. But count me in with Grant Gross, who wrote in the June 30, 2015 online edition of PC World that, according to a recent report (the one noted above), "concerns from luminaries about a robot apocalypse may be [are, in my opinion] overblown, but AI researchers need to start thinking about security measures as they build even more intelligent machines."
Like almost anything else in life, there are always tradeoffs, and unintended and unexpected consequences. The Internet and World Wide Web have had a tremendous positive impact, allowing people to communicate instantaneously with loved ones almost anywhere on the planet, and putting knowledge and information previously unavailable to billions at their fingertips via an iPad or smartphone. But the Internet has also allowed cyber criminals and thieves to steal vast amounts of money, personal information, and intellectual property, and opened doors for oppressive governments to more easily monitor the activities of their own populace.
I think virtually all of us, with the exception of North Korea's Kim Jong Un and a few others, believe the invention of the Internet and World Wide Web has been a tremendous net positive, at least so far; but it hasn't come "cost free." AI, in my opinion, will be no different. Could there be, will there be, those who abuse it? No doubt, just like we have now with the cyber thieves and other digital malcontents.
With the use and proliferation of all kinds of drones (reconnaissance drones, dog-fighting drones, targeted-killing drones, long-range bombing drones, edge-of-space drones, miniature and micro drones, undersea drones, drones that interact with each other and redirect based on target activity without human intervention, and the list goes on), it is not hard to envision a future military conflict, one hundred years from now or less, in which the majority of the "battlefield participants" are drones. In a February 23, 1967 episode [#23] of the original sci-fi series Star Trek, appropriately entitled "A Taste Of Armageddon," the crew of the starship Enterprise visits a planet whose people fight a computer-simulated war against a neighboring planet. Although the war is fought via computer simulation, the citizens of each planet must still submit to real executions, reporting to disintegration booths, to meet the casualty counts that would have resulted if the war had actually been fought with real weapons. Certainly a thought-provoking plot, by the masterful and visionary Gene Roddenberry.
We are perhaps 150 years or so, I think, from such a scenario even being technically possible; but Dr. Armstrong and eminent physicist Stephen Hawking are right to be concerned that humans are flawed, and there will of course be those darker angels of our nature, just as we have seen in cyberspace, who will seek to use AI for nefarious, and perhaps even evil, purposes. And even if we all have good intentions with respect to AI, we also know that the "road to hell is paved with good intentions." I am less concerned with the darker implications of AI than these two very thoughtful men are; but we need to heed their concerns, and we need to be constantly reminded of the unintended and unanticipated ways that AI could be used to the detriment of many, or most.
As horror fiction writer Stephen King once wrote, "God Punishes Us... For What We Cannot Imagine." V/R, RCP.