29 October 2018

Bill Gates Says We Shouldn’t Panic About Artificial Intelligence

Dom Galeon

Artificial intelligence (AI) is one of today’s hottest topics. In fact, it’s so hot that many of the tech industry’s heavyweights — Apple, Google, Amazon, Microsoft, etc. — have been investing huge sums of money to improve their machine-learning technologies. Alongside all this AI development, a debate rages on, and in one corner is SpaceX CEO and OpenAI co-chairman Elon Musk, who has repeatedly warned that AI is a potential threat to humankind’s existence. Speaking to a group of U.S. governors a couple of months back, Musk again warned about the dangers of unregulated AI. Those on the other side of the debate criticized this as “fear-mongering,” and Facebook founder and CEO Mark Zuckerberg explicitly called Musk out for it.

Now, Microsoft co-founder and billionaire philanthropist Bill Gates is sharing his opinion on Musk’s assertions.


In a rare joint interview with Microsoft’s current CEO Satya Nadella, Gates told WSJ. Magazine that the subject of AI is “a case where Elon and I disagree.” According to Gates, “The so-called control problem that Elon is worried about isn’t something that people should feel is imminent. We shouldn’t panic about it.”
Fear Of AI?

While the perks of AI are rather obvious — optimized processes, autonomous vehicles, and generally smarter machines — Musk is simply pointing out the other side of the coin. With some nations intent on developing autonomous weapons systems, irresponsible AI development has an undeniable potential for destruction. Musk’s strong language may make him sound like he’s overreacting, but is he?

As he’s always been sure to point out, Musk isn’t against AI. All he’s advocating is informed policy-making to ensure that these potential dangers don’t get in the way of the benefits AI can deliver.

In that, Musk isn’t alone. Not all experts think his warnings are far-fetched, and several have joined Musk in signing an open letter to the United Nations about the need for clear policies to govern AI. Even before that, other groups of AI experts had called for the same.

Judging by what Nadella told WSJ. Magazine, much of this conflict may actually be mostly imagined. “The core AI principle that guides us at this stage is: How do we bet on humans and enhance their capability? There are still a lot of design decisions that get made, even in a self-learning system, that humans can be accountable for,” he said.

“There’s a lot I think we can do to shape our own future instead of thinking, ‘This is just going to happen to us’,” Nadella added. “Control is a choice. We should try to keep that control.”

In the end, it’s not so much AI itself that we should watch out for. It’s how human beings use it. The enemy here is not technology. It’s recklessness.
