
6 January 2019

The Dangers of Artificial Intelligence in 2019

Michael K. Spencer

If we learned anything in 2018, it's that algorithms and social platforms are under-regulated and easily weaponized, and that cybersecurity is an accelerating threat.

We learned that Huawei, a state-sponsored technology company in China, is considered a national security risk by the United States, Australia, New Zealand, Japan, Germany and possibly Canada and the United Kingdom.

We also learned that Google and Microsoft sell their services to the U.S. government, often against the wishes of their own employees and their best ethical judgement. The trend of putting profits over ethics and integrity is very dangerous for humanity.

Instead of buying into the hype of AI, we need to be more skeptical about where AI could be taking us as a civilization. We are living in an era where the existential threats to our survival will increase exponentially with our technological experiments.


All this is to say, the dangers of artificial intelligence (even in its infancy) appear very real. You don't need to listen to Elon Musk or Stephen Hawking to see how unregulated AI could lead to disasters and crimes against humanity.
Imagine AI becoming an immortal dictator; if it modeled itself on the Chinese government, it's a believable scenario. Surely Musk isn't the only one this has occurred to.

So, following the interests of my friend Travis Kellerman, I decided to take a quick look at some of the dangers as I see them.

1. Google, Microsoft and Facebook go Unregulated

If Google, Microsoft and Facebook continue down the path of AI development without both internal and external regulation, auditing and close monitoring, they could develop and sell technologies that put global citizens at risk. With Amazon selling facial recognition tech to police in 2018, we've already gone down a dangerous road. With rumors of Google trying to develop a censored search engine for China, the lines of ethics in Silicon Valley are blurring.
2. China wins the Race to AI with its own Authoritarian Sense of Ethics in AI

It's more than likely that Huawei, Alibaba, Baidu, Tencent, ByteDance and others will win the race to technological implementations of artificial intelligence. This is because China:
has a greater pool of consumer data for training machine learning
has superior facial recognition startups
has the beginnings of a social credit system that rates citizens
practices mass surveillance, starting in particular in the field of education in 2019

This means that if China catches up in AI, in artificial intelligence talent and in bolder implementation of these technologies, it becomes the dominant future superpower in the field, and its values (China deviates from human rights norms) would win out as the global standard for AI regulation, whatever that may be.

Megvii (Face++) and SenseTime are just a couple of the best facial recognition companies, and their technology is scaling globally.
3. If Facial Recognition scaled to a Global Surveillance State

You don't have to have read 1984 to understand what a Big Brother world state might look like. The next global superpower with a chip on its shoulder would likely want to implement such a thing. Given the trade war, the many misunderstandings of a diplomatically challenged China, and the fact that it will be the economic and technological superpower of the 21st century, it's highly likely that New China will build one. China possesses the top facial recognition startups in the world and the venture capital infrastructure to scale companies that can implement the most massive data-harvesting surveillance state on the planet.

China's CCTV network and recognition program is well documented. When data harvesting is the path to supremacy, it seems China won't let a little thing like "ethics" or individual rights get in the way of its plans. The problem with this, of course, is that the richest Chinese plan an exodus out of China. So how does China attract AI talent globally in such a climate?
4. Deep Learning is Given Access to our Healthcare Data

There's every indication that both Amazon and Google are using machine learning to do predictive analytics on our electronic medical records (EMR). This poses a grave danger: AI scales with our health data without our permission. Google's Medical Brain ambitions might become dangerous in an era when it can power progress without proper regulation, due caution and stringent oversight by neutral bodies.

That Amazon is going after healthcare, and that Google is beefing up an early-stage research project called Medical Digital Assist as it explores ways to use artificial intelligence to improve visits to the doctor's office, seems innocent enough, but it could go awry very quickly. Google in particular, among the leaders in machine learning, has shown ethical lapses at the highest levels of its executive leadership and strategy.

Using machine learning for killing machines was a notable example in 2018. The push of tech companies into healthcare means they are getting their hands on our "life-and-death" data. This movement has already begun, and it could mean our scariest vulnerabilities could be hacked. What could possibly go wrong?
5. Autonomous Weapons

You don't need to be Bill Gates or to have seen the Terminator series to realize that giving AI oversight of weapons could be a bad idea. When Google helped the military use machine learning to improve killing drones, one had to wonder whether Silicon Valley takes these dangers seriously at all.
Dictators see AI as an Opportunity

Russia’s president Vladimir Putin said: “Artificial intelligence is the future, not only for Russia, but for all humankind. It comes with enormous opportunities, but also threats that are difficult to predict. Whoever becomes the leader in this sphere will become the ruler of the world.”

What if the dictators of the future were not men on power trips, but actual AI with the capability to hack any system? The world needs to prepare for this possibility: that machine learning could evolve more independently and aggressively in the future. How would you even turn such a system "off"?

Killer drones are an early example, but autonomous weapons are a pretty broad spectrum in a world where we are manipulated and exploited for profit on all sides. The military organizations and national security agencies of this world are not likely to use AI in a benign manner.

Nationalism mixed with AI is, like kids taking drugs in the opioid epidemic, a lethal combination.
6. AI-Human Interactions Hijack Human Intimacy

We've already seen what mobile addiction does to face-to-face interaction among young people (who are having less sex than any generation before them). What happens to human intimacy in a world of ubiquitous "AI companions", sex robots and ever more addictive and immersive technology?

Tech companies are now building personal assistants that will not only be able to help us organize every aspect of our lives, but eventually give us a sense of companionship, psychological support and maybe even emotional connection.

People could become dependent on these Voice-AI assistants for more than just convenience in a world stunted by technological loneliness. Engineers are building it, and technology companies are counting on it for big future profits. AI-companionship is a lucrative business of the future.
7. Amazon scales to Become the American Huawei

There's every indication Amazon will get the Pentagon's $10 billion JEDI cloud contract. With part of its HQ2 in Virginia, its lobbying of Washington will get even more intense. Amazon Prime is likely to roll out a 1492 health tech subscription fee one day for all the benefits it will offer in terms of digital health in the smart home, which Amazon increasingly owns through Alexa.

Amazon could become a government agent, just as Huawei has been associated with an authoritarian "back door" for the Chinese government in 5G networks and switches. If Huawei became the Cisco and Apple of China, what indeed could Amazon become?

Amazon could be much more than a grocery delivery and retail behemoth; it could become too powerful for its own good and take artificial intelligence to the consumer in a way that could easily be abused, a Black Mirror story for the future. As it begins to scale in advertising, this will become more apparent.

An Amazon that's a leader in biotech, human augmentation, AI-companionship and personalized advertising is not that difficult to imagine. Where's the harm in that, right?
