By Yuval Noah Harari
The race to develop artificial intelligence (AI) is gathering momentum, and as the United States and China pull ahead, other countries, especially in the developing world, are lagging far behind. If they don’t catch up, their economic and political prospects will be grim.
For those countries at the back of the pack, the economic challenges will be hard enough: In an automated world, there will be far less demand for the unskilled labor they’ve typically provided. But the political dangers will be equally daunting. AI already makes it possible to hack human beings—to collect data about individuals and then use it to decipher, predict, and manipulate their desires. For example, reporting by a number of newspapers revealed that Cambridge Analytica had done just that with American voters’ Facebook data.
But there’s an added challenge for those left behind in the race. To hack humans, governments and corporations need access to enormous amounts of information about real-life human behavior, which makes data perhaps the most important resource in the world. But most of the world’s data is mined by the United States, China, and companies based there.
If this trend continues, the world could soon witness a new kind of colonialism—data colonialism—in which raw information is mined in numerous countries, processed mainly in the imperial hub, and then used to exercise control throughout the world. For example, data giants in San Francisco or Shanghai could compile the entire medical and personal history of politicians and officials in distant countries and use it to influence them or manipulate public opinion about them.
Beyond that, those who control the data could eventually reshape not only the world’s economic and political future but also the future of life itself. The combination of AI and biotechnology will be critical for any future attempts to redesign bodies, brains, and minds. Elites in the United States and China who have access to those technologies could determine the course of evolution for everyone, according to their particular values and interests. Abilities they deem useful, such as discipline and rote intelligence, might be enhanced at the cost of attributes believed to be superfluous, such as spirituality.
Those left behind in the race to hack humans have two options: join or regulate.
It is unlikely that smaller countries will be able to single-handedly produce their own Google or Baidu. A joint effort by the 28 members of the European Union or by Latin America’s Southern Cone countries, however, might succeed. To increase their chances of doing so, they could focus on areas that the front-runners have so far neglected. Until now, the development of AI has focused on systems that enable corporations and governments to monitor individuals. Yet the world needs the opposite, too: ways for individuals to monitor corporations and governments. By building improved tools to fight corruption or address police brutality, for example, latecomers to the race could carve out a niche for themselves and also become a check on the data superpowers.
Alternatively, countries that can’t compete with the AI front-runners can at least try to regulate the race. They can lead initiatives to build tough legal regimes around the most dangerous emerging technologies, such as autonomous weapon systems or enhanced superhumans. And much as countries create laws to protect their own natural resources, they can start to do the same for their data. International mining companies have to pay something to the countries where they dig up iron ore, and the same should go for tech companies collecting data.
This is particularly true when mining that data might cause harm to the local population. For example, a crucial stage in the process of developing autonomous vehicles involves allowing them to drive under real-life conditions, collecting data on the mishaps, and then using this data to perfect the technology. Developed countries have already placed strict limitations on autonomous vehicles—which will likely last until those vehicles’ safety is guaranteed—and so corporations might be tempted to begin testing the technology in developing countries where regulations are laxer and where fatal accidents would raise fewer eyebrows. Something similar might happen with medical data, which could be mined on the cheap in developing countries with weak privacy laws but then collected and processed in the AI hub, which would reap most of the benefits of the research.
It is not too soon for the countries that provide crucial data to start demanding better returns. They could create an organization of data-exporting countries, for example, that would vastly expand their leverage over the world’s Amazons and Alibabas. And if they start sharing in the profits of data collection, they would have some means for coping with the economic shocks that will come as robots replace textile workers and truck drivers.
It is far from certain that the world’s weaker states can avoid being data-colonized. But they have to try. If they bury their heads in the ground, focus on their immediate problems, and ignore the AI race, their fate will be decided in their absence.