By Adrian Pecotic
The race for advanced artificial intelligence has already started. A few weeks ago, U.S. President Donald Trump signed an executive order creating the “American AI Initiative,” with which the United States joined other major countries pursuing national strategies for developing AI. China released its “New Generation Artificial Intelligence Development Plan” in 2017, outlining its strategy to lead the world in AI by 2030. Months after that announcement, Russian President Vladimir Putin declared, “Whoever becomes the leader in this sphere will become the ruler of the world.”
But it’s less clear exactly how far AI will advance. It may only ever be able to perform fairly menial tasks like classifying photographs, driving, or bookkeeping. There’s also a distinct possibility that AI will become as smart as humans or smarter, able to make complex decisions independently. A race toward a technology with such a range of possible final states, stretching from banal to terrifying, is inherently unstable. A research program directed toward one understanding of AI may prove to have been misdirected after years of work. Alternatively, a plan to focus on small and achievable advances could be leapfrogged by a more ambitious effort.
China, the United States, and Russia are each negotiating this fraught landscape differently, in ways responsive to their unique economic and military situations. Governments are motivated to pursue leadership in AI by the promise of gaining a strategic advantage. At this early stage, it’s tough to tell what sort of advantage is at stake, because we don’t know what sort of thing AI will turn out to be. Since AI is a technology, it’s natural to think of it as a mere resource that can assist in attaining one’s goals, perhaps by allowing drones to fly without supervision or increasing the efficiency of supply chains.
But computers could surpass humans in finding optimal ways of organizing and using resources. If so, they might become capable of making high-level strategic decisions. After all, there aren’t material limitations restricting the intelligence of algorithms, like those that restrict the speed of planes or range of rockets. Machines more intelligent than the smartest of humans, with more strategic savvy, are a conceptual possibility that must be reckoned with. China, Russia, and the United States are approaching this possibility in different ways. The statements and research priorities released by major powers reveal how their policymakers think AI’s developmental trajectory will unfold.
China is pursuing the most aggressive strategy, focusing on developing advanced AI that could contribute to strategic decision-making. The U.S. approach is more conservative, with the goal of producing computers that can assist human decision-making but not contribute on their own. Finally, Russia’s projects are directed at creating military hardware that relies on AI but leaves decisions about deployment entirely in the hands of generals. In all three situations, the forms of AI these governments are investing their resources in reveal their expectations of the technological future. The country that gets it right could reap huge benefits in terms of military might and global influence.
When China’s State Council released the New Generation report in 2017, its ambition to create a technological infrastructure capable of churning out AI technologies no one else possesses greatly amplified talk of an AI arms race. The report recognizes that the exclusive control of a technology has the potential to open up a “first-mover advantage,” which allows a nation to make, and consolidate, gains before competitors can catch up. A doctrine developed by the People’s Liberation Army called “intelligentization” guides much of the plan by envisioning the future uses of AI.
The Chinese army sees computers as a way to cope with the vast amount of information available to the commanders of modern armed forces. Precise GPS locations of all one’s own units, as well as drone and satellite reports on the adversary’s, provide too much information for human cognitive capabilities. To alleviate this problem, the New Generation report commits to creating “strong supports to command and military decision-making [and] military deduction.” Strong, generalized AI—systems able to outperform humans in complex, changing environments—could process much more battlefield information than humans can, giving the military that wields it a substantial advantage over adversaries less able to exploit that information. But a great deal more research is needed before an AI system is advanced enough to represent the fluctuating circumstances of a battlefield and advise commanders.
China’s research strategy depends on ensuring that academic, military, and commercial research efforts are directed toward the same ends. The New Generation plan gives much of the responsibility for this research to the behemoth Chinese tech companies—Baidu, Alibaba, and Tencent. These firms form a “national team” expected to research different areas; for instance, Alibaba is responsible for so-called smart cities and Tencent is responsible for computer vision and medical applications.
Multiple National Engineering Laboratories have also been established, working both on state-of-the-art paradigms, like deep learning, and on not-yet-feasible techniques for constructing machine intelligence. Baidu in 2017 established one devoted to brain-inspired intelligence technology, which aims to simulate the exact functions of the brain. The hope is to achieve human-level AI through imitation. These technologies may be necessary for machine intelligence to outperform human decision-making in complex environments.
By contrast, U.S. national security circles seem to doubt AI will be capable of human-level thinking in the near future. In an interview late in his second term, then-U.S. President Barack Obama said, “my impression, based on talking to my top science advisors, is that we’re still a reasonably long way away” from “generalized AI”—just what the Chinese army has been theorizing about. Instead, Obama argued that the continued development of “specialized AI,” i.e., programs with one narrow use, was the most pragmatic course for innovation in the near term. Trump’s recently announced American AI Initiative seems to proceed from the same assumptions. The plan doesn’t introduce many concrete measures, mostly directing sub-departments to prioritize AI and share their data.
The most substantial indications of the U.S. view on AI’s national security impact come from the “Third Offset Strategy,” developed and initiated by Bob Work and Ash Carter, two Department of Defense officials, during Obama’s time in office. The plan has a strong focus on “human-machine collaboration,” in Work’s words. Rather than creating fully autonomous systems, the emphasis is on systems in which machines provide data and analysis to human operators, who then take action. This “teaming” is meant to use “the machine to make the human make better decisions” than either computers or humans could have made in isolation, according to Work. For example, one Pentagon project is using “computer vision” to assist human drone operators by analyzing the incoming video feeds and identifying targets. Using AI to provide data to people who then make decisions themselves is a more achievable task than creating more autonomous technologies.
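To make this division of labor concrete, here is a minimal, hypothetical Python sketch of that human-in-the-loop pattern: a detector flags candidate targets in a video frame, and a human confirms or dismisses each one. Every name here—`Detection`, `detect`, `review_frame`—is an illustrative assumption, not Project Maven’s actual code or interface.

```python
import random
from dataclasses import dataclass

@dataclass
class Detection:
    label: str         # e.g., "vehicle" or "person"
    confidence: float  # model score in [0, 1]
    box: tuple         # (x, y, width, height) in pixels

def detect(frame):
    """Hypothetical stand-in for any trained object detector."""
    # A real system would run the frame through a vision model;
    # here we return a canned detection so the sketch is runnable.
    return [Detection("vehicle", random.uniform(0.5, 0.99), (40, 60, 32, 18))]

def review_frame(frame, operator_confirms, threshold=0.8):
    """Machine flags candidates; a human confirms or dismisses each one."""
    # The machine filters and ranks; only high-confidence detections
    # reach the operator, reducing cognitive load.
    flagged = [d for d in detect(frame) if d.confidence >= threshold]
    confirmed = []
    for d in sorted(flagged, key=lambda d: d.confidence, reverse=True):
        # The decision itself stays with the human operator, per the
        # Third Offset's emphasis on "human-machine collaboration."
        if operator_confirms(d):
            confirmed.append(d)
    return confirmed

# Example: an operator policy that only confirms very confident detections.
print(review_frame(frame=None, operator_confirms=lambda d: d.confidence > 0.9))
```

The point the sketch illustrates is that the model only narrows and ranks what the operator sees; no action is taken without a human decision.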
The U.S. approach—working with extant technology and avoiding overenthusiasm about what might come next—could ensure maximum advantage from current capabilities.
A strategy of incremental progress risks being overtaken by those pursuing transformative leaps, however. Some of the applications of AI being pursued by the United States move in the direction of intelligentization, but the program never fully commits to the undertaking. This leaves the U.S. conceptualization of AI somewhere between a mere resource and a technology capable of making strategic contributions.
Like China, the United States also plans to rely on the private sector to develop AI for the national security domain. Applications of deep learning in the private sector—facial recognition software, object classification—also have many national security applications, including use in drones and population monitoring. The U.S. tech sector is the largest and most advanced in the world. However, the tech industry’s nearly unfettered capitalism—the overriding concern for shareholders, hostility to regulation and government oversight, and distrust of other firms—presents challenges to public-private partnerships.
These competitive pressures make it unlikely that Google and Facebook would share trade secrets, even if doing so would advance U.S. national interests. In China’s hybrid economic system, by contrast, some large tech companies retain strong ties to the government and are willing to collaborate with one another in a national team. There are also disincentives to entering into public-private partnerships at all, as the furor over Google’s involvement with the Pentagon showed. Thousands of Google’s own employees signed a petition demanding the company cease contributing to Project Maven, a military effort to use AI to process video from drones.
The Russian vision for AI is less ambitious than those of China and the United States. Its policy has a clear focus, the narrowest of the three, on applying AI to military hardware. The goal is not to create ways to make better decisions, but simply to make better weapons. The projects do not rely on particularly complex AI systems, reducing the amount of research needed to get off the ground.
For example, one much-publicized project is a self-guided missile able to change course midflight, meant to evade missile defenses. Kalashnikov, Russia’s most famous arms manufacturer, is developing stationary machine guns that use neural networks to choose and engage targets. Once placed and activated, these turrets would be able to continuously scan an area, take aim at any intruders, and fire, all without human input. These technologies are an attempt to refine conventional weapons using the possibilities afforded by AI, without pushing the envelope much further.
The limited scope of the Russian program, relative to those of China and the United States, is a consequence of a lack of opportunity as much as of strategic outlook. The Russian digital economy is much smaller than those of the United States and China, leaving the government without strong tech partners. Instead, the Russian defense industry is leading the research and application of AI. The close links between the Kremlin and the defense industry have aided the development of a generation of so-called bespoke weapons. The state owns a controlling stake in the Kalashnikov group and the entirety of Tactical Missiles Corporation, which is creating the self-guiding missiles. As in China and the United States, Russia’s strategy builds on the particular features of its domestic economy.
Although it’s unlikely Russia could compete in a race toward complex, human-level AI, it could certainly contend in a more traditional arms race. This would be a technological race as well as a material one, as the world’s large militaries compete over better applications of AI as well as over sheer volume. A comparatively smarter drone or submarine could lose to a greater number of less autonomous ones. In truth, both races are occurring at once, since nations pursuing revolutionary AI still need to invest in less advanced alternatives as insurance.
The relative importance of these two races depends on whether strong AI is possible—and if so, when. In surveys of experts conducted between 2011 and 2013, Vincent Müller and Nick Bostrom found that many thought strong AI would be achieved fairly soon: the median prediction gave a 50 percent chance of success by 2040 and a 90 percent likelihood by 2075. Another survey, published in 2018, found that some experts believed there was “a 50% chance of AI outperforming humans in all tasks in 45 years.” In both surveys, however, some experts doubted human-level AI is possible at all, and many who thought it possible put the date hundreds of years in the future.
In the absence of a reliable way to settle these questions, nobody knows who’s ahead. Uncertainty over opponents’ capabilities in an arms race can act as a dangerous accelerant, pushing governments to devote ever more resources and to accept more risky applications of those technologies. During the Cold War, imperfect information and mutual suspicion drove stockpiles of nuclear weapons higher and higher. Secrecy and obfuscation led to exaggerations of the capabilities of others, motivating heavy-handed responses.
However, the lack of insight into the nature of AI that today’s leaders face differs from the informational challenges of the Cold War. Policymakers aren’t confused about the present quantity of adversaries’ capabilities but about what those capabilities will be after years of research and development. If AI achieves superhuman levels of intelligence, it would revolutionize the global balance of power and relegate the losers to permanent second-class status.
As knowledge about AI advances, the race will increasingly resemble the nuclear buildup of the Cold War. Although there was uncertainty over the magnitude of nuclear capabilities, what missiles could do was clear, as was the impact of a gap of this or that size. U.S. fears of a “missile gap” in both the late 1950s and the 1970s motivated buildups. After those buildups, the Soviets, fearful of losing a nuclear confrontation, stockpiled biological weapons despite international bans and deadly accidents.
Once China or the United States is confident in a stable lead, they will have few incentives to compromise or share technology. The risk of being unexpectedly overcome will fade with an understanding of the necessary steps to build strong AI. Those nations that fall far behind may have to resort to desperate measures, like rushing unsafe technology into testing or application. Leaders may be forced to consider preemptive actions in order to stymie an adversary’s development, by way of hacking or sabotage of computing resources, even if those steps invite escalation. Clarity will intensify competition.
In the next few years, a race that can only destabilize relations between major powers can still be stemmed. The less that’s known, the less certain each nation can be that its strategy is the correct one. Much like hedge funds, governments recognize the prudence of keeping risk within acceptable parameters, accepting some while avoiding irresponsible gambles. All participants in this technological race would benefit from controlling competition, because they would no longer risk losing outright.
Recently, Xi Jinping and Liu He, China’s president and vice premier, have called for greater collaboration in AI research. They’ve offered to share the results of basic AI research with “the global village,” which, if reciprocated, would ensure that no one country achieves a decisive strategic advantage from a surprise scientific breakthrough. Countries and companies would still compete over how best to apply the basic frameworks discovered through research, but the competition would take place on a more level playing field.
Hopefully, the leaders of other countries will respond to these overtures in kind, taking the opportunity to halt this nascent arms race. There are many barriers to cooperation: International competition in AI is not insulated from other contentious issues, including trade and nuclear arms control, that divide these countries. But if they feel enough urgency, leaders could set some of those disputes aside in order to cooperate on AI. By the time sufficient urgency develops, however, one worries the race will be too far along to stop.
Adrian Pecotic is a freelance writer based in Sarajevo. He has also written for The Blizzard Quarterly and Mental Health Today.