1 October 2022

Democracy and Autocracy in the Fight for AI’s Future

Harrison Schramm & Regan Copple

The conflict that will define the next era pits liberal democracies against authoritarian and autocratic forms of government, and it will be fought both on the literal battlefield and in the development and deployment of advanced technologies, including but not limited to Artificial Intelligence (AI). This was reflected in many of today's comments about how the free countries of the world will employ AI in this conflict as opposed to how the adversaries of freedom will; this conflict was the central issue at the recent Special Competitive Studies Project (SCSP) Summit.

In this piece, we take the SCSP's question and turn it on its head, expanding it into a far greater issue. Not only do we need to consider how AI will shape the current competition, but also how this competition will shape the future of AI. We begin with the assumption that the differences between what free and autocratic governments consider desirable and expected for AI are stark. In our opinion, the strongest protections for, and championing of, individuals in this space are currently found in the European Union[i], with the United States following very closely behind. In short, the US and her Allies believe in both the agency and freedom of the individual. Taken from a sufficient perspective, both sides of the US political aisle – in their own ways – champion individuality and freedom, although they do not completely agree on what this means. The situation is not bleak, however: while free people do not agree on what ‘free’ means, we can rapidly and strongly agree on what it does not mean, and we should work from that as a start.

The Battle for the Soul of AI

Western computing will aim to reflect Western values; in the free world, the ever-present risk is that AI will be used as a vector for injustice through improper application and insufficient vigilance in discovering pathologies before deployment. Implicit in this statement is the primacy of humans, collectively – not just ‘some’ humans. This reflection of values stems partly from what the consumers of the AI will expect, but it is also a reflection of the individuals who write the code, who make innumerable assumptions and small decisions in development that cascade into the fundamental nature of the algorithm.

As professionals with one foot each in the technology and policy camps, we are acutely aware that the ultimate impact of high-end computing falls on the lives of individual humans. The balance between making the ‘wrong choice’ in either direction – yes when we should say no and, more frighteningly, no when we should say yes – is ever on our minds. While we can only speak to our own emotions, we passionately believe that responsible practitioners live in fear: fear of doing wrong by the humans who are impacted by these algorithms. In the West, when mistakes happen, learning happens – both in the sphere of technology and in the law. We are ever optimistic that this learning moves away from injustice, and away from giving greater power to AI than is warranted or necessary. Compared to other forms of government, democracies will tend to reflect the value of an individual’s agency in their systems.

In stark contrast, large-scale computing developed and used by the governments of China and Russia will – and does – reflect their values, specifically the value of leveraging an advantage in power to further increase advantages in power. Ironically, the fear described in Western development and deployment of computing systems is astonishingly close to the fear felt by autocrats; both share a fear of the humans impacted by algorithms. The key difference is that the autocrats’ fear is not of doing wrong by those they serve, but of being overthrown by them. With objectives formed in this manner, the incentives for a designer quickly move from service to subjugation[ii]. Injustice executed by these AI systems is not an accident but a design choice – the injustice is the point.

An Historical Parallel

The conflict between free societies and authoritarianism will spill over into control of the pathways by which AI develops, and the choice among alternate pathways may reverberate through the human experience for centuries. This raises the stakes of the contest for the soul of AI.

An imperfect but illustrative parallel is to consider how the development of nuclear power – and nuclear weapons after WWII – would have differed in an alternate universe where the Axis powers had developed the technology first. Not only would the outcome of the war have been dramatically different, but the willingness to use nuclear weapons in a tactical setting may have been much different as well. We cannot construct this argument without acknowledging that the United States was the first – and to date only – power to use nuclear weapons in war. This first use at the end of the Second World War is balanced by the non-use during the near decade in which the United States enjoyed nuclear primacy, a period during which cases for the use of nuclear weapons could certainly have been made. The differences between that world and this one are too great to cover in this short piece, but we feel confident in stating that the differences would be real – and stark. This parallel is also poignant because the current maturity of AI in this competition is likely similar to the maturity seen at the end of the Cold War.

It’s About Resources

Ultimately, both the post-WWII arms race and the ongoing competition that includes AI rest on resources. Worldwide, resources are increasingly measured by the talent and education of the people who participate rather than by raw materials. It is for this reason that the United States and her allies need to continue pursuing pathways that invigorate participation in this workforce at all levels – providing encouragement, mentorship, and educational opportunities across the whole of American society for those who have the talent and determination to do this type of work – and this is a place for legislation[iii]. Additionally, the United States needs to retain the ability to build the computing infrastructure for this technology, and we were happy to see the recent passage of the CHIPS Act[iv].

When lawmakers on both sides of the aisle consider investments in the future of technology, we encourage them to think through the lens of history and to recognize that the coming competition will reverberate far beyond the geopolitical threads of today.
