17 April 2021

Artificial Intelligence and the Future of Warfare

By Clive Williams

Artificial intelligence is changing the world we live in. Probably by the end of this decade it will have redefined the workplace, with significant implications for almost everything we do. Some AI applications are already part of our everyday lives, such as intelligent car navigation systems.

So, what is artificial intelligence? AI can be defined as ‘the ability of machines to perform tasks that normally require human intelligence’.

AI has in fact been around for several decades. The IBM chess-playing computer called ‘Deep Blue’ defeated world chess champion Garry Kasparov as far back as 1997. But the development of AI has accelerated rapidly in recent years, with a substantial increase in the number of real-world applications in which it is practical.

According to the US Department of Defense’s Joint Artificial Intelligence Center, the reasons for this are larger datasets, increased computing power, improved machine-learning algorithms, and greater access to open-source code libraries.

Let’s look at these in turn.

First, massive datasets. Today, computers, digital devices and sensors connected to the internet are constantly producing and storing large volumes of data, whether in the form of text, numbers, images, audio or other data files.

Second, increased computing power has come from graphics processing units, or GPUs, which are highly parallel processors able to perform large numbers of calculations at the same time. Massive parallelism speeds up both the training of AI models and the running of those models operationally (a short code sketch after the fourth point below illustrates the effect).

Third, better algorithms have made machine-learning models more flexible, more robust and more capable of solving different types of problems.

Fourth, access to open-source code libraries now allows organisations to use and build on the advanced work of others without having to start from scratch.
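To make the second and fourth points concrete, here is a minimal Python sketch. It is an illustration of my own, not anything from the source: the matrix size and the choice of library (the open-source NumPy library, which incidentally also demonstrates the fourth point about reusing others’ work) are arbitrary assumptions. The same matrix multiplication is computed twice, once in plain sequential Python loops and once through NumPy’s optimised vectorised routines; GPUs take the second approach to an extreme by running thousands of such operations simultaneously.

```python
# A minimal sketch, with illustrative (assumed) matrix sizes, of why
# parallel-friendly vectorised computation matters for AI workloads.
import time

import numpy as np

rng = np.random.default_rng(0)
a = rng.standard_normal((200, 200))
b = rng.standard_normal((200, 200))

# Sequential approach: one multiply-accumulate at a time in Python loops.
def matmul_loop(x, y):
    n, k = x.shape
    _, m = y.shape
    out = np.zeros((n, m))
    for i in range(n):
        for j in range(m):
            for p in range(k):
                out[i, j] += x[i, p] * y[p, j]
    return out

start = time.perf_counter()
slow = matmul_loop(a, b)
loop_seconds = time.perf_counter() - start

# Vectorised approach: the same mathematics handed to optimised parallel
# routines; GPU hardware magnifies this speed-up by orders of magnitude.
start = time.perf_counter()
fast = a @ b
vec_seconds = time.perf_counter() - start

print(f"loop: {loop_seconds:.2f}s, vectorised: {vec_seconds:.5f}s")
print("results match:", np.allclose(slow, fast))
```

On a typical machine the vectorised line is hundreds of times faster than the loops, and model training consists of vast numbers of exactly this kind of operation, which is why GPU parallelism has been such a catalyst.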

For defence, this means that autonomous platforms, whether airborne or underwater, could in future operate without needing to carry a vulnerable or capability-limiting human. Platforms should still be controlled by humans for moral, ethical and legal reasons, but ‘kill’ decisions could end up being distilled down to simple ‘yes’ or ‘no’ commands.

A control problem might arise in a conflict situation where there’s difficulty in maintaining a continuous link to an autonomous platform. In those circumstances, for example, an underwater attack drone placed on the ocean bottom in the Sunda Strait could be programmed to identify enemy submarines and automatically destroy them if they pass by.

The same decision delegation could be made to long-endurance solar-powered drones programmed to eliminate key insurgents and terrorists through facial or gait recognition.

AI could mean that high-tech countries like Australia will have far fewer battle casualties, but glamorous hotshot jobs like being a fighter pilot or stealthy submariner might no longer exist. Instead, commanders and combatants would more likely be controlling robot attackers from behind computer screens in relatively safe locations.

The ‘risk to blue force’ factor will of course depend on the nature of the enemy—whether it’s a low-tech insurgent group like al-Shabaab or a high-tech nation-state like China.

The most practical areas in which to invest in military AI, at least in the short term, are those relevant to technologically uncontested environments, such as Afghanistan. In state-versus-state conflict we must assume that potential adversaries are a match for us in AI (if not more advanced, in China’s case) and will attempt to manipulate our AI systems or destroy them kinetically.

For the time being at least, AI still falls short of human performance when it comes to multitasking. A soldier can identify an enemy target, decide on a weapon system to employ against it, predict its path, and engage it. AI cannot currently accomplish that simple set of related tasks.

Potential AI problems can also arise from biased data and context misunderstanding. AI systems have problems distinguishing between correlation and causation. (One example is the correlation between drowning deaths and ice-cream sales. An AI system fed with data about these two outcomes would not be aware that the two correlate because they are a function of warmer weather. AI might conclude that to prevent drowning deaths, we should restrict ice-cream sales.)
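The ice-cream example can be demonstrated in a few lines of code. The Python sketch below uses entirely invented numbers; only the common-cause structure matters, in which warm weather drives both ice-cream sales and drownings while neither affects the other. The raw correlation between the two outcome series comes out strong, but it largely vanishes once temperature is controlled for, a distinction a system trained only on the two outcomes cannot see.

```python
# A minimal sketch of the ice-cream/drowning confound, on synthetic data.
# All coefficients and noise levels are invented for illustration.
import numpy as np

rng = np.random.default_rng(42)

# A year of daily temperatures: the hidden common cause.
temperature = 15 + 10 * np.sin(np.linspace(0, 2 * np.pi, 365)) + rng.normal(0, 2, 365)

# Both outcomes depend only on temperature, not on each other.
ice_cream_sales = 50 + 8 * temperature + rng.normal(0, 20, 365)
drownings = 0.5 + 0.1 * temperature + rng.normal(0, 0.5, 365)

# A naive learner sees a strong correlation between the two outcomes...
r = np.corrcoef(ice_cream_sales, drownings)[0, 1]
print(f"correlation(sales, drownings) = {r:.2f}")

# ...but after regressing out temperature from each series, the
# correlation of the residuals (a simple partial correlation) collapses.
sales_resid = ice_cream_sales - np.poly1d(np.polyfit(temperature, ice_cream_sales, 1))(temperature)
drown_resid = drownings - np.poly1d(np.polyfit(temperature, drownings, 1))(temperature)
r_partial = np.corrcoef(sales_resid, drown_resid)[0, 1]
print(f"after controlling for temperature = {r_partial:.2f}")
```

On this synthetic data the first correlation is strongly positive while the temperature-controlled one is near zero, which is exactly the inference an AI system fed only sales and drowning figures would fail to make.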

However, AI is rapidly overcoming these basic limitations and we could see a much more comprehensive range of AI capabilities by 2030, particularly in the defence sector.

In the interim, we are likely to see a rapid increase in the use of AI in crewed platforms, allowing humans to concentrate on the tasks that humans do better.

In the longer term, removing humans from combat platforms is a logical and likely progression. The US Air Force’s next-generation air superiority fighter is expected to include optional unmanned autonomous operation. And autonomous unmanned underwater attack platforms are expected to be in service with China’s navy long before Australia gets its first Attack-class submarine in the mid-2030s.
