8 April 2024

How to fix the military’s software SNAFU

JOHN SPEED MEYERS

The only institution more mired in acronyms than the U.S. military is, in my experience, the software industry. The former’s thorough embrace of the latter is reflected, for example, in a recent piece by serious commentators that includes a four-page glossary. To be sure, software’s ability to supercharge military operations makes this alphabet soup palatable, but it also conceals a dangerous security SNAFU.

If software is to be more of a benefit than a liability, its inevitable flaws must be spotted and fixed before they can be exploited by China, Russia, and other adversaries. Unfortunately, an analysis I conducted of popular open source software made available by the Pentagon for its units and contractors to use provides strong evidence that the U.S. military is shipping insecure software that contains many known vulnerabilities (CVEs, in software-speak).

Fortunately, the U.S. military, elected leaders, and the public don’t need to accept this situation as normal. There are technical and organizational solutions that would allow the military to embrace software safely. Creating safe and toil-free software requires, at a minimum, rethinking the links in the military’s software supply chain and preferring software that is rapidly updated. It also requires reconsidering the idea that there should be a single, free military-run repository of safe software. The software industry loves the idea of a “single source of truth,” but this totalitarian thinking, which military bureaucracies sometimes prefer too, is a recipe for disaster in the fast-moving world of software.

An unacknowledged underbelly

The military, like corporate America, has embraced digital transformation enthusiastically. There are now dozens of military “software factories.” Military contractors want to be “software primes.” The Atlantic Council has even recently created the Commission on Software-Defined Warfare.

At first blush, this “software defines everything” mentality is reasonable and healthy. The pervasiveness of computers and software in modern military organizations is undeniable. The Pentagon, once the largest office building in the world, is today a digital warren, with computers and cables everywhere. Command centers are mostly screens and databases. Furthermore, the intense interest in generative artificial intelligence will only further entwine military operations with software. Indeed, how could the U.S. military not embrace software and software development?

But there’s a dark side to the military’s pervasive use of software. The unfortunate fact is that most software is full of security flaws: not just “zero-days,” or undiscovered security flaws, but also known security flaws. And the number of known flaws discovered each year is growing rapidly: from about 6,000 in 2015 to nearly 29,000 in 2023. Importantly, many of these flaws are in software components: chunks of code that are used and reused, often widely, to build entire applications.

There are at least two dangers of using software with known flaws. The first is that enemies may use them to gather intelligence and hinder U.S. military operations. Indeed, recent software security research suggests that most campaigns by hostile militaries and intelligence agencies exploit known flaws, not zero-days.

The second is the toil of identifying, triaging, and remediating known vulnerabilities to meet compliance and security requirements, which can drown a military software organization. When a colleague and I interviewed software professionals at ten organizations, we found that modern software organizations commonly spend thousands of staff hours on vulnerability management each year. One U.S. military unit we talked to was likely spending 15,000 hours of staff time per year on vulnerability management, roughly the annual output of seven full-time employees. This is an unacknowledged underbelly of the so-called digital transformation.

How common, then, are known software vulnerabilities in the military’s software?

A look inside Iron Bank

To answer this, an analyst would ideally take a random sample of software used by or developed by the U.S. military and analyze it. Unfortunately—and setting aside classification issues—there is no such known dataset.

There is, however, a decent window into U.S. military software: software components made available by the U.S. Air Force via its Iron Bank website. These components are “Lego blocks,” to borrow the terminology of a former chief software officer of the Air Force, that military software teams and military contractors can use to build and deploy software applications. In technical terms, they are software “containers,” an efficient method for bundling open source software components and custom code into a larger software application. Conveniently for researchers, the containers in Iron Bank are free for anyone to inspect and analyze.

One particularly popular and important set of “Lego blocks” within Iron Bank is the containers that comprise the military’s Big Bang platform, a “software factory in a box.” An analysis of the containers that make up most of this platform finds that they contain, on average, 57 known vulnerabilities. Each vulnerability represents, at a minimum, a hassle for the software security specialists responsible for evaluating the software and, more troublingly, a risk of exploitation.

Software security experts will point out that not all software vulnerabilities are created equal. Some are indeed more severe than others, creating a higher level of risk for users. This analysis therefore also considered whether these Iron Bank images contained critical- or high-severity vulnerabilities. On average, each container had at least one.
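
As a rough illustration of the tally behind those figures, the Python sketch below assumes each container’s scan results have already been reduced to a list of severity labels; the container names and counts are made up for the example.

    from statistics import mean

    # Hypothetical per-container results: each entry maps a container name to
    # the severity labels of its known vulnerabilities, as reported by a scanner.
    scan_results = {
        "big-bang/component-a": ["Low", "Medium", "Medium", "High"],
        "big-bang/component-b": ["Low", "Critical"],
        "big-bang/component-c": ["Medium"],
    }

    # Average number of known vulnerabilities per container.
    avg_vulns = mean(len(sevs) for sevs in scan_results.values())

    # Number of containers with at least one critical- or high-severity finding.
    serious = {"Critical", "High"}
    with_serious = sum(1 for sevs in scan_results.values() if serious & set(sevs))

    print(f"Average known vulnerabilities per container: {avg_vulns:.1f}")
    print(f"Containers with a critical/high finding: {with_serious} of {len(scan_results)}")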

Can the flaws be fixed?

Ideally, the U.S. military would not provide software building blocks filled with known vulnerabilities to its own units and contractors.

One approach to addressing this problem is technical. The military should prioritize acquiring software that can be rapidly updated so that new vulnerabilities, when they inevitably emerge, can be fixed quickly. Traditional providers of open source software, including commercial Linux distributions like Red Hat Enterprise Linux and community-run distributions like Debian, do not prioritize rapid updating, in the name of offering stable software. Newer Linux distributions like Alpine and Wolfi offer this rapid updating while also attempting to provide stability. Iron Bank container images have only recently begun to be “based” on Alpine and Wolfi. This trend should continue.

The second approach is organizational. For example, Iron Bank currently offers only one price to its users: free. While this may seem “low cost,” software companies have no motivation to provide secure and high-quality components to Iron Bank because they cannot charge for them. If Iron Bank created “private” repositories in which companies could charge for access to high-quality software building blocks, it seems likely that Iron Bank images would have dramatically fewer known vulnerabilities.

But perhaps Iron Bank itself is a problem. It’s unclear that the U.S. military has a comparative advantage in maintaining a repository of hardened container images. There are now a number of companies competing to do this. The military doesn’t build its own laptops; perhaps it shouldn’t be building its own repository of hardened container images.

The military and the software industry are, in this analysis, too alike. Both are too fond of acronyms and both ship software with too many known vulnerabilities. Defense leaders looking to harness the digital transformation ought not wait until their software is FUBAR.

John Speed Meyers is the head of Chainguard Labs at Chainguard. He leads a research team focused on open source software security. He previously worked at In-Q-Tel and the RAND Corporation.
