FROM CYBER OPERATIONS to disinformation, artificial intelligence extends the reach of national security threats that can target individuals and whole societies with precision, speed, and scale. As the US competes to stay ahead, the intelligence community is contending with the fits and starts of the impending revolution brought on by AI.
The US intelligence community (IC) has launched initiatives to grapple with AI’s implications and ethical uses, and analysts have begun to conceptualize how AI will revolutionize their discipline. Yet these approaches, and the IC’s other practical applications of such technologies, have remained largely fragmented.
As experts warn that the US is not prepared to defend itself against AI wielded by its strategic rival, China, Congress has called for the IC, in the 2022 Intelligence Authorization Act, to produce a plan for integrating such technologies into its workflows to create an “AI digital ecosystem.”
The term AI is used for a group of technologies that solve problems or perform tasks that mimic humanlike perception, cognition, learning, planning, communication, or actions. AI includes technologies that can theoretically operate autonomously in novel situations, but its more common application is machine learning: algorithms that predict, classify, or approximate empirical results using big data, statistical models, and correlation.
While AI that can mimic humanlike sentience remains theoretical and impractical for most IC applications, machine learning is addressing fundamental challenges created by the volume and velocity of information that analysts are tasked with evaluating today.
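To make that "predict and classify" sense of machine learning concrete, here is a minimal sketch in Python with scikit-learn. The tiny training set, topic labels, and report snippets are invented for illustration and do not reflect any real IC system.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented toy data: short report snippets and hand-assigned topics.
train_docs = [
    "missile test conducted near the eastern launch site",
    "new export controls announced on semiconductor tooling",
    "troop movements observed along the northern border",
    "central bank raises rates amid currency pressure",
]
train_labels = ["military", "economic", "military", "economic"]

# TF-IDF turns text into feature vectors; logistic regression then learns
# a statistical rule correlating terms with labels. This is prediction and
# classification from data, not humanlike understanding.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(train_docs, train_labels)

# Triage a new snippet; likely "military" given the toy vocabulary overlap.
print(model.predict(["artillery units repositioned near the border"]))
```

A classifier like this can sort a firehose of documents faster than any human reader, which is exactly the volume-and-velocity problem described above.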
At the National Security Agency, machine learning finds patterns in the mass of signals intelligence collected from global web traffic. At the CIA's Directorate of Digital Innovation, responsible for advancing digital and cyber technologies across human and open-source collection, covert action, and all-source analysis, machine learning searches international news and other publicly accessible reporting. All-source analysis integrates every kind of raw intelligence collected by US spies, whether technical or human; an all-source analyst evaluates the significance or meaning of that intelligence taken together, memorializing it in finished assessments or reports for national security policymakers.
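One way that kind of bulk pattern-finding is commonly framed is as unsupervised anomaly detection, sketched below. The per-flow features and numbers are hypothetical, and nothing here reflects any agency's actual tooling.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Hypothetical per-flow features: bytes sent, session seconds, distinct peers.
normal_flows = rng.normal(loc=[5_000, 30.0, 3.0],
                          scale=[1_000, 10.0, 1.0], size=(500, 3))
odd_flow = np.array([[250_000, 2.0, 40.0]])  # huge burst, short-lived, fanned out

# Fit on ordinary traffic; flows that isolate easily are flagged as outliers.
detector = IsolationForest(random_state=0).fit(normal_flows)
print(detector.predict(odd_flow))  # -1 marks an anomaly worth an analyst's look
```

The model never knows what a "threat" is; it only surfaces statistical oddities for a human to judge.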
In fact, open source is key to the adoption of AI technologies by the intelligence community. Many AI technologies depend on big data to make quantitative judgments, and the scale and relevance of public data cannot be replicated in classified environments.
Capitalizing on AI and open source will enable the IC to use other finite collection capabilities, like human spies and signals intelligence collection, more efficiently. Those other disciplines can be reserved for obtaining the secrets hidden not just from humans but from AI, too. In this context, AI may supply better global coverage of unforeseen or low-priority collection targets that could quickly evolve into threats.
Meanwhile, at the National Geospatial-Intelligence Agency, AI and machine learning extract data from images that are taken daily from nearly every corner of the world by commercial and government satellites. And the Defense Intelligence Agency trains algorithms to recognize nuclear, radar, environmental, material, chemical, and biological measurements and to evaluate these signatures, increasing the productivity of its analysts.
In one example of the IC's successful use of AI, after exhausting all other avenues, from human spies to signals intelligence, the US was able to find an unidentified WMD research and development facility in a large Asian country by locating a bus that traveled between it and other known facilities. To do that, analysts employed algorithms to search and evaluate images of nearly every square inch of the country, according to a senior US intelligence official who spoke on background with the understanding that they would not be named.
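The official did not describe the algorithms involved, but the generic pattern for such a search is tile-and-detect: slice a large scene into tiles and run an object detector over each one. The sketch below uses an off-the-shelf torchvision detector whose COCO label set happens to include a "bus" class; a real geospatial search would rely on purpose-built models and imagery pipelines.

```python
import torch
from torchvision.models.detection import (
    FasterRCNN_ResNet50_FPN_Weights,
    fasterrcnn_resnet50_fpn,
)

BUS = 6  # "bus" in the COCO label set this pretrained detector was trained on

weights = FasterRCNN_ResNet50_FPN_Weights.DEFAULT
model = fasterrcnn_resnet50_fpn(weights=weights).eval()

def find_buses(scene: torch.Tensor, tile: int = 512, threshold: float = 0.7):
    """Slide over a large image tensor (3, H, W) tile by tile and return
    the (y, x) offsets of tiles containing a confident bus detection."""
    hits = []
    _, h, w = scene.shape
    for y in range(0, h - tile + 1, tile):
        for x in range(0, w - tile + 1, tile):
            with torch.no_grad():
                out = model([scene[:, y:y + tile, x:x + tile]])[0]
            keep = (out["labels"] == BUS) & (out["scores"] > threshold)
            if keep.any():
                hits.append((y, x))
    return hits

# Usage with a placeholder image; a real pipeline would stream satellite tiles.
print(find_buses(torch.rand(3, 1024, 1024)))
```

Scaled across millions of tiles, the detector does the looking; analysts then trace where the flagged vehicle goes.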
While AI can calculate, retrieve, and employ programming that performs limited rational analyses, it lacks the calculus to properly dissect the more emotional or unconscious components of human intelligence, described by psychologists as System 1 thinking.
AI, for example, can draft intelligence reports that are akin to newspaper articles about baseball, which contain a structured, non-logical flow and repetitive content elements. However, when briefs require complex reasoning or logical arguments that justify or demonstrate conclusions, AI has been found lacking. When the intelligence community tested the capability, the intelligence official says, the product looked like an intelligence brief but was otherwise nonsensical.
Such algorithmic processes can be made to overlap, adding layers of complexity to computational reasoning, but even then those algorithms can’t interpret context as well as humans, especially when it comes to language, like hate speech.
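As a concrete instance of such layering, the hedged sketch below stacks two base models beneath a second-level model. The toy moderation examples are invented, and, as the text notes, the extra layer adds capacity rather than contextual understanding.

```python
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Invented toy moderation data; real systems train on far larger corpora.
texts = [
    "you people always ruin everything",
    "what time does the meeting start",
    "get out of our neighborhood",
    "the weather is lovely today",
]
labels = ["hostile", "benign", "hostile", "benign"]

# Two base models feed their predictions into a second-level model:
# algorithms layered on algorithms. The stack gains capacity, but a phrase
# like "you people" is still just a token pattern to it; whether the words
# are hateful depends on context the pipeline never sees.
stack = make_pipeline(
    TfidfVectorizer(),
    StackingClassifier(
        estimators=[("nb", MultinomialNB()), ("rf", RandomForestClassifier())],
        final_estimator=LogisticRegression(),
        cv=2,  # tiny toy data; the default 5-fold CV needs more samples per class
    ),
)
stack.fit(texts, labels)
print(stack.predict(["you people again"]))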
AI’s comprehension might be more analogous to the comprehension of a human toddler, says Eric Curwin, chief technology officer at Pyrra Technologies, which identifies virtual threats to clients, from violence to disinformation. “For example, AI can understand the basics of human language, but foundational models don’t have the latent or contextual knowledge to accomplish specific tasks,” Curwin says.
“From an analytic perspective, AI has a difficult time interpreting intent,” Curwin adds. “Computer science is a valuable and important field, but it is social computational scientists that are taking the big leaps in enabling machines to interpret, understand, and predict behavior.”
In order to “build models that can begin to replace human intuition or cognition,” Curwin explains, “researchers must first understand how to interpret behavior and translate that behavior into something AI can learn.”
Although machine learning and big data analytics provide predictive analysis about what might or will likely happen, they can't explain to analysts how or why they arrived at those conclusions. The opaqueness of AI reasoning, and the difficulty of vetting sources that consist of extremely large data sets, can impact the actual or perceived soundness and transparency of those conclusions.
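One common, and only partial, response to that opacity is post-hoc attribution. The sketch below uses scikit-learn's permutation importance on synthetic data to rank input features by influence; note that even this falls well short of a sourced, auditable line of reasoning.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(1)
X = rng.normal(size=(400, 3))                   # three anonymous input features
y = (X[:, 0] + 0.1 * X[:, 2] > 0).astype(int)   # feature 0 mostly drives the label

model = GradientBoostingClassifier().fit(X, y)

# Shuffle each feature in turn and measure how much accuracy drops:
# a rough, after-the-fact ranking of influence, not a chain of reasoning.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
print(result.importances_mean)  # feature 0 should dominate

# Attribution says *which* inputs mattered; it still cannot produce the
# kind of sourced, auditable argument that tradecraft standards expect.
```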
Transparency in reasoning and sourcing is a requirement of the analytic tradecraft standards that govern products produced by and for the intelligence community. Analytic objectivity is also statutorily required, sparking calls within the US government to update such standards and laws in light of AI's increasing prevalence.
Machine learning and algorithms, when employed for predictive judgments, are also considered by some intelligence practitioners to be more art than science. That is, they are prone to biases and noise, and they can rest on unsound methodologies that lead to errors similar to those found in the criminal forensic sciences and arts.
“Algorithms are just a set of rules, and by definition are objective because they’re totally consistent,” says Welton Chang, cofounder and CEO of Pyrra Technologies. With algorithms, objectivity means applying the same rules over and over. Evidence of subjectivity, then, is the variance in the answers.
“It’s different when you consider the tradition of the philosophy of science,” says Chang. “The tradition of what counts as subjective is a person's own perspective and bias. Objective truth is derived from consistency and agreement with external observation. When you evaluate an algorithm solely on its outputs and not whether those outputs match reality, that’s when you miss the bias built in.”
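A toy illustration of Chang's distinction, using an invented keyword rule: the function below is perfectly consistent, and therefore "objective" in the narrow sense, yet its outputs diverge systematically from reality.

```python
# An invented, deliberately skewed keyword rule; the skew is the point.
THREAT_TERMS = {"protest", "march", "strike"}

def threat_score(text: str) -> int:
    """Deterministic: identical inputs always score identically."""
    return sum(term in text.lower() for term in THREAT_TERMS)

# Zero variance across runs, so "objective" by the consistency standard...
assert threat_score("peaceful march downtown") == threat_score("peaceful march downtown")

# ...yet judged against reality the rule over-flags lawful assembly and
# misses a real threat phrased differently: bias that consistency never reveals.
print(threat_score("peaceful march downtown"))    # 1 -> flagged
print(threat_score("plan to attack the depot"))   # 0 -> missed
```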
Depending on the presence or absence of bias and noise within massive data sets, especially in more pragmatic, real-world applications, predictive analysis has sometimes been described as “astrology for computer science.” But the same might be said of analysis performed by humans. Stephen Marrin, a scholar on the subject, writes that intelligence analysis as practiced by humans is “merely a craft masquerading as a profession.”
Analysts in the US intelligence community are trained to use structured analytic techniques, or SATs, to make them aware of their own cognitive biases, assumptions, and reasoning. SATs, which use strategies that run the gamut from checklists to matrices that test assumptions or predict alternative futures, externalize the thinking or reasoning used to support intelligence judgments, which is especially important given that in the secret competition between nation-states not all facts are known or knowable. But even SATs, when employed by humans, have come under scrutiny from experts like Chang, specifically for the lack of scientific testing that could demonstrate an SAT's efficacy or logical validity.
As AI is expected to increasingly augment or automate analysis for the intelligence community, it has become urgent to develop and implement standards and methods that are both scientifically sound and ethical in law enforcement and national security contexts. While intelligence analysts grapple with how to square AI's opacity with the evidentiary standards and argumentation methods required in those contexts, the same struggle can be found in understanding analysts' own unconscious reasoning, which can lead to accurate or biased conclusions.