By Jon Harper
U.S. Special Operations Command and the Intelligence Advanced Research Projects Activity are pursuing new technologies to identify and track threats.
Commandos rely on these types of capabilities when attacking terrorist groups and performing other critical missions.
“Intel drives ops,” SOCOM Commander Gen. Richard Clarke said at a recent Senate Armed Services Committee hearing. “In order for us to compete more effectively in the future, we have to modernize both our precision strike and ISR … so that [special operators] can quickly see and sense the battlefield that they may have to be fighting in.”
Encrypted communications and electronic warfare capabilities are also critical to protect the force, he noted.
SOCOM’s program executive office for special reconnaissance is responsible for pursuing these types of technologies.
The office’s mission “is to lead the rapid and focused acquisition of state-of-the-art sensors and associated command-and-control, emplacement, recovery and specialized communication systems across all domains to enable total situational awareness for Special Operations Forces,” PEO David Breede said in an email to National Defense.
Its technology portfolio encompasses technical collection and communication, to include hostile forces tagging, tracking and locating; blue force tracking; tactical video systems for reconnaissance, surveillance and target acquisition; and remote advise-and-assist kits.
It also includes integrated air-, maritime- and ground-based sensor systems; signals intelligence processing, exploitation and dissemination; sensitive site exploitation with biometrics, forensics and intelligence analysis capabilities; and leveraging of national space-based technologies.
“We’re really looking at operations in a near-peer and non-permissive environment,” Breede said at last year’s virtual Special Operations Forces Industry Conference managed by the National Defense Industrial Association on behalf of SOCOM.
Breede’s top three priorities for technology development are advanced unattended ground sensors, flexible tactical radio frequency systems and collaborative autonomous platforms, he noted in his email.
“While reducing size, weight and power requirements of unattended ground sensors will always be a focus area, the key to modernization will be increasing onboard processing power, integrating alternative communication pathways, and improving interoperability with disparate sensor networks,” he said.
Such technologies can help commandos gather critical intelligence without having to put “boots on the ground” in dangerous areas and remote locations. They can also facilitate the advising of foreign partners without SOF being “shoulder to shoulder” with them on the front lines, Breede has said.
To bolster communications, small tactical RF systems should become more flexible not only through software-defined radios, but also through frequency-agile antennas and modularity across platforms and domains, he noted. SOCOM uses radios to transmit imagery as well as voice and text communications.
PEO Special Reconnaissance is also looking beyond today’s remotely operated intelligence-gathering systems and is eyeing collaborative autonomous platforms.
“Autonomy is crucial to the ability to operate in contested environments where traditional communication and navigation solutions may be challenged,” Breede said. “Collaborative autonomy enables unmanned platforms to operate based on a shared understanding of the environment without active operator control in those contested environments.”
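To make that idea concrete, here is a minimal sketch of what a “shared understanding of the environment” can look like in software: each platform maintains a local occupancy grid, and the grids are fused without an operator in the loop. The grid layout, fusion rule and toy data are illustrative assumptions, not SOCOM’s design.

```python
import numpy as np

def fuse_maps(local_maps):
    """Fuse per-platform occupancy grids (cells hold P(occupied)) into one shared map."""
    # Sum log-odds, assuming independent observations and a uniform 0.5 prior,
    # so any one platform's confident detection propagates to the shared picture.
    stacked = np.clip(np.asarray(local_maps, dtype=np.float64), 1e-6, 1 - 1e-6)
    log_odds = np.log(stacked / (1 - stacked)).sum(axis=0)
    return 1.0 / (1.0 + np.exp(-log_odds))

# Toy usage: two drones each observe part of a 1x3 corridor.
drone_a = np.array([[0.9, 0.5, 0.5]])   # confident about cell 0 only
drone_b = np.array([[0.5, 0.5, 0.8]])   # confident about cell 2 only
print(fuse_maps([drone_a, drone_b]))    # shared map reflects both observations
```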
Special Operations Command has high hopes that artificial intelligence and machine learning capabilities will help reduce manpower requirements for deploying robotic platforms.
“Today you’ve got an operator that’s teamed up one-on-one with an unmanned aerial system, and it completely takes him out of the fight as he’s maneuvering that” asset, said James Smith, SOCOM’s acquisition executive.
“We are getting better ISR from … the unattended ground sensors, the unmanned aerial systems,” he added. “The problem is each one of those sensors takes an operator off the line. So how do we use artificial intelligence and machine learning to get those sensors to interoperate autonomously and provide feedback to a single operator to enable that force to maneuver on the objective?”
Autonomous drones or ground robots equipped with AI could be used to clear areas such as buildings or tunnels and free up SOF maneuver forces to be much more effective and efficient on the battlefield as they pursue their mission objectives, he noted.
Special Operations Command also wants to build upon the machine learning capabilities that were demonstrated by Project Maven, which utilized the technology to sort through a deluge of video footage collected by drones in war zones such as Afghanistan and identify items of interest, Clarke noted. The technology helped separate the wheat from the chaff and greatly facilitated intelligence processing, exploitation and dissemination.
“We can now pull in … terabytes worth of data,” he said during a panel discussion hosted by the Hudson Institute. “A human cannot sort through and sift through this in sufficient detail, nor quickly enough to get to the pertinent information. So I think that’s an important part of what … Project Maven has brought into this with object detection, so that humans can do only those things that humans have to do and try to get the machines to do all those other things.”
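Clarke’s description maps onto a standard computer-vision workflow: run an object detector over sampled video frames and surface only the frames with confident detections for human review. The sketch below illustrates that triage pattern with an off-the-shelf torchvision detector; the model choice, sampling rate, confidence threshold and file name are assumptions for illustration, not Project Maven’s actual pipeline.

```python
import cv2
import torch
import torchvision

# Generic pretrained detector standing in for a mission-trained model.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

cap = cv2.VideoCapture("drone_footage.mp4")  # hypothetical input file
flagged = []
idx = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    if idx % 30 == 0:  # sample roughly one frame per second of 30 fps video
        rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
        tensor = torch.from_numpy(rgb).permute(2, 0, 1).float() / 255.0
        with torch.no_grad():
            det = model([tensor])[0]
        if (det["scores"] > 0.8).any():  # confident detection of any object
            flagged.append(idx)          # queue this frame for analyst review
    idx += 1
cap.release()
print(f"{len(flagged)} of {idx} frames flagged for human review")
```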
Being able to speed up SOCOM’s targeting cycle without requiring hundreds of analysts to examine intelligence is critical, he added.
Meanwhile, the Intelligence Advanced Research Projects Activity, also known as IARPA, has a new program to develop next-generation surveillance capabilities for the national security community.
The organization, which falls under the Office of the Director of National Intelligence, invests in high-risk, high-reward research efforts that seek to overcome some of the most difficult technical challenges facing U.S. spy agencies.
The Biometric Recognition and Identification at Altitude and Range, or BRIAR, program aims to develop new algorithm-based software systems capable of performing “whole body” biometric identification from drones and other platforms.
“Many intelligence community and Department of Defense agencies require the ability to identify or recognize individuals under challenging scenarios, such as at long-range, … through atmospheric turbulence, or from elevated and/or aerial sensor platforms,” according to an IARPA description of the program. “Expanding the range of conditions in which accurate and reliable biometric-based identification could be performed would greatly improve the number of addressable missions, types of platforms and sensors from which biometrics can be reliably used, and quality of outcomes and decisions.”
Mission applications of the technology could include counterterrorism, force protection, defending critical infrastructure and border security, noted program manager Lars Ericson.
However, the quality of imagery gathered by drones and other elevated surveillance platforms is often hindered by a number of factors that make it more difficult to accomplish biometric recognition, he said during a presentation to industry.
Atmospheric turbulence is a major problem that the agency hopes to overcome through the BRIAR program. “That introduces blur and distortion and intensity fluctuations due to dynamic changes in air molecules in … that optical path between the target and the sensor,” Ericson explained.
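A classical first step against such fluctuations, sketched here only as an illustration, is to register a short burst of frames and take a per-pixel temporal median, which suppresses the random frame-to-frame distortion; operational mitigation pipelines are far more sophisticated.

```python
import numpy as np

def temporal_median(frames):
    """frames: (N, H, W) stack of co-registered short-exposure frames."""
    # The median across time damps turbulence-induced warping and intensity
    # flicker, since each pixel's distortion is roughly zero-mean over the burst.
    return np.median(np.asarray(frames, dtype=np.float32), axis=0)
```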
Leveraging “probe video” footage also presents hurdles, he noted.
“In this case, you have different problems that are present in the imagery,” he said. “You have a very brief view of the subject of interest. It’s a severe look angle. There’s a high pitch angle there, and of course there’s motion and resolution challenges as well. And this would prove difficult to do accurate and reliable matching.”
While facial recognition — including long-range and “unconstrained” facial recognition — is a key capability of interest, the intelligence community needs “whole body” biometrics, he noted.
“There is a reliance on face recognition now,” Ericson said. “That’s not surprising. Face recognition has made significant advances over the last several years, but there was a benefit or a desire to be able to leverage additional biometric signatures or information in a given scene that can augment or inform or fuse with face [recognition] to improve your reliability and accuracy of those” matches.
That could include detecting and analyzing body shape, movement, measurements, or other aspects of a human form for the purposes of recognition, identification, or verification.
For example, drones could watch a group of individuals walk across an area and try to pick out persons of interest using a variety of metrics.
“Motion, gait, perhaps body shape or anthropometric information — if you could leverage that or extract that … that has promise to be able to improve the ability to do biometric matching,” he said.
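As a rough illustration of how such cues could become a matchable signature, the sketch below computes scale-invariant limb-length ratios and a gait-cadence estimate from pose keypoints. The keypoint layout, feature choices and the upstream pose estimator are all assumptions; nothing here reflects BRIAR’s actual algorithms.

```python
import numpy as np

def body_signature(keypoints):
    """keypoints: (frames, joints, 2) array; assumed joint order 0-3 = shoulder, hip, knee, ankle."""
    torso = np.linalg.norm(keypoints[:, 0] - keypoints[:, 1], axis=1)
    thigh = np.linalg.norm(keypoints[:, 1] - keypoints[:, 2], axis=1)
    shin = np.linalg.norm(keypoints[:, 2] - keypoints[:, 3], axis=1)
    # Limb-length ratios are scale-invariant, so they survive unknown range.
    ratios = np.array([(thigh / torso).mean(), (shin / torso).mean()])
    # Gait cadence: dominant temporal frequency of vertical ankle motion.
    ankle_y = keypoints[:, 3, 1] - keypoints[:, 3, 1].mean()
    cadence_bin = int(np.abs(np.fft.rfft(ankle_y))[1:].argmax()) + 1
    return np.concatenate([ratios, [cadence_bin]])

def match_score(sig_a, sig_b):
    # Cosine similarity between signatures; higher suggests the same person.
    return float(sig_a @ sig_b / (np.linalg.norm(sig_a) * np.linalg.norm(sig_b) + 1e-9))
```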
A capability known as person re-identification, or ReID, is also on the wish list. That includes systems that can identify the color and shape of an individual’s clothing, as well as their gender, age, hair style and items they might be carrying such as backpacks.
“ReID is a problem where you’re trying to identify other sightings of a person with different camera networks. Where else have you seen this person?” Ericson explained. “This work is a pretty hot topic in computer vision. There’s a lot of activity here and it’s primarily driven by the use cases around smart city and public safety” technology.
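At its core, the ReID matching Ericson describes reduces to ranking prior sightings by appearance similarity. A minimal sketch follows, assuming appearance embeddings have already been extracted by some upstream network; the embedding size and camera labels are illustrative.

```python
import numpy as np

def rank_sightings(query_emb, gallery_embs, camera_ids):
    """Rank gallery sightings (from any camera network) by cosine similarity to the query."""
    q = query_emb / np.linalg.norm(query_emb)
    g = gallery_embs / np.linalg.norm(gallery_embs, axis=1, keepdims=True)
    scores = g @ q                      # cosine similarity to every prior sighting
    order = np.argsort(-scores)         # best match first
    return [(camera_ids[i], float(scores[i])) for i in order]

# Toy usage: three prior sightings across two hypothetical camera networks.
rng = np.random.default_rng(0)
gallery = rng.normal(size=(3, 128))
query = gallery[1] + 0.1 * rng.normal(size=128)  # noisy re-sighting of person 1
print(rank_sightings(query, gallery, ["cam_A", "cam_B", "cam_A"]))
```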
To be successful, BRIAR must advance multi-modal fused biometric signatures such as whole body identification, build on unconstrained facial recognition capabilities, and collect large amounts of relevant data, Ericson said.
Desired program “deliverables” include: image matching at long range (100 to 1,000 meters); matching at severe pitch views (20 to 50 degrees); atmospheric turbulence mitigation; multi-image templates from video; body and face localization in moving video; cross-view whole body matching both indoors and outdoors; robustness against incomplete or occluded views; and multi-modal fusion, according to Ericson’s slides.
Solutions must be agnostic to sensor platforms and optics; adaptable to edge processing and real-time streaming; accurate across diverse demographics and body shapes; invariant to pose, illumination, expression and clothing changes; and transferable to different platform-specific environments.
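The multi-modal fusion requirement can be illustrated with the simplest common technique, score-level fusion, in which normalized match scores from each modality are combined with weights. The weights below are illustrative assumptions, not program values.

```python
def fuse_scores(face, body, gait, weights=(0.5, 0.3, 0.2)):
    """Score-level fusion of per-modality match scores, each assumed normalized to [0, 1]."""
    # A strong face match can dominate, while whole-body and gait cues
    # break ties at long range or severe pitch, where faces degrade first.
    w_face, w_body, w_gait = weights
    return w_face * face + w_body * body + w_gait * gait

# e.g. a weak face score at long range rescued by body/gait agreement:
print(fuse_scores(face=0.3, body=0.8, gait=0.7))  # 0.53
```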
“The [technology] evaluation is going to be conducted on the aggregated evaluation sets that have images of subjects across a wide range of sensors and platforms,” Ericson said. “That’s how we’re going to fundamentally evaluate the statistical performance of these algorithms. And so they need to be agnostic or at least robust to the kinds of sensor platforms and optics” that will be used during testing.
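As a simple illustration of such statistical evaluation, one common identification metric is the rank-1 rate: the fraction of probes whose top-scoring gallery match is the correct identity. The sketch below, with toy data, assumes a precomputed probe-versus-gallery similarity matrix; it is not BRIAR’s evaluation protocol.

```python
import numpy as np

def rank1_rate(score_matrix, gallery_labels, probe_labels):
    """score_matrix[i, j]: similarity of probe i to gallery subject j."""
    best = np.asarray(gallery_labels)[score_matrix.argmax(axis=1)]
    return float((best == np.asarray(probe_labels)).mean())

# Toy data: two probes, two gallery subjects; both probes match correctly.
scores = np.array([[0.9, 0.2],
                   [0.1, 0.7]])
print(rank1_rate(scores, ["alice", "bob"], ["alice", "bob"]))  # 1.0
```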
The four-year program is expected to kick off in the third or fourth quarter of fiscal year 2021. IARPA hopes to transfer the technology to other government agencies after the project is completed. Its customers include the CIA and other intelligence agencies, the U.S. military and the Department of Homeland Security.
Historically, about 70 percent of IARPA’s completed research successfully transitions to government partners, according to the agency.