
28 October 2022

Looking for Lies: An Exploratory Analysis for Automated Detection of Deception

Marek N. Posard, Christian Johnson, Julia L. Melin

Security clearance investigations are onerous for applicants and investigators alike, and they are expensive for the U.S. government. In this report, the authors present results from an exploratory analysis that tests automated tools for detecting when some of these applicants attempt to deceive the government during the interview portion of the process. How interviewees answer questions could be a useful signal for detecting when they are trying to be deceptive.

Key Findings

Models that used word counts were the most accurate at predicting which participants were trying to be deceptive and which were truthful (an illustrative sketch follows these findings).

The authors found similar accuracy rates for detecting deception whether interviews were conducted over video teleconferencing or text-based chat.

Machine learning (ML) transcription is generally accurate, but errors occur, and ML methods often miss subtle features of informal speech.

Although models that used word counts produced the highest accuracy rates across all participants, there was evidence that these models were more accurate for men than for women.
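The report does not publish its modeling code, but a word-count approach of this kind is essentially a bag-of-words classifier. The Python sketch below is purely illustrative: it assumes a scikit-learn pipeline and uses hypothetical interview responses and labels, and it is not the authors' actual model.

# Minimal illustrative sketch of a word-count (bag-of-words) deception classifier.
# The responses and labels below are hypothetical placeholders; the report's
# actual data and modeling pipeline are not public.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical interview responses: 1 = instructed to deceive, 0 = truthful.
responses = [
    "I have never used any illegal substances.",
    "I traveled abroad once, for a family wedding.",
    "I honestly cannot recall ever meeting that person.",
    "I reported all of my foreign contacts on the form.",
    "To be completely honest, nothing like that ever happened.",
    "I lived at that address for three years while in school.",
]
labels = [1, 0, 1, 0, 1, 0]

# Word counts as features, followed by a simple linear classifier.
model = make_pipeline(CountVectorizer(), LogisticRegression())
model.fit(responses, labels)

# Score a new (also hypothetical) response.
print(model.predict_proba(["Honestly, I never met anyone like that."]))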

Recommendations

The federal government should test ML modeling of interview data that uses word counts to identify attempts at deception.

The federal government should test alternatives to the in-person security clearance interview method—including video teleconferencing and chat-based modes—for certain cases.

The federal government should test the use of asynchronous interviews via text-based chat to augment existing interview techniques. The data from an in-person (or virtual) interview, combined with chat data, could help investigators identify topics of concern that merit further investigation.

The federal government should use ML tools to augment existing investigation processes by conducting additional analysis on pilot data, but it should not replace existing techniques with these tools until they are sufficiently validated.

The federal government should validate any ML models that it uses for security clearance investigations to limit bias in accuracy rates across interviewees' ascribed characteristics (e.g., race, gender, age); an illustrative check follows these recommendations.

The federal government should have a human in the loop to continuously calibrate any ML models used to detect deception during the security clearance investigation process.
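The report does not prescribe a specific validation procedure. One straightforward check, sketched below with hypothetical data in Python and pandas, is to compute a model's accuracy separately for each subgroup of an ascribed characteristic and flag large gaps between groups.

# Hypothetical sketch: checking whether a deception model's accuracy differs
# across an ascribed characteristic (here, self-reported gender).
# The data and field names are illustrative, not drawn from the report.
import pandas as pd
from sklearn.metrics import accuracy_score

results = pd.DataFrame({
    "gender":    ["man", "woman", "man", "woman", "man", "woman"],
    "truth":     [1, 0, 1, 1, 0, 0],   # 1 = deceptive, 0 = truthful
    "predicted": [1, 1, 1, 0, 0, 0],
})

# Accuracy computed separately for each subgroup; large gaps between groups
# would signal the kind of bias this recommendation warns about.
for group, frame in results.groupby("gender"):
    acc = accuracy_score(frame["truth"], frame["predicted"])
    print(f"{group}: accuracy = {acc:.2f}")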
