Identifying Discriminatory Attitudes through Artificial Intelligence-Based Lie Detection
Nuha Mohammed, Thomas Jefferson High School for Science and Technology
This article was originally included in the 2021 print publication of the Teknos Science Journal.
I walked through the cold grocery store aisles, feeling the warmth of my face mask. That day, I came home to news headlines reporting on George Floyd's death. Questions started racing through my mind. Headline after headline, I tried piecing together what had occurred. The following day, the nation broke out in protests against racial inequality, with African Americans fighting for the civil justice they had long demanded. At the same time, our law enforcement system came under scrutiny. Why was a police officer with more than 17 misconduct allegations able to put on a uniform and kill George Floyd? Why are police officers who hold persistent racial prejudice against African Americans not held accountable for misconduct?
These questions, I realized, are not straightforward. The answers are rooted in years of systemic racism that have sown the seeds of both implicit and explicit bias against African Americans. They also prompted me to think about how police officers are tested for racist, misogynistic, or homophobic attitudes as part of the hiring process. Through my research in the Computer Systems Lab, I seek to address this issue by developing a more accurate deception-detection tool, one that could potentially be used to screen for discriminatory attitudes.
Following George Floyd's death, law enforcement officials have considered using the polygraph to test candidates for racist attitudes when they apply for positions in law enforcement [4]. The polygraph is one of the most prevalent methods of deception-detection. It monitors potential indicators of lying, such as pulse, blood pressure, and respiration, while the examiner asks the subject a series of control and relevant questions, where relevant questions pertain to the incident under investigation [1]. As one report on lie detection explains: "The important thing to realize is that most of these devices use a version of the controlled question test: I ask you your name, your age, your favorite food, and then I ask you whether you shot Abraham Lincoln at the Ford Theatre. If your answer to that question leads to a different physiological response, it may lead me to believe you're lying" [2].
These tests may detect changes from a baseline physiological response, but how does one know that the changes being measured are actually due to deception? This concern highlights the inherent subjectivity and variability of polygraph testing, which led the U.S. Supreme Court to uphold the exclusion of polygraph evidence, citing the lack of consensus on its reliability [8]. A more accurate deception-detection tool would benefit several groups, including law enforcement, the justice system, and employers, for purposes such as interrogations and background checks. In particular, detecting potential discriminatory attitudes is important for preventing police brutality and systemic racism.
Deception is a complex social behavior that polygraph testing often cannot fully capture, and it can stem from several motivations, such as financial gain or preserving self-respect. In a study conducted by Boston University researchers, subjects were told to flip coins; flipping heads would earn them five dollars [3]. About 80% of participants were dishonest, either flipping tails and reporting heads or not flipping at all and reporting heads. The researchers attributed this lying to "moral disengagement," where people rationalize lying by reasoning that everyone lies, so they should not disadvantage themselves by being honest.
Additionally, scientists have correlated deception with facial cues and behaviors such as eye movements and delayed response times [5]. However, these behavioral patterns differ from individual to individual and can be too subtle and complex for humans to assess alone. Scientists are therefore increasingly relying on artificial intelligence (AI) to correlate behavioral patterns with deception and to individualize the deception-detection process.
AI algorithms known as machine learning models learn patterns within data and apply them to predict outcomes for new, unseen scenarios. Krishnamurthy et al. (2018) trained a machine learning model on data consisting of microexpressions, vocal patterns from subjects' audio clips, and text transcripts of the audio to detect deception in real courtroom trial proceedings [6]. Their detection system outperformed most state-of-the-art models, achieving 96.14% accuracy and an AUC (Area Under the ROC Curve) of 0.9799. The researchers found that using a 3D Convolutional Neural Network to extract visual features, and combining input from multiple modalities (audio, video, and text), significantly enhanced the algorithm's performance.
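To make the idea of multimodal fusion concrete, here is a minimal sketch in PyTorch: a small 3D CNN encodes video clips, simple feed-forward layers encode precomputed audio and text features, and the three embeddings are concatenated before classification. All layer sizes, feature dimensions, and the classifier head are illustrative assumptions, not the actual architecture from Krishnamurthy et al. [6].

```python
# Minimal sketch of multimodal fusion for deception detection in PyTorch.
# Dimensions and layers are illustrative assumptions, not the paper's model.
import torch
import torch.nn as nn

class MultimodalDeceptionClassifier(nn.Module):
    def __init__(self, audio_dim=40, text_dim=300, hidden=64):
        super().__init__()
        # 3D CNN over video clips: input (batch, channels, frames, height, width)
        self.video_encoder = nn.Sequential(
            nn.Conv3d(3, 8, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),  # collapse to one 8-dim vector per clip
            nn.Flatten(),
        )
        # Simple feed-forward encoders for precomputed audio/text features
        self.audio_encoder = nn.Sequential(nn.Linear(audio_dim, hidden), nn.ReLU())
        self.text_encoder = nn.Sequential(nn.Linear(text_dim, hidden), nn.ReLU())
        # Fusion: concatenate the three modality embeddings, then classify
        self.classifier = nn.Sequential(
            nn.Linear(8 + hidden + hidden, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),  # single logit for "deceptive"
        )

    def forward(self, video, audio, text):
        fused = torch.cat(
            [self.video_encoder(video),
             self.audio_encoder(audio),
             self.text_encoder(text)],
            dim=1,
        )
        return self.classifier(fused)

# Example with random stand-in data: 2 clips of 16 RGB frames at 32x32 pixels
model = MultimodalDeceptionClassifier()
logits = model(torch.randn(2, 3, 16, 32, 32), torch.randn(2, 40), torch.randn(2, 300))
print(torch.sigmoid(logits))  # per-clip probability of deception
```

In practice, each encoder would be far deeper and likely pretrained, but the fusion step, concatenating per-modality embeddings before a shared classifier, is the core idea.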
Furthermore, a study by Sen et al. (2018) showed that deception may be correlated not only with the subject's facial expressions but also with those of the interrogator [7]. The researchers recorded 151 dyadic conversations using an online framework, where each conversation occurred between a unique interrogator-subject pair and followed the research protocol. The data indicated that interrogators who were lied to in a two-sided conversation expressed a lip-corner-puller expression (related to a smile) more often than interrogators who were told the truth, demonstrating that certain behavioral variables in interrogators may be statistically significant indicators of deception by the subject in question.
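The claim of statistical significance rests on comparing expression frequencies across the two interrogator groups. The sketch below shows one common way such a comparison could be run, a two-sample t-test with SciPy; the numbers are random stand-ins, not data from Sen et al. [7].

```python
# Hedged sketch of the kind of comparison behind the finding in Sen et al. [7]:
# do interrogators who were lied to show the lip-corner-puller expression
# (facial action unit AU12) more often than those told the truth?
# All values below are random stand-ins, not the study's data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical per-interrogator frequencies of AU12 activation
au12_when_lied_to = rng.normal(loc=0.30, scale=0.08, size=75)
au12_when_told_truth = rng.normal(loc=0.22, scale=0.08, size=76)

# Two-sample t-test: is the difference in group means statistically significant?
t_stat, p_value = stats.ttest_ind(au12_when_lied_to, au12_when_told_truth)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")  # p < 0.05 suggests significance
```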
AI offers immense potential to learn the underlying patterns in facial expressions associated with lying. Having taken courses in AI, computer vision, and machine learning, I am interested in leveraging these concepts to create a more accurate deception-detection system. Specifically, I plan to incorporate multimodal analysis into my algorithm, combining audio and visual input to detect lies. Candidates could be asked questions about racial prejudice, and the system could potentially flag racial bias by identifying deceptive responses.
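Whatever form such a system takes, its reliability should be measured on held-out data. Below is a brief sketch of computing the AUC metric cited earlier for Krishnamurthy et al. [6] using scikit-learn; the labels and scores are randomly generated stand-ins, not real evaluation results.

```python
# Hedged sketch of evaluating a deception classifier with AUC in scikit-learn.
# Labels and scores are random stand-ins for a real held-out test set.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
y_true = rng.integers(0, 2, size=200)  # 1 = deceptive, 0 = truthful
y_score = np.clip(y_true * 0.6 + rng.normal(0.2, 0.25, size=200), 0, 1)

# AUC: probability that a random deceptive example scores higher than a truthful one
print(f"AUC = {roc_auc_score(y_true, y_score):.3f}")
```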
Although scientists are exploring AI for deception-detection, controversy remains over the reliability of such systems. Some researchers, like Ewout Meijer, a psychology professor at Maastricht University, argue that AI algorithms are too unstable and unreliable to assess the complexities involved in a lie [2]. Jake Bittle (personal communication, Feb. 17, 2021) holds a similar skepticism: "the human face is one of the most intricate muscular complexes that exist in nature, and trying to pin any of its movements onto an internal mental state is probably a mug's game." Furthermore, both Bittle and Meijer note that AI can pose ethical concerns: if training data is limited and unrepresentative of minorities, a detection algorithm can misjudge cases involving racial and ethnic minorities. I hope that future deception-detection algorithms will be trained on larger and more diverse datasets, creating systems that can fairly and ethically assess deception while helping our society become more welcoming and safe for people of all backgrounds.
References
[1] American Psychological Association. (2004). The Truth About Lie Detectors (aka Polygraph Tests). In Research in Action. Retrieved January 31, 2021, from https://www.apa.org/research/action/polygraph
[2] Bittle, J. (2020). Lie detectors have always been suspect. AI has made the problem worse. MIT Technology Review. https://www.technologyreview.com/2020/03/13/905323/ai-lie-detectors-polygraph-silent-talker-iborderctrl-converus-neuroid/
[3] Carey, B. (2020, September 15). The Good, the Bad and the 'Radically Dishonest'. New York Times. https://www.nytimes.com/2020/09/15/science/psychology-dishonesty-lying-cheating.html
[4] Erickson, J. (2020, August 4). Police Accountability Act passes, will it weed out bad cops? Insight News. Retrieved February 17, 2021, from https://www.insightnews.com/news/metro/police-accountability-act-passes-will-it-weed-out-bad-cops/article_17188052-d6a9-11ea-8083-bb9856678523.html
[5] Gonzalez-Billandon, J., Aroyo, A. M., Tonelli, A., Pasquali, D., Sciutti, A., Gori, M., Sandini, G., & Rea, F. (2019). Can a Robot Catch You Lying? A Machine Learning System to Detect Lies During Interactions. Frontiers in Robotics and AI, 6. https://doi.org/10.3389/frobt.2019.00064
[6] Krishnamurthy, G., Majumder, N., Poria, S., & Cambria, E. (2018). A Deep Learning Approach for Multimodal Deception Detection. arXiv e-prints. https://arxiv.org/abs/1803.00344
[7] Sen, T., Hasan, M. K., Teicher, Z., & Hoque, M. E. (2018). Automated Dyadic Data Recorder (ADDR) Framework and Analysis of Facial Cues in Deceptive Communication. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, 1(4), 1-22. https://doi.org/10.1145/3161178
[8] United States v. Scheffer, 523 U.S. 303 (Mar. 31, 1998). https://supreme.justia.com/cases/federal/us/523/303/