Mike Elgan
Contributing Columnist

Tell the truth: Do you want AI lie detectors in the workplace?

Opinion
Jul 31, 2024 · 6 mins
Generative AI, Technology Industry

The idea is controversial and raises a host of questions, such as: do AI lie detectors even work?


CVS, the pharmacy company, recently settled a class-action lawsuit alleging that it used AI lie detector tests during job interviews without informing prospective employees. (The terms of the settlement were not disclosed.)

According to the complaint, CVS used HireVue video interview technology and Affectiva’s AI analysis product.

HireVue is an AI HR product designed to benefit both prospective hires and the companies doing the hiring.

According to the company, would-be employees get two benefits: AI can reduce bias in hiring (although HireVue removed facial analysis in 2020 over concerns about potential bias), and interview questions can be set in advance so the candidate can complete the interview with AI at any time. (The software also facilitates real-time interviews with real people.)

The platform integrates with HR systems and tools, including Microsoft Teams, LinkedIn, and Salesforce. 

HireVue claims that its product enables teams to make better hires much faster. Videos are recorded, and AI analyzes them to rate verbal and non-verbal cues such as word choice, tone of voice, and facial expressions.

The AI features, developed by a team of data scientists and industrial-organizational psychologists, are a trade secret, but they amount to some form of machine learning trained on past interviews and follow-up data about which hires worked out.

Affectiva’s Emotion AI technology, integrated with HireVue’s video interview platform to enhance HireVue’s AI, was designed to track and interpret various facial expressions such as smiles, surprise, contempt, disgust, and smirks. This analysis contributed to generating an “employability score” for each candidate, which included assessments of traits like “conscientiousness and responsibility” and an “innate sense of integrity and honor,” according to the lawsuit, which claimed the software amounted to a “lie detector.”

Europe is planning a lie detector test for entry

The European Union is planning to use an AI lie detector system called iBorderCtrl for travelers entering EU countries.

iBorderCtrl is an AI software tool designed to analyze facial movements and body language and flag suspicious behavior to immigration officers. Critics argue that the system will discriminate against people with disabilities or anxious personalities, yet it could be implemented as early as Oct. 6, 2024.

Can AI really tell whether someone is lying?

It’s not clear whether the allegation against CVS involved actual lie detection, and even less clear whether the software can actually judge integrity and other qualities in a person. 

But AI can actually do a pretty good job with lie detection, according to a recent study by Alicia von Schenk and her colleagues at the University of Würzburg in Germany.

The researchers developed an AI tool trained using Google’s BERT language model on a dataset of 1,536 statements about weekend plans, with half being “incentivized lies.” The AI system successfully identified true and false statements 67% of the time (which is better than humans, who typically achieve only about 50% accuracy).
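For readers curious what building such a tool looks like in practice, here’s a minimal sketch of the general technique: fine-tuning a BERT model as a binary true/false statement classifier with the Hugging Face transformers library. The toy data, the "bert-base-uncased" checkpoint, and the hyperparameters are illustrative assumptions, not details from the von Schenk study.

```python
# Minimal sketch: fine-tuning BERT as a binary "lie detector" classifier.
# The toy data, checkpoint and hyperparameters are illustrative assumptions;
# they are not the actual setup used in the von Schenk et al. study.
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)
from datasets import Dataset

# Toy data: statements labeled 1 (lie) or 0 (truth).
data = Dataset.from_dict({
    "text": ["I spent the weekend hiking.", "I visited my grandmother."],
    "label": [1, 0],
})

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2)  # two classes: truth vs. lie

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True,
                     padding="max_length", max_length=64)

data = data.map(tokenize, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="lie-detector",
                           num_train_epochs=3,
                           per_device_train_batch_size=8),
    train_dataset=data,
)
trainer.train()
```

In a real study, a held-out test split would then be scored to produce an accuracy figure like the 67% reported above.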

The researchers claim such a tool could be used for identifying fake news and disinformation on social media, for detecting exaggerations or lies in job applications and interviews, and for other purposes.

Another research group, from the IMT School for Advanced Studies Lucca and the University of Padua, developed generative AI (genAI) that can identify lies in written texts with an accuracy of 80%, according to the researchers.

With the rise in publicly available genAI tools, we can expect research in the area of AI lie detection to grow. 

The trouble with AI lie detection

You can see the problem with AI lie detection very clearly in these numbers. Assuming the University of Würzburg AI is roughly as accurate as the AI used in hiring or border patrol, you can see how it might improve hiring — and ruin border patrol. 

With hiring, the majority of candidates are rejected and one is selected, normally with human judgment. If AI judges better, a company might make better hires, on average.

But at the border, all arriving travelers are admitted unless agents find some reason to reject them. If AI throws up red flags and false positives at scale based on a 67% success rate, many otherwise acceptable travelers might get turned away. A quick calculation below makes the point concrete.
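Here’s a back-of-the-envelope sketch of that base-rate problem. The traveler volume and the share of actual liars are assumptions chosen for illustration, and the study’s 67% is treated as the rate of correct calls for both liars and truth-tellers.

```python
# Back-of-the-envelope base-rate math for a 67%-accurate detector.
# Traveler volume and liar prevalence below are illustrative assumptions.
travelers = 100_000
liar_rate = 0.01            # assume 1% of travelers are actually lying
accuracy = 0.67             # assume 67% correct for liars and truth-tellers alike

liars = travelers * liar_rate
honest = travelers - liars

true_flags = liars * accuracy            # liars correctly flagged (~670)
false_flags = honest * (1 - accuracy)    # honest travelers wrongly flagged (~32,670)

print(f"Correctly flagged liars:  {true_flags:,.0f}")
print(f"Honest travelers flagged: {false_flags:,.0f}")
print(f"Share of flags that are false: "
      f"{false_flags / (true_flags + false_flags):.0%}")  # ~98%
```

Under these assumptions, nearly all flags are false ones, which is exactly why a tool that helps on average in hiring can be ruinous at a border.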

The European plan is already the subject of lawsuits, and I would be surprised if it’s rolled out anytime soon.

It’s also safe to assume that AI lie detection software might get much, much better. And then what? Such a tool might prove invaluable in war for interrogating prisoners, and also for espionage purposes. It could be great for finding moles and double agents in spy organizations, and for finding and identifying terrorists. 

Someday, AI lie detection might become standard in hiring and even be applied to everyday office communications, including emails, texts and other channels.

Personally, I would love to see much better AI applied to social media posts, so that disinformation could be flagged by default. Given sufficiently good lie-detection software applied at scale, social networks could go from the least reliable to the most reliable sources of information.

And there are a host of applications for tomorrow’s high-quality AI lie detection — including as a feature of AI glasses. (Imagine seeing a literal red flag in your glasses when the person in front of you is telling a lie.)

But I doubt lie detection software will ever be accepted in an office environment for ordinary everyday employees. It’s far too intrusive, creepy and problematic. And it greases the slippery slope toward AI taking on the role of validating humans. If AI is effectively determining our productivity, our integrity and other qualities, then people are basically working at the pleasure of the machines — the stuff of dystopian science fiction.

In the meantime, we’re going to see lots of headlines over the next few years about AI lie detection — and more of what we’ve seen this month: lawsuits, research and huge plans for implementation in security settings. 

I’m not going to lie — the whole trend is both fascinating and horrible.