Here’s a facial-recognition algorithm that critics say shouldn’t be taken at face value.
On Tuesday, a group of more than 1,000 tech professionals — including artificial intelligence, machine learning, law and anthropology researchers — published a public letter bashing a forthcoming paper detailing the development of a facial-recognition program whose creators claimed it could predict whether someone would be a criminal.
The issue? The letter’s many signers agreed that criminality can’t be predicted without prejudice, rejecting the paper’s claim of “80% accuracy and with no racial bias” and comparing the method to long-debunked “race science.”
The paper — by two professors and a graduate student at Harrisburg University in Pennsylvania — was set to be published by Springer Nature in an upcoming collection, Wired reports.
However, a May press release from the university teasing its publication has since been deleted — and Springer Nature tweeted that it won’t publish the paper.
In a follow-up statement to The Post, reps for Springer Nature said, “We acknowledge the concern regarding this paper and would like to clarify at no time was this accepted for publication. It was submitted to a forthcoming conference for which Springer will publish the proceedings of in the book series Transactions on Computational Science and Computational Intelligence and went through a thorough peer review process. The series editor’s decision to reject the final paper was made on Tuesday 16th June and was officially communicated to the authors on Monday 22nd June.”
The 1,000-plus signers of the letter, collectively calling themselves the Coalition for Critical Technology, say the Harrisburg University study makes claims that “are based on unsound scientific premises, research, and methods which . . . have [been] debunked over the years.” They add that, because of racial biases in US policing, any new algorithm purporting to predict criminality will inevitably reproduce those systemic biases.
This isn’t the first time facial-recognition technology has caused alarm. In June, Amazon banned police from using its facial-recognition software, Rekognition, for a year, to give Congress time to regulate the technology. Studies have shown that Rekognition misidentifies African-American and Asian people more frequently than white people. And in late 2019, a US government-led study concluded that facial-recognition programs misidentify people of color more often, finding “demographic differentials” that leave them more vulnerable to false accusations.
“Crime is one of the most prominent issues in modern society,” Harrisburg Ph.D. student Jonathan W. Korn — a former New York police officer — said in the since-deleted press release, Wired reports. “The development of machines that are capable of performing cognitive tasks, such as identifying the criminality of [a] person from their facial image, will enable a significant advantage for law enforcement agencies and other intelligence agencies to prevent crime from occurring in their designated areas.”
Korn and another co-author of the paper, Nathaniel Ashby, didn’t respond to Wired’s requests for comment. Springer Nature also didn’t respond to Wired’s request.