An article analyzing Amazon’s public statements on the accuracy of facial recognition technology claims that the company “has obfuscated the very real risks” associated with it.
The Project on Government Oversight (POGO) article presents a timeline of Amazon’s guidance, starting with a blog post describing a use case with the confidence threshold set at 85 percent in June 2017. Following an experiment by the ACLU which provocatively matched black members of Congress with criminal mugshots using Rekognition, the company noted that it guides customers to use a confidence threshold of at least 95 percent, and that it recommends 99 percent for the kind of comparison performed by the ACLU. When MIT researcher Joy Buolamwini presented research indicating possible racial and gender bias issues in the technology, Amazon disputed the findings and reiterated its recommended confidence threshold of 99 percent.
The Washington County Sheriff’s Office, one of the few law enforcement agencies known to be using Rekognition, then admitted it does not use a confidence threshold at all, and a pair of AWS representatives expressed support for its approach on Twitter in January. The company nonetheless reiterated the importance of the 99 percent confidence threshold when revealing its guidelines for responsible use. The timeline presented by POGO concludes with a letter written by researchers urging Amazon to stop selling Rekognition to law enforcement, and criticizing its responses to previous criticism and third-party tests.
The article discusses the nature and importance of confidence thresholds, and notes that results are generally displayed as a limited candidate list, an unrestricted candidate list, or a single match response.
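A short sketch may help illustrate why the threshold setting matters. The code below is not the Rekognition API; it is a hypothetical face-matching result set with made-up names and confidence scores, showing how the three display modes interact with a threshold, and how dropping the threshold to zero surfaces weak matches:

```python
# Illustrative sketch only -- hypothetical candidates and scores,
# not output from Amazon Rekognition.

def unrestricted_candidate_list(matches, threshold):
    """Return every candidate at or above the confidence threshold."""
    return [m for m in matches if m["confidence"] >= threshold]

def limited_candidate_list(matches, threshold, max_results=3):
    """Return at most max_results candidates, strongest first."""
    passing = unrestricted_candidate_list(matches, threshold)
    return sorted(passing, key=lambda m: m["confidence"], reverse=True)[:max_results]

def single_match(matches, threshold):
    """Return only the top candidate, or None if nothing clears the threshold."""
    passing = unrestricted_candidate_list(matches, threshold)
    return max(passing, key=lambda m: m["confidence"]) if passing else None

candidates = [
    {"name": "subject_a", "confidence": 99.2},
    {"name": "subject_b", "confidence": 96.5},
    {"name": "subject_c", "confidence": 87.1},
]

# At the 99 percent threshold Amazon recommends for law enforcement,
# only one candidate survives; with no threshold, all three are shown
# to the operator, including the weak 87.1 percent match.
strong = unrestricted_candidate_list(candidates, 99)
everything = unrestricted_candidate_list(candidates, 0)
```

In this sketch, an agency that sets no threshold sees every candidate the system can produce, however tenuous, which is the practice POGO's timeline highlights at the Washington County Sheriff's Office.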
“If law enforcement entities are going to use facial recognition, giving law enforcement, lawmakers, and the public a clear picture of how well (or poorly) the technology works is essential for both civil rights and civil liberties, as well as public safety,” concludes Jake Laperruque, senior counsel of POGO’s The Constitution Project.
Alexa’s annotators under-appreciated
The contrast between artificial intelligence’s hype and reality has been exposed by the news that a large team of Amazon employees is dedicated to listening in on people’s interactions with its digital personal assistant, according to an opinion piece written by Russian journalist Leonid Bershidsky for Bloomberg.
Bershidsky points out that even robust AI speech recognition systems rely on constant human annotation to keep up with slang, shifts in accents, and cultural phenomena. Saying so has less marketing ring to it than phrases like “deep learning,” however. He also points out that the access given to Amazon’s annotating employees is similar to that of Amazon Ring employees who, The Intercept reported earlier this year, were watching footage obtained both outside and inside of clients’ homes to annotate videos. Amazon’s response in both cases was that it has “zero tolerance for abuse of our systems,” Bershidsky writes.
The proper response, in the journalist’s opinion, is for companies to be open about the human role in AI. This would demystify AI and could negatively impact some products, he says, but a market would still exist, and without its “dirty secret.”
Hopefully, increasing attention to specific aspects of AI, such as how confidence thresholds and the annotation process work, will improve the maturity of social dialogue about the technology and its applications.