A test conducted by the ACLU (American Civil Liberties Union) had a disturbing outcome, even allowing for the non-ideal circumstances in which it was carried out. The organization used Amazon's facial recognition platform, Rekognition, to compare the faces of the 535 members of the US Congress against a database of arrest photos. The result: 28 of them were falsely matched with criminals wanted by the police.
The ACLU's goal was to show how dangerous such a system could be: relying on AI-powered facial recognition platforms can lead to misidentifications and false accusations.
To make matters worse, in the organization's test, 39% of the Black members analyzed were flagged as criminals, while the error rate among white members was only 5%.
In defense of its system, Amazon pointed to flaws in the ACLU's methodology. The first concerns the confidence threshold for declaring a match: the organization set it at 80%, while Amazon recommends 95% for law enforcement cases, which makes the system far more selective. Amazon also noted that, in police work, a human analyst should always review the results before any action is taken against a suspect.
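The effect of that threshold can be illustrated with a minimal sketch. The names, scores, and the `matches_above` helper below are hypothetical, purely for illustration; this is not real Rekognition output or API code, just a demonstration of how raising a confidence cutoff filters out weaker candidate matches.

```python
# Hypothetical similarity scores from a face-matching system.
# These values are invented for illustration; they are not Rekognition data.
candidate_matches = [
    {"subject": "Person A", "mugshot": "arrest_001", "confidence": 0.82},
    {"subject": "Person B", "mugshot": "arrest_042", "confidence": 0.97},
    {"subject": "Person C", "mugshot": "arrest_107", "confidence": 0.85},
]

def matches_above(candidates, threshold):
    """Keep only candidate matches at or above the confidence threshold."""
    return [m for m in candidates if m["confidence"] >= threshold]

# At the 80% threshold the ACLU used, all three candidates count as matches.
print(len(matches_above(candidate_matches, 0.80)))  # 3
# At the 95% threshold recommended for law enforcement, only one remains.
print(len(matches_above(candidate_matches, 0.95)))  # 1
```

The point is that a lower threshold trades precision for recall: more faces are "recognized", but more of those recognitions are false positives of exactly the kind the ACLU test produced.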
In any case, this is not a matter of accusing the ACLU of rigging the test to discredit facial recognition; after all, there is no law in place regulating the practice, which is, of course, extremely recent. What the organization wants to show is that the technology can be very dangerous, potentially incriminating innocent people.