The ACLU and several other public interest groups have filed a brief with the Florida Supreme Court urging it to review the use of automated facial recognition in criminal proceedings.
In 2015, two undercover cops purchased $50 of crack cocaine from a Black man on a Jacksonville street. Instead of arresting him on the spot, one officer used an old phone to snap several photos of him. Trying to be discreet, the officer took the photos while holding the phone to his ear and pretending to be on a call. Needless to say, the resulting photos were not headshot quality.
Later, after failing to identify the man themselves, the officers emailed the photos to a crime analyst, who used a statewide face recognition system to see if one of the photos looked like any mugshots in the county’s database. The program spit out several possible matches, the first of which was Willie Lynch, who was soon arrested and charged with the drug sale.
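For readers who haven't seen how these systems typically produce such a candidate list, here's a minimal sketch of the general approach (face embeddings compared by similarity, with the closest mugshots returned first). The function and variable names are hypothetical, and this is not a description of how FACES specifically works.

```python
import numpy as np

def rank_candidates(probe_embedding, mugshot_embeddings, mugshot_ids, top_k=5):
    """Rank gallery mugshots by cosine similarity to the probe photo's embedding.

    The embeddings themselves would come from some face-recognition model
    (not shown); this only illustrates the ranked-candidate-list step.
    """
    probe = probe_embedding / np.linalg.norm(probe_embedding)
    gallery = mugshot_embeddings / np.linalg.norm(mugshot_embeddings, axis=1, keepdims=True)
    scores = gallery @ probe                  # cosine similarity for each mugshot
    order = np.argsort(scores)[::-1]          # highest similarity first
    return [(mugshot_ids[i], float(scores[i])) for i in order[:top_k]]

# A blurry probe photo still produces a ranked list of "possible matches," each
# with a score -- nothing guarantees the top entry is actually the right person.
```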
The ACLU argues that this is a problem:
If the government uses an error-prone face recognition system to identify you as the perpetrator of a crime, you have a constitutional right to probe its accuracy before you are convicted based on its results. But amazingly, a Florida appeals court disagrees.
If a cop stops you and makes you breathe into an Intoxilyzer 9000, the results issued by that machine can be used in court as evidence of your blood alcohol content. On the other hand, if the cop whips out a cheap personal-use portable breathalyzer and you blow into it, the results aren’t admissible in court because the device hasn’t been approved for evidential use. (This is a simplification, I’m not a lawyer, don’t blame me if you get convicted, etc.) However, the cop can use the results of that non-evidential test to decide if he wants to go to the trouble of getting you to take an official test.
I can’t quite tell from the descriptions of the Florida case whether the FACES facial recognition system was used more like the Intoxilyzer 9000 or the cheap portable breath tester. That is, were the results of the facial recognition program entered into evidence (“We know it was the defendant because the facial recognition software says so”)? Or did the prosecutor have the cops make the ID themselves (“We recognize this guy as the individual we bought the drugs from”), with the facial recognition software being just the tool they used to find his identity (name, address, etc.) so they could go and identify him as the drug dealer on their own?
If the former, that is, if facial recognition was used as evidence, then I think it’s pretty clear the technology can be challenged by the defense, as breathalyzer evidence often is. Given that the county’s use of facial recognition software came as a surprise to many criminal defense lawyers in the area, I suspect it was standard practice for prosecutors not to reveal that law enforcement agencies were regularly using the technology in their investigations.
Even if the facial recognition results were not introduced as evidence, the ACLU argues they should have been made available to the defense as Brady material:
If any of this information had come from a human witness—or, in the case of the analyst’s suggestive submission to the investigators, a lineup—it would clearly be Brady material. For example, FACES identified Mr. Lynch and several other people with similar confidence as the perpetrator. Had an eyewitness done so, the state would be unquestionably obligated to disclose the identification of the alternate suspects.
[citations omitted]
That makes a lot of sense to me. There is evidence that a witness’s ability to accurately identify a suspect by looking at a photo is impaired if the photo is presented improperly. Showing only one photo is highly suggestive, and the image in the photo is likely to supplant the witness’s actual memory. So the defense needs to know the details of how the defendant was identified if they are to put on an effective defense.
That said, as a software engineer, I’m uncomfortable with how far the ACLU wants to take this:
Prosecutorial misconduct and police adoption of face recognition technology are dangerous, and the ACLU has been pushing to halt both. Until that happens, prosecutors must give defendants full access to information about the algorithm used against them in places where face recognition technology has already been deployed. This includes the underlying model, training data, computer code, explanatory documentation, and any other results from which the final, reported result was chosen. Any validation studies should also be available as well as the opportunity to question the people who use and created the system.
Where I come from, turning over the source code to a software product is huge. The intellectual property value of even a moderately sized software system could run into the millions of dollars. And once the source code gets into a courtroom, it’s not unheard of for it to be released to the public:
In July 2016, Judge Valerie Caproni of the Southern District of New York determined in U.S. v. Johnson that the source code of the Forensic Statistical Tool, a genotyping software, “is ‘relevant … [and] admissible’” at least during a Daubert hearing—a pretrial hearing where the admissibility of expert testimony is challenged. Caproni provided a protective order at that time.
This week, Caproni lifted that order after the investigative journalism organization ProPublica filed a motion arguing that there was a public interest in the code.
If anyone wants to know what that source code looks like, it’s available here. As I write this, people have made 49 additional public copies. It’s copyrighted, of course, but I have no idea what the limits are on information released through the courts like this.
In addition, I’m not really thrilled about the idea that defense lawyers could routinely subpoena me every time some police agency uses something I wrote.
That said, I can certainly see where defense lawyers are coming from. When you’re fighting a DUI charge where an Intoxilyzer 9000 was used, it makes sense to want to know how an Intoxilyzer 9000 works, and that includes information about the source code for its software. That same thinking applies if your client was fingered by a FACES hit. It just kind of makes sense that software created for use by law enforcement should be subject to discovery and examination as part of any criminal proceeding resulting from its use.
But what about general purpose software? What if the forensic analyst used Photoshop to clean up the phone image before sending it to the facial recognition system? If lawyers can get the software for the facial recognition system, can they also force Adobe to turn over Photoshop source code that cost hundreds of millions of dollars to develop? That seems insane.
And what happens next time? Is there some sort of precedent established? Or does the software owner have to keep giving it up in court? And what about maintenance? I could see a defense lawyer demanding to examine the FACES source code, only to have the prosecution argue that the court had found the FACES software acceptable in a previous case, to which the defense lawyer responds, “That was version 3.5.4, this case is about version 3.6.2.”
I sort of assume that the legal system has some of this figured out already, because we’ve been using software systems for a long time, and the issue doesn’t seem to come up often. When prosecutors want a phone’s call detail records admitted into evidence, the defense doesn’t usually get to examine the billing software the phone company used to produce them. Or in the case discussed above, nobody seems to be asking the cellphone manufacturer for the source code of the software running on the phone that created the image.
In 2012, a defendant in a cold case was prosecuted on a DNA hit that was characterized as “1.62 quintillion times more probable than a coincidental match to an unrelated black person.” The defense wanted to make the state prove that those insanely high odds were accurate, so they requested the source code to the DNA matching algorithm.
The trial court determined that Chubbs was entitled to examine the source code under protective order.
This decision was overturned on appeal in 2015. The appeals court said that Chubbs’ stated reasons to access the source code, even under protective order, did not outweigh trade secret protections. Further, as the court writes, “access to TrueAllele’s source code is not necessary to judge the software’s reliability,” because validation studies and expert testimony are sufficient to make that determination.
That sounds like a reasonable conclusion. I’m a software engineer, and if you asked me to evaluate the accuracy of a piece of scientific software, I would be far more interested in the testing methodology than the source code. After all, the reason the software industry does so much testing is that you can’t judge the quality of software by code inspection alone.
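As a rough illustration of what I mean by testing methodology: a validation study boils down to running the software against inputs whose correct answers are already known and measuring how often it gets them wrong. The sketch below is hypothetical; `analyze` stands in for whatever software is under test, and `labeled_cases` for the known-answer inputs.

```python
def validation_study(analyze, labeled_cases, tolerance=0.01):
    """Run the software under test against cases with known answers.

    `analyze` and `labeled_cases` are hypothetical placeholders: the function
    being evaluated and a list of (input, expected_result) pairs.
    """
    failures = []
    for sample, expected in labeled_cases:
        result = analyze(sample)
        if abs(result - expected) > tolerance:
            failures.append((sample, expected, result))
    error_rate = len(failures) / len(labeled_cases)
    return error_rate, failures

# An error rate measured this way says more about reliability than reading the
# code ever could, because it captures bugs you would never spot by inspection.
```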
Still, it’s not as if DNA software is perfect. I seem to recall (I can’t find a link) that at least one DNA matching program was found to have inaccurately coded population statistics, causing it to miscalculate the odds of a random match. That is exactly the sort of thing the defense team would like to discover, because it’s also exactly the sort of thing that gets innocent people convicted.
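To see why that matters, here is a toy calculation (the allele frequencies are made up, and real DNA statistics are far more involved) showing how a small, systematic error in the population statistics compounds across loci and shifts the reported odds by several orders of magnitude.

```python
# Toy illustration only: the frequencies below are invented, and a real
# random-match calculation is far more involved than a product over loci.

def random_match_probability(allele_freqs):
    """Probability that a random, unrelated person matches at every locus,
    treating each locus as a homozygous genotype (freq ** 2) for simplicity."""
    p = 1.0
    for freq in allele_freqs:
        p *= freq ** 2
    return p

correct = [0.05] * 13    # hypothetical allele frequencies at 13 loci
miscoded = [0.10] * 13   # the same loci with frequencies coded twice too high

print(f"{1 / random_match_probability(correct):.1e}")   # ~6.7e33 to 1 against a coincidental match
print(f"{1 / random_match_probability(miscoded):.1e}")  # ~1.0e26 to 1, nearly eight orders of magnitude lower
```

The point isn’t this particular formula; it’s that an error like the doubled frequencies above is invisible in the reported result and only shows up if someone can check how the statistics were coded or validated.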