The Counter-deception Blog

Examples of deceptions and descriptions of techniques to detect them. This Blog encourages the awareness of deception in daily life and discussion of practical means to spot probable deceptions. Send your examples of deception and counter-deception to colonel_stech@yahoo.com.

Thursday, September 15, 2005

 

Smudged Fingerprints [long]

Truth in reporting: I worked a decade ago with Stephen Meagher at the FBI on the IAFIS development. The "50K Study" is a nice test of the IAFIS, but hardly validation of expert judgments about partial and latent fingerprint identifications. A similar study could, and should, be done on partials and latents. And every case that hangs on expert testimony about partials and latents should be required to bring an "IAFIS confusion report" (all the prints IAFIS matched to the partial) to court for cross-examination, as well as a double-blind confirmation, as described in the first article. Some progress on measuring the error rate seems to be happening. This is going to make the "either-or" stance of the experts less and less tenable. There's a "House of (Fingerprint) Cards" smell to all this.

Below are the recent article about legal challenges to fingerprint expertise, Meagher's letter, and the original article.

How far should fingerprints be trusted?

  • 17 September 2005
  • NewScientist.com news service
  • Andy Coghlan
  • James Randerson

A HIGH-profile court case in Massachusetts is once again casting doubt on the claimed infallibility of fingerprint evidence. If the case succeeds it could open the door to numerous legal challenges.

The doubts follow cases in which the testimony of fingerprint examiners has turned out to be unreliable. The most high-profile mistake involved Brandon Mayfield, a Portland lawyer, who was incorrectly identified from crime scene prints taken at one of the Madrid terrorist bombings on 11 March 2004. Despite three FBI examiners plus an external expert agreeing on the identification, Spanish authorities eventually matched the prints to an Algerian.

Likewise, Stephan Cowans served six years in a Massachusetts prison for shooting a police officer before being released last year after the fingerprint evidence on which he had been convicted was trumped by DNA.

No one disputes that fingerprinting is a valuable and generally reliable police tool, but despite more than a century of use, fingerprinting has never been scientifically validated. This is significant because of the criteria governing the admission of scientific evidence in the US courts.

The so-called Daubert ruling introduced by the Supreme Court in 1993 set out five criteria for admitting expert testimony. One is that forensic techniques must have a known error rate, something that has never been established for fingerprinting.

The reliability of fingerprinting is at the centre of an appeal which opened earlier this month at the Massachusetts Supreme Court in Boston. Defence lawyers acting for Terry Patterson, who was convicted of murdering an off-duty policeman in 1993, have launched a so-called "interlocutory" appeal midway through the case itself to test the admissibility of fingerprinting. Patterson's conviction relies heavily on prints found on a door of the vehicle in which the victim died.

A key submission to the appeal court is a dossier signed by 16 leading fingerprint sceptics, citing numerous reasons for challenging the US Department of Justice's long-standing contention that fingerprint evidence has a "zero error rate", and so is beyond legal dispute. Indeed, fingerprint examiners have to give all-or-nothing judgements. The International Association for Identification, the oldest and largest professional forensic association in the world, states in a 1979 resolution that any expert giving "testimony of possible, probable or likely [fingerprint] identification shall be deemed to be engaged in conduct unbecoming".

Material in the dossier includes correspondence sent to New Scientist in 2004 by Stephen Meagher of the FBI's Latent Fingerprint Section in Quantico, Virginia, author of a pivotal but highly controversial study backing fingerprinting. The so-called "50K study" took a set of 50,000 pre-existing images of fingerprints and compared each one electronically against the whole of the data set, producing a grand total of 2.5 billion comparisons. It concluded that the chances of each image being mistaken for any of the other 49,999 images were vanishingly small, at 1 in 10^97.

But Meagher's study continues to be severely criticised. Critics say that showing an image is more like itself than other similar images is irrelevant. The study does not mimic what happens in real life, where messy, partial prints from a crime scene are compared with inked archive prints of known criminals.
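
[Blog note: a back-of-envelope sketch of that criticism, using only the figures quoted above; this is my own arithmetic, not the study's code.]

```python
# The 2.5 billion figure is simply every one of the 50,000 images
# scored against the entire set, itself included.
n_images = 50_000
total_comparisons = n_images * n_images
print(f"{total_comparisons:,}")             # 2,500,000,000

# Only the 50,000 self-comparisons pit an image against an exact
# copy of itself; none of the comparisons tests what casework needs:
# whether two *different* impressions of the same finger match.
self_comparisons = n_images
cross_comparisons = total_comparisons - self_comparisons
print(f"{self_comparisons:,} self vs {cross_comparisons:,} cross")
```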

When New Scientist highlighted these issues in 2004 (31 January 2004, p 6), Meagher's response to our questions arrived too late for publication. He wrote that critics misunderstood the purpose of his study, which sought to establish that individual fingerprints are effectively unique - unlike any other person's print. "This is not a study on error rate, or an effort to demonstrate what constitutes an identification," he wrote (the letter can be read at www.newscientist.com/article.ns?id=dn7983). By the time New Scientist went to press, the FBI had not responded to our requests for comment.

But critics of fingerprinting have seized on this admission and included it in the dossier as evidence that the 50K study doesn't back up the infallibility of fingerprinting. "It shows that the author of the study says it doesn't have anything to do with reliability," says Simon Cole, a criminologist at the University of California, Irvine and one of the 16 co-signatories of the dossier.

Cole says that Meagher's replies to New Scientist demolish claims by the courts, the FBI and prosecution lawyers that the 50K study is evidence of infallibility. He says the letter has already helped to undermine fingerprint evidence in a recent case in New Hampshire.

Whatever the decision in the Patterson case, the pressure is building for fingerprinting's error rate to be scientifically established.

One unpublished study may go some way to answering the critics. It documents the results of exercises in which 92 students with at least one year's training had to match archive and mock "crime scene" prints. Only two of the 5861 comparisons were incorrect, an error rate of 0.034 per cent. Kasey Wertheim, a private consultant who co-authored the study, told New Scientist that the results have been submitted for publication.

But evidence from qualified fingerprint examiners suggests a higher error rate. These figures come from proficiency tests cited by Cole in the Journal of Criminal Law & Criminology (vol 93, p 985). From these he estimates that false matches occurred at a rate of 0.8 per cent on average, and in one year were as high as 4.4 per cent. Even if the lower figure is correct, this would equate to 1900 mistaken fingerprint matches in the US in 2002 alone.
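
[Blog note: a quick sanity check on those numbers; my own arithmetic, not from Cole's paper.]

```python
# If a 0.8% false-match rate implies ~1,900 mistaken matches in
# 2002, the implied volume of identifications that year was roughly:
false_match_rate = 0.008                       # 0.8% average
mistaken_matches = 1_900
implied_ids = mistaken_matches / false_match_rate
print(f"~{implied_ids:,.0f} identifications")  # ~237,500

# At the worst single-year proficiency figure of 4.4%, that same
# casework volume would imply roughly:
print(f"~{implied_ids * 0.044:,.0f} errors")   # ~10,450
```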

Examiners: objectivity called into question

Fingerprint examiners can be heavily influenced by external factors when making judgements, according to research in which examiners were duped into thinking matching prints actually came from different people.

The study, by Itiel Dror and Ailsa Péron at the University of Southampton, UK, suggests that subjective bias can creep into situations in which a match between two prints is ambiguous. So influential can this bias be that experts may contradict evidence they have previously given in court. "I think it's pretty damning," says Simon Cole, a critic of fingerprint evidence at the University of California, Irvine.

Dror and Péron arranged for five fingerprint examiners to determine whether a "latent" print matched an inked exemplar obtained from a suspect. A latent print is an impression left at a crime scene and visualised by a technique such as dusting. The examiners were also told by a colleague that these prints were the same ones that had notoriously and incorrectly been matched by FBI fingerprint examiners last year in the investigation into the Madrid bombings. That mismatch led to Portland lawyer Brandon Mayfield being incorrectly identified as one of the bombers.

What the examiners didn't know was that the prints were not from the bombing case at all. Each pair of prints, different for each examiner, had previously been presented in court by that same expert as a definite match.

Yet in the experiment only one of the experts correctly deemed their pair a match. "The other four participants changed their identification decision from the original decision they themselves had made five years earlier," says Dror. Three claimed the pair were a definite mismatch, while the fourth said there was insufficient information to make a definite decision. Dror will present the results at the Biometrics 2005 conference in London next month.

One solution, says Cole, might be for each forensics lab to have an independent official who distributes evidence anonymously to the forensic scientists. This would help to rule out any external case-related influences by forcing the scientists to work in isolation, knowing no more about each case than is necessary. At the moment fingerprint examiners asked to verify decisions made by their colleagues do not receive the evidence "blind". They already know the decision colleagues have made.
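
[Blog note: a minimal sketch of that blind-distribution idea; the identifiers and fields below are hypothetical, invented for illustration, not any actual lab system.]

```python
import secrets

def intake(evidence_id: str, case_context: dict, registry: dict) -> str:
    """Register evidence, discard case context, return an opaque ID."""
    anon_id = secrets.token_hex(8)
    # Only the intake official's registry links the opaque ID back to
    # the case; the examiner never sees case_context at all.
    registry[anon_id] = evidence_id
    return anon_id

registry: dict = {}
anon = intake("latent-2005-0031",                        # hypothetical
              {"case": "Patterson", "prior_decision": "match"},
              registry)
print(f"Examiner receives item {anon} with no case details attached")
```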

Paul Chamberlain, a fingerprint examiner with the UK Forensic Science Service who has more than 23 years' experience, says: "The FSS was aware of the need for a more robust scientific approach for fingerprint comparison." But he questions the relevance of the study. "The bias is unusual and it is, in effect, an artificial scenario," he says.

Duncan Graham-Rowe

The letter reproduced below was received on 28th January 2004 in response to questions put by New Scientist journalist James Randerson to the FBI regarding a study into the reliability of fingerprint evidence (the so-called 50K study).

The response is from the study's author Stephen Meagher and was received via Paul Bresson at the FBI's press office. We were unable to incorporate the comments into the original story, as they arrived after the magazine had gone to press.

The letter has now been used in two court cases in the US to challenge the prosecution’s interpretation of what the 50K study shows.

Mr Randerson,

The criticism of the 50K Fingerprint Study set forth in your communication reflects that the source of the criticism is ill-informed about the details of the study and has drawn inappropriate assumptions and conclusions for what the study was intended to accomplish.

First, let me state what the study is not about and that may assist in clarifying some of the criticism. This is not a study on error rate or an effort to demonstrate what constitutes an identification. Because the referenced criticism in your communication doesn’t even accurately reflect the basic purpose of the study, then all following criticisms are logically inappropriate as well.

The study was much more simple. The study was an effort to state how different is each fingerprint from all others.

The test was designed to try and find any friction ridge arrangement of a fingerprint that was the same as someone else’s fingerprint arrangement. The purpose was to try and falsify that each individual’s fingerprint is unique. The test was intentionally not designed to search one fingerprint from a person and try and find a second fingerprint from that same person amongst the 50,000. That is a study that has been performed many times before and did not need to be studied again.

The methodology used to accomplish this study was to use Automated Fingerprint Identification System technology. Unfortunately, this technology is not 100% accurate (and requires assumptions and forces limitations that fingerprint experts do not have) but it is the best available to handle large numbers of comparisons quickly. This study required 2.5 billion comparisons for each of two phases (the full fingerprint and a partial fingerprint). It is this number that needs to be looked at for significance, not the 50,000 fingerprints. On another note, the assumption that the 50,000 fingerprints represents only 5,000 individuals is also inaccurate. The test utilized 50,000 left sloped loop patterned fingerprints which reflects 14,827 different individuals.

If one wanted to design a test to do what the source of your criticism is suggesting, i.e., to try and find a fingerprint's mate amongst the 50,000 which is different from the search print, then the only conclusion to be drawn is the error rate of that specific AFIS matcher algorithm. Even this kind of test doesn't answer the question of error rate for fingerprint identifications. You can utilize 6 different AFIS matcher algorithms and you will get 6 different answers that only reflect the error rate for that specific matcher, not for the fingerprint identification process as performed by experts.

Stephen Meagher, FBI Latent Fingerprint Section, Quantico, Virginia, US.

----------------------------

Investigation: Forensic evidence in the dock

  • 19:00 28 January 2004
  • Exclusive from New Scientist Print Edition
  • James Randerson and Andy Coghlan

The UK has been troubled this past week by revelations that flawed scientific advice given to courts may have led to the wrongful conviction of hundreds of men and women accused of harming their children.

More than 250 infant death convictions, and potentially thousands of child abuse cases, are to be reviewed after judges decided that the cases may have relied too heavily on controversial and conflicting medical theories.

However, a New Scientist investigation has discovered that other, potentially flawed, forensic assumptions are still routinely being accepted by the courts. One such assumption is the supposed infallibility of fingerprint evidence, which has been used to convict countless people over the past century.

Contrary to what is generally thought, there is little scientific basis for assuming that any two supposedly identical fingerprints unequivocally come from the same person. Indeed, according to a report published in December, the only major research explicitly commissioned to validate the technique is based on flawed assumptions and an incorrect use of statistics. The research has never been openly peer reviewed.

This month, the US government also published a set of funding guidelines that rules out further studies to validate both fingerprint evidence and other existing forensic techniques presented as evidence in court. In 2003, a proposal by the US National Academies to validate such techniques collapsed after the Department of Defense and Department of Justice demanded control over who should see the results of any investigation.

Getaway car

Doubts over the reliability of fingerprint evidence were first raised in the US courts in 1999. Lawyers for Byron Mitchell, a defendant named in a robbery case, contested the admissibility of partial fingerprints found on the getaway car, which supposedly matched prints taken from Mitchell. The lawyers asked for a "Daubert hearing" - a special hearing in which judges decide for themselves the scientific validity and reliability of any forensic evidence before it is submitted.

To make a decision, judges apply five Daubert criteria to the evidence, one of which requires the technique to have a defined error rate. No such error rate existed for matching fingerprints. So the justice department commissioned the FBI and Lockheed Martin, which set up the bureau's fingerprint database, to establish one.

Only a summary of the study has ever been made available to the public. It says that there is a 1 in 10^97 chance that one fingerprint image could be erroneously matched to another. Because only around 10^11 human fingerprints have ever existed, this implies the probability of any false match is effectively zero.
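
[Blog note: the "effectively zero" conclusion follows from a simple expected-count calculation; my own reconstruction from the two quoted figures.]

```python
# Even summed over every possible pair of the ~10^11 fingerprints
# that have ever existed, a per-pair false-match chance of 1 in
# 10^97 leaves the expected number of false matches negligible.
n_prints = 1e11
pairs = n_prints * (n_prints - 1) / 2   # ~5e21 possible pairs
expected_false_matches = pairs * 1e-97
print(f"{expected_false_matches:.1e}")  # ~5.0e-76, effectively zero
```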

But a number of academic critiques of this study argue that it contains blatant methodological errors. The investigators took a set of 50,000 pre-existing images of fingerprints and made comparisons of each one against itself and all the others. Although this produced an impressive-sounding 2.5 billion comparisons, critics point out that it is hardly surprising that a specific image should turn out to be more like itself than 49,999 other images.

In real investigations, the comparison being made is quite different: forensic investigators have to match new fingerprints taken from the scene of the crime against stored fingerprints. But the study was not designed to test the match between two or more different prints of the same finger, or the likelihood that they are more similar to each other than to prints from any other person.
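
[Blog note: a toy model of why self-comparison is the wrong test; entirely my own construction, not the study's method.]

```python
import random

def similarity(a, b):
    """Fraction of minutiae-like features that agree."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

random.seed(1)
finger = [random.randint(0, 9) for _ in range(100)]  # idealized features
# A second impression of the same finger: smudged and partial, so
# roughly 30% of the features are misread.
impression = [f if random.random() > 0.3 else random.randint(0, 9)
              for f in finger]

print(similarity(finger, finger))      # 1.0: self-comparison is trivial
print(similarity(finger, impression))  # well below 1.0: the hard case
```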

"Absurd guess"

James Wayman, director of the US National Biometric Test Center at San José State University in California, also claims the sample size was too small to justify its conclusions.

"The government is comfortable with predicting the fingerprints of the entire history and future of mankind from a sample of 50,000 images, which could have come from as few as 5000 people," he argues. He has dismissed the 1097 figure cited by the FBI as an "absurd guess".

Neither the FBI, the Department of Justice nor Lockheed Martin was able to comment on the issue before New Scientist went to press.

The study does, however, provide some disturbing hints about the reliability of the more realistic comparison between different prints from the same finger. According to research published in December 2003 by David Kaye, a statistician at Arizona State University in Tempe (International Statistical Review, vol 71, p 521), the Lockheed investigators discovered three instances in which two supposedly different fingerprint images from the 50,000 looked unusually alike. By checking back, they found that each pair of prints was actually different images of the same person's finger.

The investigators excluded these from the analysis as mistakes. But despite each pair representing two images of the same print, one pair was found to be just as dissimilar as prints from different people. Two other pairs were also more dissimilar than they should have been.

"What it revealed was that prints from the same person seemed quite different," says Kaye. "They falsified the premise they were trying to prove," adds Simon Cole of the University of California at Irvine, an outspoken critic of the way fingerprint evidence is used.

Privileged position

No one is arguing that fingerprint evidence has no value. But because it is such a long-established technique, its critics say, it has never been subjected to the rigorous scientific scrutiny necessary to work out how often a bogus match is likely to come up.

What is more, fingerprint examiners occupy a privileged position not enjoyed by most experts. They routinely testify that a print left at the scene of a crime is a definite match to a suspect, with no possibility of error. Indeed the Scientific Working Group on Friction Ridge Analysis, Study and Technology (SWGFAST), which approves standards for fingerprint analysis techniques in North America, has stated: "[Fingerprint] identifications are absolute conclusions. Probable, possible or likely identification are outside the acceptable limits of the science."

But critics such as Cole argue that true science inevitably involves dealing with uncertainty. "The idea that there is something about fingerprints that is fundamentally different from any other area of human knowledge concerns me," says Jim Fraser, president of the UK's Forensic Science Society. "There have to be errors. It is a human process."

Since 1998 there have been legal challenges to at least 40 convictions in the US and UK on the basis of fingerprint evidence, including one last week in the Massachusetts Superior Court. Yet despite this, and the concerns of some experts, the US Department of Justice has so far refused to sanction studies to investigate the reliability of this and other "existing" forensic techniques.

In 1998, the department's research arm, the National Institute of Justice (NIJ), asked for proposals to validate fingerprints. According to Cole, it received four proposals, though none was funded. However, this year's "solicitation" form to attract research proposals, published on 6 January, states that "proposals to evaluate, validate or implement existing forensic technologies ... will not be funded".

Solid footing

In 2003, the US National Academies proposed a research programme to examine the scientific credibility of all existing forensic techniques, from fingerprinting and hair analysis to ballistics and lie detection. But the programme, funded by the DoD and NIJ, fell apart when the sponsors made what were seen by the academics as unreasonable demands to control dissemination and review of the material.

"I think it's censorship," says Paul Giannelli, law professor at Case Western Reserve University in Cleveland, Ohio. He believes US law enforcement authorities should ensure that all forensic techniques are placed on a solid scientific footing, even if that leads to difficulties in the short term.

Anne-Marie Mazza, director of the National Academies' Science, Technology and Law programme, says she is now looking for alternative funding for the project. "I think forensic science should become part of mainstream academic science. It should be peer reviewed, and open science communication is not something to be feared," she says. "Let's be honest, most of these techniques are solid, but there's nothing wrong with trying to find out how solid."

Others put the case for investigation more bluntly. "Various efforts to subject scientific evidence in criminal cases to a rigorous standard of scrutiny have made little progress," says Joe Cecil, a researcher at the Federal Judicial Center in Washington DC.

However, one observer, who asked not to be named, put the government's reticence in a different light. "If all of a sudden, all forensic science is doubted, what happens to all the people in jail? The whole criminal justice system could fall apart."

