A new paper by Greg Mitchell and me, forthcoming next year in Penn. L. Rev., is available as of today on SSRN here. The abstract is below, and our thesis can be summarized very briefly: expertise = proficiency.
Expert evidence plays a crucial role in civil and criminal litigation. Changes in the rules concerning expert admissibility, following the Supreme Court’s Daubert ruling, strengthened judicial review of the reliability and validity of an expert’s methods. However, judges and scholars have neglected the threshold question for expert evidence: whether a person should be qualified as an expert in the first place. Judges traditionally focus on credentials or experience when qualifying experts, without regard to whether those criteria are good proxies for true expertise. We argue that credentials and experience are often poor proxies for proficiency. Qualification of an expert presumes that the witness can perform in a particular domain with a proficiency that non-experts cannot achieve, yet many experts cannot provide empirical evidence that they do in fact perform at high levels of proficiency. To demonstrate the importance of proficiency data, we collect and analyze two decades of proficiency testing of latent fingerprint examiners. In this important domain, we found surprisingly high rates of false positive identifications for the period 1995 to 2016. These data would falsify the claims of near-infallibility made by many fingerprint examiners, but unfortunately, judges do not seek out such information. We survey the federal and state case law and show how judges typically accept expert credentials as a proxy for proficiency in lieu of direct proof of proficiency. Indeed, judges often reject parties’ attempts to obtain and introduce at trial empirical data on an expert’s actual proficiency. We argue that any expert who purports to give falsifiable opinions can be subjected to proficiency testing, and that proficiency testing is the only objective means of assessing the accuracy and reliability of experts who rely on subjective judgments to formulate their opinions (so-called “black-box experts”).
Judges should use proficiency data to make expert qualification decisions when such data are available, should demand proof of proficiency before qualifying black-box experts, and should admit proficiency data at trial for any qualified expert. We seek to revitalize the standard for qualifying experts: expertise should equal proficiency.