Column
Law & Psychiatry: The New Lie Detectors: Neuroscience, Deception, and the Courts
Paul S. Appelbaum, M.D.
Psychiatric Services 2007; doi: 10.1176/appi.ps.58.4.460

This column examines the use of two technologies in lie detection. "Brain fingerprinting" is based on the finding that the brain generates a unique brain-wave pattern when a person encounters a familiar stimulus. Use of functional magnetic resonance imaging in lie detection derives from studies suggesting that persons asked to lie show different patterns of brain activity than they do when being truthful. Issues related to the use of such evidence in courts are discussed. The author concludes that neither approach is currently supported by enough data regarding its accuracy in detecting deception to warrant use in court. Psychiatric Services 58:460–462, 2007


Dr. Appelbaum, who is editor of this column, is Dollard Professor of Psychiatry, Medicine, and Law and director of the Division of Psychiatry, Law, and Ethics, Department of Psychiatry, Columbia University College of Physicians and Surgeons. Address correspondence to him at the Department of Psychiatry, Columbia University Medical Center, 1051 Riverside Dr., Unit 122, New York, NY 10032 (e-mail: psa21@columbia.edu).

Given that one of the primary functions of the judicial process is the ascertainment of truth, a valid approach to identification of falsehoods would seem to have great utility for the courts. Thus newly developed approaches to lie detection that are based on recent advances in neuroscience are causing a stir in some legal circles (1). However, even if these techniques prove to be accurate in detecting falsehoods, the prospect of their application raises significant questions about the appropriate limits of intrusion on the privacy of thought.

Detecting whether witnesses are lying is, in some sense, what the adversarial process of trial is all about. Cross-examination probes the weaknesses and inconsistencies in witnesses' testimony, with the finder of fact—usually the jury, but sometimes a judge—serving as the ultimate arbiter of whose account is to be believed. The obvious inaccuracies in that procedure, despite its privileged position in Anglo-American law, have motivated a search for technological approaches to distinguishing truth from lies.

The best known of these instruments is the polygraph, which was developed in the early 20th century after it was observed that deception is frequently accompanied by physiological changes secondary to anxiety. Thus standard polygraphs simultaneously measure pulse, blood pressure, respiration, and galvanic skin response, all indicators of anxiety-induced autonomic arousal. Despite the promise of the polygraph, courts have been reluctant to admit such evidence. Indeed, the case that defined the federal rule for the admissibility of scientific evidence for most of the 20th century, Frye v. United States, upheld a challenge to the introduction of polygraph data in courts (2). Concerns about the accuracy of the technique, especially the ability of evaluees to "beat the test" by altering their autonomic reactivity, have resulted in the exclusion of polygraph results in all but the rarest of cases (1).

Skepticism about the polygraph extends beyond the courtroom. The Federal Employee Polygraph Protection Act of 1988 generally prohibits employers from compelling workers to undergo lie-detector examinations, although there are a number of exceptions. Some states also restrict employers' polygraph use. Ironically, the federal government, which is exempt from the provisions of the federal statute, is probably the most frequent user of polygraph examinations in the country, primarily in assessments of persons seeking jobs or security clearances. A report on polygraph use by the National Research Council concluded that the device yields results that are better than chance, but its modest accuracy and susceptibility to countermeasures mean that reliance on it should be limited, especially for the kinds of screening purposes favored by the federal government (3).

Newer efforts to find a technological means of identifying lies have capitalized on developments in brain electrophysiology and neuroimaging. In place of the polygraph's focus on peripheral autonomic activity, these newer technologies monitor changes in the pattern of activity in the brain itself. This column examines issues raised by two technologies that have been proposed as methods of lie detection.

The first of these approaches, given the perhaps fanciful moniker "brain fingerprinting," is based on the well-validated finding that the brain generates a unique brain-wave pattern when a person encounters a familiar stimulus. A so-called P300 wave is produced about 300 milliseconds after exposure to the stimulus—for example, a picture of a familiar place. The leading exponent of brain fingerprinting, Lawrence Farwell (4), has patented an algorithm for assessing brain-wave responses for this purpose and claims a high degree of accuracy. But his proprietary technique has never been subject to independent review, and the only published data are based on just a handful of individuals (4).

There are reasons to question both the utility of brain fingerprinting and related approaches and the problems that are likely to arise in their application. The model is based on the detection of a discrepancy between an evaluee's account and his or her unconscious response to the relevant information. For example, a defendant who denies that he was at the scene of the crime might be confronted with pictures or verbal descriptions of the crime location to see whether his brain-wave response displays indications of familiarity. In many, perhaps most, criminal cases, however, the verdict will turn on issues other than the defendant's familiarity with information about the crime, and in those instances brain-wave analysis—even if ultimately validated—will be of little use.

Moreover, unless performed skillfully, the assessment of what has been called the evaluee's "guilty knowledge" may lead to uninterpretable or misleading results. In what appears to be the only case in which brain-fingerprinting evidence was admitted at a judicial hearing, the informational probes were so general that the value of the entire evaluation, which was aimed at showing a convicted murderer's lack of familiarity with a crime scene, has been called into question (5,6). Indeed, even with very accurate models, there may be substantial limits on the accuracy of brain-wave techniques. As Wolpe and colleagues (5) have demonstrated, the probability of false-positive findings—that is, incorrect identification of truthful persons as liars—skyrockets as the base rate of prevarication in a population falls. Applied indiscriminately, even a relatively good test will yield more harm than benefit, because a majority of persons identified as deceptive will actually be telling the truth.
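The base-rate effect that Wolpe and colleagues describe is simple Bayesian arithmetic. A brief sketch (using illustrative accuracy figures assumed for the example, not values from the cited study) shows how the positive predictive value of even a test with 90% sensitivity and 90% specificity collapses as the proportion of liars in the tested population falls:

```python
def positive_predictive_value(sensitivity, specificity, base_rate):
    """Fraction of 'deceptive' test results that are true positives."""
    true_pos = sensitivity * base_rate                  # liars correctly flagged
    false_pos = (1 - specificity) * (1 - base_rate)     # truth-tellers wrongly flagged
    return true_pos / (true_pos + false_pos)

# Illustrative test: 90% sensitive, 90% specific
for base_rate in (0.5, 0.1, 0.01):
    ppv = positive_predictive_value(0.90, 0.90, base_rate)
    print(f"base rate of lying {base_rate:.0%}: PPV = {ppv:.1%}")
```

When half of evaluees are lying, 90% of "deceptive" findings are correct; when only 1% are lying, the great majority of persons flagged as deceptive are in fact telling the truth, which is precisely the hazard of indiscriminate screening.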

Functional magnetic resonance imaging (fMRI) is another technology that has been applied to the task of sifting truthful from untruthful responses. As the evaluee performs specific tasks, fMRI scanners measure localized brain activity by determining blood flow and oxygen utilization in portions of the brain (7). Changes in activity associated with a given task presumably indicate involvement of that brain region in the effort. Attempts to apply fMRI to uncover deception have derived from studies suggesting that persons asked to lie while in the scanner show different patterns of brain activity than they do when being truthful (8). In general, lying appears to activate brain areas associated with high-level executive functions, which may be required for suppression of what would otherwise be a truthful response.

Almost all fMRI studies have looked only at differences in responses of groups of individuals, and data have been inconsistent with regard to which brain areas are activated by prevarication (1). Even if group norms of liars and truth-tellers differ in a particular study, unless specific brain regions are consistently associated with deception it will be difficult to apply these findings to the assessment of truthfulness among particular individuals. The first data from studies attempting to detect deception by individuals have only recently been published, and although the technique is promising, it remains to be seen how accurate it really is (9,10). Key questions that still must be answered, as with brain-wave analysis, involve the test's false-positive and false-negative rates, whether the findings generalize from the laboratory to real-world situations, and whether countermeasures taken by evaluees (such as manipulating attention or emotion) impair the accuracy of the findings. It also remains unclear how the tests perform in situations in which the status of the claim by the person being evaluated cannot be neatly categorized as true or false—as, for example, when a statement is partially true, when the evaluee is uncertain of the right answer, or when he or she considers the option of lying but ultimately decides to tell the truth.

How are the courts likely to respond to attempts to introduce brain-wave or neuroimaging data to assess witnesses' veracity? Given the courts' disappointing experiences with polygraph data, there is likely to be considerable judicial resistance. In addition, the law's traditional reliance on the jury as the ultimate arbiter of truth or falsehood will not be easy to surmount. As noted, brain fingerprinting has been introduced in only one case (in which it did not affect the outcome), and fMRI data have yet to be admitted into evidence for the assessment of truthfulness. The outcome of any particular effort to base testimony on these techniques will depend in part on whether judges can be persuaded that they meet the usual standard of admissibility of scientific evidence. For federal courts and many state courts that standard was laid out in the U.S. Supreme Court's opinion in Daubert v. Merrell Dow Pharmaceuticals (11). Daubert requires that scientific evidence meet standards of reliability (by which term the court includes validity) as indicated by such measures as proof of efficacy based on testable hypotheses, defined error rates, publication in peer-reviewed journals, and the endorsement of experts in the field. Although approaches based on brain waves and those based on fMRI data do not meet such standards at this time, it is conceivable that sufficient data will become available in the future to justify their admission into evidence under Daubert.

When that happens, two uses for such tests can be anticipated. Witnesses seeking to bolster their own credibility may try to introduce brain-wave or fMRI data in support of the validity of their testimony. That was how Terry Harrington used brain-fingerprinting testimony—in the only case in which such evidence was admitted—as he attempted to overturn his conviction for murder (6). The major issues for the court to decide in such cases, in addition to whether the testimony meets Daubert or similar standards, include the probative value of the evidence (that is, whether it is likely to help the judge or jury resolve the legal issue in question) and the extent to which introduction of the testimony will tend to confuse or bias the fact finder.

The technology will have other uses in the hands of an adverse party, such as the state in criminal cases. Prosecutors may demand that witnesses, including defendants, be screened for deception, or they may use the technique as part of the interrogation or discovery process. In response to this possibility, it has been argued that such a "search" of the brain should be governed by the Fourth Amendment's prohibition against unreasonable search and seizure, a traditional bulwark of individual privacy (12). But the law is ambiguous on the extent of personal privacy when it comes into conflict with governmental interests in public protection. Although probing the brain to catch a defendant in a lie would seem highly intrusive, it is not clear at this time whether legal rules would preclude it.

The normative issues at stake are not simple to reconcile. Especially in criminal prosecutions, the public at large has a strong interest in ascertaining the truth. To protect the integrity of the legal process from the consequences of false statements, testimony is given under solemn oath, there are elaborate evidentiary rules on "impeachment" of witnesses, and deliberate falsehoods may be prosecuted as perjury. But clearly the law accommodates conflicting interests, even when they frustrate the search for truth, as illustrated by such rules as the famous Miranda decision that exclude unfairly obtained evidence. Analogous considerations are at play here. If privacy is to have any meaning, it should protect individuals from having the very workings of their minds probed by the government without their consent. Less intrusive means of approximating the truth exist than invading mental privacy, even that of a criminal defendant. That said, once a technology has been shown to identify with a high degree of accuracy those who are lying, it will not be easy to resist calls for the admission of such testimony.

At this point, neither brain-wave analysis nor fMRI is supported by sufficient data regarding its accuracy in detecting deception to warrant use in court. Hence, the judiciary would be wise to resist the temptation to admit such data into evidence until more conclusive information about reliability and validity is available. Henry Greely (13), a legal scholar at Stanford Law School who has specialized in issues related to neuroscience and law, has suggested federal legislation that would prohibit the use of lie-detection devices until their safety and efficacy have been established in rigorous trials—much as the Food and Drug Administration now regulates pharmaceuticals. Ultimately, though, we will have to answer a different question as well: how fair is it to use techniques that explore the workings of the brain in intrusive and unprecedented ways?


References

1. Keckler CNW: Cross-Examining the Brain: A Legal Analysis of Neural Imaging for Credibility Impeachment. Paper 16, George Mason University School of Law Working Paper Series. Fairfax, Va, George Mason University School of Law, 2005. Available at http://law.bepress.com/gmulwps/gmule/art16
2. Frye v United States, 293 F 1013 (DC Cir 1923)
3. National Research Council: The Polygraph and Lie Detection. Washington, DC, National Academies Press, 2003
4. Farwell LA, Smith SS: Using brain MERMER testing to detect knowledge despite efforts to conceal. Journal of Forensic Sciences 46:135–143, 2001
5. Wolpe PR, Foster KR, Langleben DD: Emerging neurotechnologies for lie-detection: promises and perils. American Journal of Bioethics 5:39–49, 2005
6. Harrington v Iowa, 659 NW 2d 509 (Iowa 2003)
7. Illes J, Racine E: Imaging or imagining? A neuroethics challenge informed by genetics. American Journal of Bioethics 5:5–18, 2005
8. Spence SA, Hunter MD, Farrow TFD, et al: A cognitive neurobiological account of deception: evidence from functional neuroimaging. Philosophical Transactions of the Royal Society, Series B 359:1755–1762, 2004
9. Langleben DD, Loughead JW, Bilker WB, et al: Telling truth from lie in individual subjects with fast event-related fMRI. Human Brain Mapping 26:262–272, 2005
10. Kozel FA, Johnson KA, Mu Q, et al: Detecting deception using functional magnetic resonance imaging. Biological Psychiatry 58:605–613, 2005
11. Daubert v Merrell Dow Pharmaceuticals, 509 US 579 (1993)
12. Boire RG: Searching the brain: the Fourth Amendment implications of brain-based deception detection devices. American Journal of Bioethics 5:62–63, 2005
13. Greely HT: Premarket approval for lie detections: an idea whose time may be coming. American Journal of Bioethics 5:50–52, 2005
 
