In recent years neuroscience-based arguments have been cropping up in courtrooms with increasing frequency. Litigators have attempted to use brain scans to demonstrate reduced capacity, and more recently to prove the veracity of a client’s statements. Brain-based lie detection has been studied scientifically for about 10 years, and for nearly as long both neuroscientists and legal scholars have warned about the dangers of misusing this technology.
Brain-based lie detection uses functional MRI (fMRI) to measure blood-oxygenation signals that serve as indirect correlates of brain activity while an individual responds to questions. Studies indicate that, for most individuals, distinct and reliably different constellations of brain regions are active during truth telling than during deception. Reported accuracy levels for brain-based lie detection currently fall between roughly 70% and 90%.
While 70-90% accuracy is well above chance, it remains shy of an acceptable evidentiary standard. For this reason, all attempts to date to admit brain-based lie detection in United States courtrooms have been denied. In the August 2012 case of State of Maryland v. Gary Smith, Montgomery County Circuit Court Judge Eric M. Johnson stated the following:
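To see why accuracy in this range is problematic as evidence, consider a back-of-the-envelope Bayesian sketch. The numbers below are illustrative assumptions, not values from any study or case: a detector with 90% sensitivity and specificity (the upper end of reported accuracy) is applied under two different prior probabilities that the examinee is lying.

```python
def posterior_lie(prior_lie, sensitivity, specificity):
    """P(lie | 'deception detected') via Bayes' rule."""
    p_pos_given_lie = sensitivity
    p_pos_given_truth = 1 - specificity
    p_pos = prior_lie * p_pos_given_lie + (1 - prior_lie) * p_pos_given_truth
    return prior_lie * p_pos_given_lie / p_pos

# With an even (50%) prior, a positive result raises P(lie) to 0.90.
print(round(posterior_lie(0.50, 0.90, 0.90), 2))  # 0.9
# But if lies are rare in the tested population (say 10%), the same
# positive result yields only a 50% posterior -- no better than a coin flip.
print(round(posterior_lie(0.10, 0.90, 0.90), 2))  # 0.5
```

The point of the sketch is that a headline accuracy figure does not by itself tell a factfinder how much weight a "deception detected" result deserves; that depends on assumptions a courtroom can rarely pin down.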
Upon examination of the … available legal and scientific commentaries … it is clear to the Court that the use of fMRI to detect deception and verify truth in an individual’s brain has not achieved general acceptance in the scientific community. Therefore, it does not pass the requisite standard for evidence … and must necessarily be denied admittance in this Court.
Just one month later, in September of 2012, the U.S. Court of Appeals for the Sixth Circuit affirmed the inadmissibility of fMRI-based lie detection in U.S. v. Semrau. The unanimous opinion cited both inadequate reliability and potential for bias as reasons to exclude brain-based lie detection from evidence. The developing standard seems to indicate that if the accuracy of fMRI lie detection techniques were to increase sufficiently, admissibility could then be reconsidered.
Obstacles to improving brain-based lie detection are numerous: differences between individuals and within individuals over time, the indirect nature of the fMRI signal, and the general finickiness of MRI machines all make reliability elusive. Nonetheless, improved accuracy is very likely achievable. Higher-resolution brain imaging, larger databases from which to draw inferences, and new machine learning techniques could all contribute to better reliability. Still, scholars have questioned whether reliability could ever be elevated to an acceptably high level.
One concern surrounds the brains of psychopaths. Psychopaths are statistically more likely to engage in criminal behavior and more likely than non-psychopaths to lie. Studies have shown that psychopathic traits are associated with differential patterns of brain activity during acts of deception. Therefore, even if fMRI-based lie detection achieved 99.9% accuracy for normal adults, the criminals most likely to lie may be the least likely to appear to be lying when assessed with these brain-based techniques.
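The subgroup worry sketched above can be made concrete with some arithmetic. Every number here is invented for illustration: suppose the test is 99.9% accurate for non-psychopathic examinees but performs near chance on a psychopathic subgroup.

```python
def aggregate_accuracy(group_fractions, group_accuracies):
    """Overall accuracy as the fraction-weighted average across subgroups."""
    return sum(f * a for f, a in zip(group_fractions, group_accuracies))

# Hypothetical: 80% of examinees at 99.9% accuracy, 20% (the subgroup
# most likely to lie) at chance-level 50% accuracy.
overall = aggregate_accuracy([0.80, 0.20], [0.999, 0.50])
print(round(overall, 4))  # 0.8992
```

A headline accuracy near 90% could thus conceal near-total failure on exactly the population for which the test matters most.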
Outside of the issue of reliability is another dimension of concern: mental privacy. The idea of asking a person a question, listening to his response, and then peering into his brain to determine if he is telling the truth may implicate fundamental concerns about privacy. To date, all attempts to use brain-based lie detection in the courtroom have been put forth by a defense attorney attempting to present exonerating evidence. However, if the defense were able to use such evidence, should not the prosecution be similarly entitled? Could brain scans attempting to weed out deceptive responses be ordered like a DNA test?
The Fourth Amendment guarantees security of one’s person and property against unreasonable search and seizure. On the surface it may seem that “mind reading” constitutes an unreasonable bodily search. However, the fact that DNA testing can be ordered by a court even in non-criminal proceedings (such as mandated paternity testing) indicates that precedent may not protect against invasive use of neuroscience technology. A more tenable constitutional defense may relate to the Fifth Amendment, which, at least in the criminal arena, prevents a defendant from being compelled to testify against himself. Brain-based lie detection and other methods of mental inspection yet to be developed could certainly be framed as unacceptable self-incriminating testimony. However, existing accuracy-focused exclusions of fMRI-based lie detection would not prevent admission of more accurate (and potentially more invasive) brain-based evidence. Due consideration should be granted not just to the ability of fMRI technology to assess brain states, but also to the legal and moral implications of doing so.