Nature 441, 918-919 (22 June 2006) | doi:10.1038/441918a
Bioethicists and civil-rights activists are calling into question plans by two US companies to single out liars by sliding them into a brain scanner and searching their brains for give-away patterns of deception.
The two firms say that they will give the accused a chance to prove their innocence using a technique more accurate than the discredited polygraph. No Lie MRI will start offering services out of Philadelphia this summer. Those behind the second company, Cephos, based in Pepperell, Massachusetts, say they hope to launch their technology later this year. Likely clients include people facing criminal proceedings and US federal government agencies, some of which already use polygraphs for security screening.
Critics say that the science underlying the companies' technique is shaky and that the premature commercialization of the method raises ethical concerns about its eventual use in interrogation. This week, the American Civil Liberties Union (ACLU) entered the debate by organizing a 20 June briefing on the issues for scientists, the public, the press and policy-makers in Washington DC.
The field of lie detection is littered with dubious devices. The polygraph relies on the idea that lying is stressful, and so measures changes in heart rate, breathing and blood pressure. But because it can be duped by countermeasures and there is little hard evidence that it actually works, it is rarely admitted as evidence in court.
Rather than relying on indirect measures of anxiety, assessing brain activity using functional magnetic resonance imaging (fMRI) goes to the very source of the lie. In one of the earliest studies, Daniel Langleben of the University of Pennsylvania, Philadelphia, and his colleagues offered students sealed envelopes containing a playing card and $20. The students were told they could keep the money if they could conceal which card they held when questioned in an MRI machine (D. D. Langleben et al. NeuroImage 15, 727–732; 2002).
These and other studies revealed that particular spots in the brain's prefrontal cortex become more active when a person is lying. Some of these areas are thought to be involved in detecting errors and inhibiting responses, supporting the idea that suppressing the truth recruits brain areas beyond those needed to tell it.
The early studies showed that it was possible to make out subtle changes in brain activity caused by deception using pooled data from a group of subjects. But to make a useful lie detector, researchers must be able to tell whether an individual is lying; when only one person is assessed it is much harder to tease out a signal from background noise. Langleben, who advises No Lie MRI, says he is now able to tell with 88% certainty whether individuals are lying (see Nature 437, 457; 2005). A group working with Cephos, led by Andrew Kozel, now at the University of Texas Southwestern Medical Center in Dallas, makes a similar claim.
Kozel and his colleagues asked 30 subjects to take either a watch or a ring, hide it in a locker and then fib about what they had hidden when they were questioned inside a scanner. Using the results of this study, the team devised a computer model that focuses on three regions of the brain and calculates whether the shift in brain activity indicates lying. When the model was tested on a second batch of 31 people, the team reported that it could pick up lies in 90% of cases (F. A. Kozel et al. Biol. Psychiatry 58, 605–613; 2005).
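In outline, what Kozel's team describes is a standard supervised-classification workflow: fit a model to per-subject activation shifts in a few regions of interest, then score it on a held-out group. The Python sketch below illustrates only that train/test logic; the region names, effect sizes and simulated data are assumptions for illustration, not the published model.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
REGIONS = ["anterior_cingulate", "orbitofrontal", "inferior_frontal"]  # illustrative ROIs, not the study's

def simulate_group(n_subjects):
    """Simulate per-subject lie-minus-truth activation shifts in each ROI.

    Deceptive subjects get a slightly larger positive shift; the effect
    size here is invented purely for illustration.
    """
    lying = rng.integers(0, 2, size=n_subjects)             # 1 = deceptive subject
    shifts = rng.normal(0.0, 1.0, size=(n_subjects, len(REGIONS)))
    shifts[lying == 1] += 0.8                               # assumed deception effect
    return shifts, lying

# Mirror the study design: fit on a group of 30, test on a fresh group of 31.
X_train, y_train = simulate_group(30)
X_test, y_test = simulate_group(31)

model = LogisticRegression().fit(X_train, y_train)
print(f"Held-out accuracy: {model.score(X_test, y_test):.0%}")

The key point the critics return to is the last two lines: accuracy claims like 88% or 90% only mean anything if they hold up on people the model has never seen, under conditions very different from a laboratory game.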
But critics of the technology urge restraint. "Until we sort out the scientific, technological and ethical issues, we need to proceed with extreme caution," says Judy Illes of the Stanford Center for Biomedical Ethics, California.
One problem is that there is no standard way to define what deception is or how to test it. Scientists also say that some of the statistical analyses used in the fMRI studies are questionable or give results that are perilously close to the thresholds of significance. "On individual scans it's really very difficult to judge who's lying and who's telling the truth," says Sean Spence of the University of Sheffield, UK, who was one of the first to publish on the use of fMRI in the study of deception. "The studies might not stand up to scrutiny over the long term."
Another concern raised by scientists and bioethicists is that the contrived testing protocols used in the laboratory — in which subjects are told to lie — cannot necessarily be extrapolated to a real-life scenario in which imprisonment or even a death sentence could be at stake. They say there are no data about whether the technique could be beaten by countermeasures, and that data collected from healthy subjects reveal little about the mindset of someone who genuinely believes they are telling the truth or someone who is confused, delusional or a pathological liar.
"If I'm a jihadist who thinks that Americans are infidels I'll have a whole different state of mind," says Gregg Bloche, an expert in biomedical ethics and law at Georgetown University Law Center, Washington DC, and a member of the ACLU panel. "We don't know how those guys' brains are firing."
Because of these concerns, legal experts say that the technology is unlikely to pass the standards of scientific accuracy and acceptance required for it to be admissible in a US court. But even if it is not sufficiently accurate and reliable today, it may well be tomorrow, as more and more people are tested and techniques refined. That raises a second set of concerns that revolve around who should be allowed to use the technique and under what circumstances.
Bioethicists worry that fMRI lie detection could quickly pass from absolving the innocent to extracting information from the guilty — in police questioning, immigration control, insurance claims, employment screening and family disputes. Their concerns are fuelled by other emerging lie-detection technologies, such as those that measure the brain's electrical activity (see Nature 428, 692–694; 2004).
Particularly in the aftermath of 11 September 2001, they worry that fMRI and other devices might be misused in the hands of the military or intelligence agencies. "There's enormous pressure coming from the government for this," says bioethicist Paul Root Wolpe at the University of Pennsylvania. "There is reason to believe a lot of money and effort is going into creating these technologies."
On top of this, ethicists say there is something deeply intrusive about peering into someone's brain in search of the truth; some even liken it to mind-reading. In future, they say, a suspect might be betrayed by their prefrontal cortex before they even open their mouth — if, for example, the brain recognizes a particular photo or foreign word. "This is the first time that we have ever been able to get information directly from the brain. People find the idea extraordinarily frightening," Wolpe says...