Impartial experts not so impartial
U. VIRGINIA (US) — Researchers find that many “impartial” expert witnesses lose sight of objectivity and tend to come to conclusions that align with those who pay for their services.
Forensic psychologists and psychiatrists are ethically bound to be impartial, to look only at the evidence before them, when performing evaluations or providing expert opinions in court. But new research suggests that the paycheck some courtroom experts receive influences their evaluations.
(Credit: mattappleby/Flickr)
In a real-world experiment, experts who believed they were working for prosecutors tended to conclude that sexually violent offenders were at greater risk of re-offending than did experts who thought they were working for the defense, the researchers found.
“The findings were fairly alarming,” says researcher Daniel Murrie, whose study will be published in the journal Psychological Science. “We expected to find some of what we call the ‘allegiance effect’—some difference between the sides—but the difference was more than we were expecting.”
Murrie, director of psychology at the University of Virginia’s Institute of Law, Psychiatry and Public Policy and an associate professor of psychiatry and neurobehavioral sciences at the School of Medicine, has long been interested in the impartiality of forensic psychiatrists and psychologists. He both conducts forensic psychological evaluations and trains others to do so.
“For years I would have said, ‘We really are unbiased, because we’re required to be,’” he says. “But over the years I realized we really didn’t have any genuine data on this important question, and that the field just took it on faith that evaluators could do their work objectively.”
So he and his colleagues set out to find some hard numbers. After several observational studies in the field—where it is impossible to rule out other explanations for “adversarial allegiance”—they designed the most rigorous experiment they could.
The researchers recruited experienced forensic psychiatrists and psychologists from several states by offering a free continuing education workshop on the psychological tests used to evaluate sexually violent predators. A total of 118 attended, receiving real training over two days. In exchange, they agreed to provide paid consultation to a state agency that was reviewing a large number of sexually violent offender files. Or so they were told.
When the participants returned weeks later to provide paid consultation on what they thought was a large cohort of offender files, researchers actually gave all the participants the same four files to review. When participants met with a real lawyer, half were led to believe they were working with the defense, while the other half thought they were working with the prosecution.
The results were significantly different—and broke down along employment lines. The experts scored the offenders on two scales widely used in court proceedings. The experts who thought they were working for the prosecution tended to assign higher risk scores; the defense experts tended to assign lower risk scores.
The differences were striking because in other contexts, different experts usually provide very similar scores on these tests, and scores rarely differ by more than a few points. But in this study, the scores often differed by more than the typical error rate, and they differed systematically depending on the side.
Not every expert demonstrated biased scoring, of course. But analyses suggested that most opposing pairs of evaluators had score differences greater than could be attributable to chance alone.
Is objectivity obtainable?
“It’s disappointing, because we’re not seeing clinicians demonstrate the objectivity we’re all aspiring to,” Murrie says of the study findings. “But it’s also alarming because bias has such implications for justice. We really do want our justice system to have objective, reliable data, but at times the adversarial process that’s supposed to bring us closer to the truth is actually distorting the truth.”
The findings are similar to recent evidence of bias in other forensic sciences, such as DNA and fingerprint analysis. Murrie hopes that the study will prompt experts in his field to take a hard look at how evaluators are trained and how they practice.
“Most people in this line of work really do try to be objective and pride themselves on being objective,” Murrie says. “But our hope is that these results prompt us to better consider how these adversarial arrangements can affect our work, and to build some better safeguards.”
Source: University of Virginia