"We urge funding agencies to offer supplemental grants to encourage researchers to incorporate and test blinding methods in their funded research projects," says Robert MacCoun. (Credit: Barn Images/Flickr)

How going ‘blind’ could cut bias in research

One strategy for fighting bias in empirical research is to keep researchers from seeing their true results until the analysis is nearly complete.

In blind analysis, data are modified so that the scientists don’t actually know what their results are until they’ve mostly completed their data analysis. The computer “knows” the actual results, but displays them with random noise, systematic bias, or scrambled labels in a way that enables the investigator to make analytic decisions without knowing whether they will help or thwart a particular hypothesis.
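To make that mechanism concrete, here is a minimal Python sketch, not taken from MacCoun and Perlmutter's essay, of one way data could be blinded: the group labels are secretly scrambled and a hidden offset is added to the outcomes, so analytic decisions can't be steered by the apparent result. All function names, parameters, and the seed are illustrative assumptions.

```python
# Minimal blinding sketch (illustrative only): scramble labels and add a hidden
# offset so the analyst cannot tell how their choices affect the estimated effect.
import numpy as np

rng = np.random.default_rng(seed=12345)  # hypothetical seed held by the "blinder"

def blind(labels, outcomes):
    """Return blinded copies of the data plus the secrets needed to unblind later."""
    perm = rng.permutation(len(labels))      # secretly scrambled label assignment
    offset = rng.normal(loc=0.0, scale=5.0)  # hidden systematic shift in the outcome
    blinded_labels = np.asarray(labels)[perm]
    blinded_outcomes = np.asarray(outcomes, dtype=float) + offset
    return blinded_labels, blinded_outcomes, {"perm": perm, "offset": offset}

def unblind(blinded_labels, blinded_outcomes, secrets):
    """Lift the blind once the analysis pipeline is frozen."""
    inverse = np.argsort(secrets["perm"])
    return blinded_labels[inverse], blinded_outcomes - secrets["offset"]

# The analyst tunes their pipeline on the blinded data,
# then unblinds only at the end to report the true result.
labels = np.array(["treatment"] * 50 + ["control"] * 50)
outcomes = rng.normal(loc=10.0, scale=2.0, size=100)
b_labels, b_outcomes, secrets = blind(labels, outcomes)
# ... all analysis decisions are made here, using only b_labels and b_outcomes ...
true_labels, true_outcomes = unblind(b_labels, b_outcomes, secrets)
```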

Robert MacCoun, professor of law at Stanford University, writes in an essay in Nature that more widespread use of this “blind analysis” could decrease bias.

University writer Terry Nagel recently interviewed MacCoun about this issue:

Why is blind analysis considered essential in some scientific fields?

It is currently considered essential in some areas of particle physics and in cosmology research, not because those fields are particularly susceptible or suitable, but because scientists in those fields happened to notice the generic problem of confirmation bias and decided to do something about it.

When I mention blind analysis to other researchers, many nod as if they are familiar with it, but they usually think I’m talking about double-blind methods that keep the participants and experimenters in the dark about which participants are getting a placebo vs. the actual treatment.

Blind analysis is similar in spirit, but quite different in practice. In blind analysis, researchers analyzing the data can't see the true results until they have completed the analysis. After lifting the blind, they can do more analysis, but they have to report those as "post-blind" (that is, less credible) analyses.

Why does it make sense to apply blind analysis to the biological, psychological, and social sciences?

There is increasing evidence that a large fraction of the published results in many fields, including medicine, don’t hold up under attempts at replication, and that the proportion of “statistically significant” published results is “too good to be true,” given existing sample sizes. What’s causing this? Many factors, but much of it has to do with confirmation biases that stack the deck in favor of a preferred hypothesis.

Blind analysis is particularly valuable for highly politicized research topics and for empirical questions that emerge in litigation. For example, forensic laboratories are beginning to adopt simple blind analysis methods. And in some cases, expert witnesses could apply their preferred analytic methods to blinded data, which would greatly increase the credibility of their conclusions.

What are some common objections to using blind analysis?

Some years ago, medical researchers argued that the problem of bias wasn’t that serious. We now know better. They also claimed that in clinical trials, blind analysis might endanger patients because the research team might fail to intervene when there were problems. But this can be handled by having a monitoring team that isn’t blinded, while the team doing the scientific data analysis remains in the dark. Or the researchers can simply decide not to use blind analysis.

In our essay, we also address two other possible objections: that it is too hard or that it will hinder discovery.

How can these objections be overcome?

Blind analysis should be subjected to scientific testing like any other “treatment,” so that costs and benefits of new approaches are measured. But we aren’t calling for mandated blind analysis, by any means. We predict that researchers who choose blind analysis will find that their work is seen as more credible. Researchers who don’t want to use blind analysis don’t have to do so, but ideally they’ll explicitly state that decision in their reports. [Coauthor and UC Berkeley physics professor] Saul Perlmutter’s researchers have found that blind analysis actually increases their creativity, as well as the fun and drama of doing research.

What do you propose to encourage wider use of blind analysis in the scientific community?

We urge funding agencies to offer supplemental grants to encourage researchers to incorporate and test blinding methods in their funded research projects. And we hope statistical software vendors will consider incorporating blinding algorithms in their data analysis packages.

Source: Stanford University
