I am certain that some will react negatively to the title of this post, feeling that it is needlessly provocative. My response is that I really have tried to moderate it. One important inspiration for this post is an article by John Ioannidis, of Stanford, which he titled “Why Most Published Research Findings Are False.” (PLoS Medicine 2(8):e124) I’m discussing three of the sources of the problem: publication bias, p-hacking, and researcher bias.
While there are some notable and important exceptions, journals tend to publish articles reporting positive results. Researchers obtaining negative results will most likely forgo attempts at publication and move on to the next research question. Consider a possible research question relating to walkability: Does the smoothness of concrete sidewalk surfaces affect how much people walk in a residential neighborhood? If a researcher studies this and finds no effect, I seriously doubt whether any journal would publish the findings. On the other hand, the finding of a statistically significant relationship might well lead to publication.
The importance of statistical significance for publication—typically a p-value less than 0.05—raises a related problem. Researchers examine alternative measures and even hypotheses until they find a statistically significant relationship, a procedure sometimes referred to as “p-hacking.” Continuing with the sidewalk smoothness example, the amount people walk in a neighborhood might be measured in a variety of ways, as could the smoothness of sidewalk surfaces. Or if none of these work, does the sidewalk surface affect accidents to walkers? Or perhaps sidewalk width or distance from the street affects amount of use or accidents. Examine enough combinations and the likelihood of finding a statistically significant relationship that occurred purely by chance becomes very high.
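To see how quickly those odds grow, here is a back-of-the-envelope calculation (my illustration, not from Ioannidis's article): if each test has a 5 percent chance of producing a spuriously significant result, and the tests are assumed independent, the chance that at least one comes up "significant" is 1 − 0.95 raised to the number of tests.

```python
def chance_of_false_positive(k, alpha=0.05):
    """Probability of at least one spurious 'significant' result
    across k independent tests, each run at threshold alpha."""
    return 1 - (1 - alpha) ** k

for k in (1, 5, 20, 100):
    print(f"{k:3d} tests: {chance_of_false_positive(k):.0%}")
# Prints roughly: 5%, 23%, 64%, 99%
```

With twenty combinations of measures and hypotheses, the sidewalk researcher has better-than-even odds of a purely chance finding; real tests on the same data are correlated rather than independent, but the qualitative point stands.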
The last point I am making is the effect of researcher bias. I’m not just referring to cases where a researcher deliberately makes choices in the research to prove a hypothesis consistent with his or her beliefs (though I am certainly aware of instances where exactly that has happened). Rather, bias can and does affect researcher behavior even when the researcher is striving to be fair and impartial. This has been shown repeatedly in many fields, even with activities as simple as reading and recording quantitative values from a measuring device. That is why the best design for clinical trials is a double-blind approach, where neither the patients nor the physicians evaluating them know who is getting the treatment and who is getting the placebo. With research on the effects of sprawl, many of the researchers have strong, publicly declared beliefs in the undesirability of urban sprawl. So bias is a part of the research enterprise.