How to Make Science Trustworthy Again
With so many ways to get scientific studies wrong, and so much motivation to do so, it’s amazing that we still trust science as much as we do. The scientific method is indispensable, but the uncomfortable fact is that too many studies have been found to be biased or otherwise wrong in recent years.
What exactly is the problem, and what can we do about it?
Bias can be driven either by personal goals, such as confirming a firmly held belief, or by the pressure to publish studies with positive associations in journals. One important question is whether a study funded by industry is more biased than one funded by government. Of the $86 billion spent on basic research in the U.S. in 2015, industry footed the bill for about 25 percent and government for about 44 percent. At least one study indicates that the source of the funding is not actually the issue.
Political convictions are good predictors of how people feel about one funding source or another. A 2017 Pew Research Center survey found that Democrats are more likely to favor government-funded research (78 percent), while Republicans are more likely to assume that private funding is sufficient (67 percent).
One thing that doesn’t appear to divide us is how serious the problem is.
Stanford University professor John Ioannidis wrote in 2005 that “it can be proven that most claimed research findings are false.” Four years later, a longtime editor of the New England Journal of Medicine — one of the highest-impact journals in medicine — wrote: “It is simply no longer possible to believe much of the clinical research that is published, or to rely on the judgment of trusted physicians or authoritative medical guidelines.” Similarly, in 2015, the editor-in-chief of The Lancet wrote, “Much of the scientific literature, perhaps half, may simply be untrue. Afflicted by studies with … an obsession for pursuing fashionable trends of dubious importance, science has taken a turn towards darkness.”
What is going wrong with all these flawed studies?
Most scientific research starts with models, which are simplified ways of looking at the world that allow us to make predictions. But, as British statistician George Box once put it, “all models are wrong, but some are useful.” If your model, and the assumptions you make to fill in missing data, are wrong, you will generate wrong predictions. Nobel laureate Hermann J. Muller promoted such a model in his 1946 Nobel Prize lecture, asserting that radiation was dangerous at even very low levels, even though he had convincing evidence that the model was wrong.
With a model in hand, the researcher uses data to test it. But as the old saying goes, “garbage in, garbage out.” If your data are wrong, your results will be wrong. Professor Ed Archer has shown that about four out of every five papers on nutrition rely on data that are simply wrong: most people, when asked what they have eaten in the last 24 hours, don’t report eating enough to stay alive.
Even if you get the model right and your data are good, you still need sound statistical analysis to interpret the results. The trouble is that some scientists (and their graduate students) are just not very good at it. One scientist who scours papers for poor statistical practices tells of a neurosurgeon who asked a statistician to recommend a good instructional book on statistics, because he preferred to do that part himself. The statistician replied, “I’m so glad you called! I’ve always wanted to do brain surgery; can you suggest a good text on that?”
Ignorance of good methods is one thing — and something that can be fixed relatively easily — but the pesky problem of bias is harder to root out.
A recent study examined 5,675 clinical nutrition, food safety, dietary patterns, and dietary supplement scientific papers for “risk of bias.” It came to a surprising conclusion: Industry funding “is not consistently associated with producing research results that are considered ‘biased’ using the standard ROB (risk of bias) criteria” as compared to government-funded research.
But if funding is not the root problem, how do we address whatever scholarly biases do exist, whether driven by dogma or pragmatism? First, as the aforementioned study notes, bias is reduced when funding comes from multiple sources (e.g., government and industry). And generally speaking, the more transparent the work, the better (or at least the easier to refute). A good next step would be to follow the lead of the 5,000 journals that have now agreed to standards of openness, transparency, and reproducibility for the work they publish.
Given that science plays such a large role in government decisions, and that only one-third of Americans trust our government to “do what is right,” more of these kinds of measures are urgently needed.
Richard Williams is a former director for social sciences at the Center for Food Safety and Applied Nutrition in the Food and Drug Administration.