There’s a fascinating article in the Nov. 2010 issue of The Atlantic by David H. Freedman that examines the world of medical research and that suggests much of our empirical, research-based knowledge may be flawed.
Anyone who reads World of Psychology regularly already knows about the problems in a lot of industry-funded studies. But this article suggests that the problems with peer-reviewed research go far deeper than simple for-profit bias. Scientists are biased in many, many ways (not just for monetary gain). And this bias inevitably shows up in the work they perform — scientific research.
This is not a new drum to beat for me. I’ve talked about researcher bias in 2007 and how researchers design studies to find specific results (that example involved researchers who found suicide-method websites when searching Google for, wait for it, “suicide methods”). We’ve noted how virtually every study in journals such as Psychological Science relies almost exclusively on college students recruited from a single campus as subjects, a significant limitation rarely mentioned in the studies themselves.
However, here’s the really troubling part: these kinds of biased studies appear in all sorts of journals. JAMA, NEJM and the BMJ are not immune to publishing crappy, flawed studies in medicine and psychology. We think of a journal’s “respectability” as a sign of its gatekeeping role, a guarantee that studies appearing in the most prestigious journals must be fundamentally sound.
But that’s simply not true. The emperor is not only naked — his subjects have hidden his clothes in order to further their own careers.
The issue of biased studies being published first hit the spotlight back in 2004, when GlaxoSmithKline was sued by New York’s state attorney general for hiding research data on Paxil. Since that time, dozens of similar cases have come to light, and other studies have since been published showing how pharmaceutical companies appear to have regularly hidden relevant research data. That data usually shows that the drug being studied was no more effective than a sugar pill at treating whatever disorder it was intended for. (Blogs like Clinical Psychology and Psychiatry: A Closer Look and the Carlat Psychiatry Blog have more details about these studies.)
But what about other kinds of bias? Are we only interested in studies where the bias is so overt, or shouldn’t we be concerned about any kind of bias that may impact the reliability of the results?
The answer is, of course, that we should be interested in all forms of bias. Anything that can influence the end results of a study means that the study’s conclusions may be in question.
John Ioannidis, a professor at the University of Ioannina, became interested in this question as it applies to medical research. So he put together an expert team of researchers and statisticians to dig deeper and see how bad the problem was. What he found didn’t surprise researchers, but it will come as a surprise to most laypeople:
Baffled, he started looking for the specific ways in which studies were going wrong. And before long he discovered that the range of errors being committed was astonishing: from what questions researchers posed, to how they set up the studies, to which patients they recruited for the studies, to which measurements they took, to how they analyzed the data, to how they presented their results, to how particular studies came to be published in medical journals. […]
“The studies were biased,” he says. “Sometimes they were overtly biased. Sometimes it was difficult to see the bias, but it was there.” Researchers headed into their studies wanting certain results — and, lo and behold, they were getting them. We think of the scientific process as being objective, rigorous, and even ruthless in separating out what is true from what we merely wish to be true, but in fact it’s easy to manipulate results, even unintentionally or unconsciously.
“At every step in the process, there is room to distort results, a way to make a stronger claim or to select what is going to be concluded,” says Ioannidis. “There is an intellectual conflict of interest that pressures researchers to find whatever it is that is most likely to get them funded.”
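To get a feel for how much room there is to distort results, here’s a toy simulation (ours, not Freedman’s or Ioannidis’). The setup is an illustrative assumption: a drug with no real effect, 30 patients per arm, and a research team that quietly gets ten tries (different outcomes, subgroups, covariates) at a “significant” p < 0.05 result:

```python
# Illustrative sketch (not from the Atlantic article): how analytic
# flexibility inflates the nominal 5% false-positive rate.
import math
import random
import statistics

def one_study(n=30, analyses=10):
    """Simulate a study of a treatment with NO real effect, where the
    team gets `analyses` independent shots at p < 0.05 (e.g., by
    testing many different outcome measures)."""
    for _ in range(analyses):
        treat = [random.gauss(0, 1) for _ in range(n)]
        control = [random.gauss(0, 1) for _ in range(n)]
        # Two-sample z-test approximation for the difference in means.
        se = math.sqrt(statistics.variance(treat) / n +
                       statistics.variance(control) / n)
        z = (statistics.mean(treat) - statistics.mean(control)) / se
        if abs(z) > 1.96:   # "significant" at the nominal 5% level
            return True     # the false positive gets written up
    return False

random.seed(1)
hits = sum(one_study() for _ in range(2000))
print(hits / 2000)  # roughly 0.4: ten tries at a 5% test yield a
                    # ~40% chance of at least one false positive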
Ioannidis put together a complex mathematical model to predict how much research may be flawed, based upon all of these variables. His model predicted that “80 percent of non-randomized studies (by far the most common type [especially in psychological research]) [will] turn out to be wrong, as do 25 percent of supposedly gold-standard randomized trials, and as much as 10 percent of the platinum-standard large randomized trials.”
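The Atlantic piece doesn’t reproduce the model, but its core appears in Ioannidis’ 2005 PLoS Medicine paper, “Why Most Published Research Findings Are False.” Here’s a minimal sketch of that framework; the parameter values below are illustrative assumptions of ours, not numbers from either article:

```python
def ppv(R, alpha=0.05, beta=0.20, u=0.0):
    """Post-study probability that a claimed finding is true, per the
    framework in Ioannidis (2005).

    R     -- pre-study odds that a probed relationship is real
    alpha -- Type I error rate (the significance threshold)
    beta  -- Type II error rate (1 - statistical power)
    u     -- bias: share of would-be null results reported as findings
    """
    true_pos = (1 - beta) * R + u * beta * R
    false_pos = alpha + u * (1 - alpha)
    return true_pos / (true_pos + false_pos)

# An exploratory field probing many hypotheses (low pre-study odds):
print(ppv(R=0.1))          # ~0.62 with no bias at all
print(ppv(R=0.1, u=0.3))   # ~0.20 once modest bias is added
```

With low pre-study odds (R), typical of exploratory non-randomized work, a statistically significant finding is already not much better than a coin flip, and even modest bias (u) pushes most published “findings” into false territory — which is the neighborhood of the 80 percent figure quoted above.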
Then he put that model to the test against 49 of the most highly regarded research findings in medicine from the past 13 years. These appeared in the most-cited medical journals, and were themselves the most-cited articles.
Of the 49 articles, 45 claimed to have uncovered effective interventions. Thirty-four of these claims had been retested, and 14 of these, or 41 percent, had been convincingly shown to be wrong or significantly exaggerated. If between a third and a half of the most acclaimed research in medicine was proving untrustworthy, the scope and impact of the problem were undeniable. […]
Of those 45 super-cited studies that Ioannidis focused on, 11 had never been retested. Perhaps worse, Ioannidis found that even when a research error is outed, it typically persists for years or even decades. He looked at three prominent health studies from the 1980s and 1990s that were each later soundly refuted, and discovered that researchers continued to cite the original results as correct more often than as flawed — in one case for at least 12 years after the results were discredited.
So much for scientists re-testing others’ published results, the only definitive way of combating bias. How could nearly 25 percent (11 of the 45) of the most-cited medical studies of the past 13 years never have been re-tested? Astounding.
The upshot for most of us is that it is nearly impossible to tell whether a given piece of research really demonstrates positive, novel and robust results. Putting new research into context requires a detailed analysis of the study, and of all of the study’s precursors, and that is a lengthy and difficult task. Furthermore, Ioannidis’ model suggests that the vast majority of psychological research, perhaps as much as 80 percent of it, is likely wrong. There’s really no silver lining here.
Except maybe that research teams such as Ioannidis’ are on the job, working to show how bias can infiltrate even “gold standard” research.
Change here depends largely on individual researchers learning about these biases and working much harder to ensure their studies are devoid of them. But with no gatekeeper actually checking a study’s design or statistical methods, there’s little incentive to change. The financial and career incentives for researchers to keep publishing as they traditionally have, in order to maintain their academic and professional standing, remain intractably strong.
The full article is worth your time: Lies, Damned Lies, and Medical Science
While not ideal, here’s our primer from a few years ago about how to spot questionable research — Is the Research Any Good?
3 comments
Having worked in the world of medical research for over 40 years, I was impressed by this insightful article. I personally have witnessed how often researchers erred on the side of their benefactors, knowingly or unknowingly, or bent their results to meet publish-or-perish expectations. Frankly, it is difficult to accept the results of any study, or to base actions on any research. Correlational studies are particularly suspect. Thank you for helping enlighten readers.
I have difficulty believing anyone with a comb over like that.
Perhaps I’m TOO skeptical...
How can Professor John Ioannidis be sure his study wasn’t biased? He was “looking for the specific ways in which studies were going wrong.” He and his researchers were looking for biases they could categorize. Could that have made them find biases where there were none?