Meta-analyses are great research tools because they allow researchers to pool the data published across multiple studies and see whether there are stronger (or weaker) effects that no single study has found on its own.
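To make the idea concrete, here is a minimal sketch of fixed-effect, inverse-variance pooling (the basic machinery behind many meta-analyses) in Python; the numbers are made up for illustration and are not from any study discussed here:

```python
# Minimal fixed-effect meta-analysis: weight each study's effect estimate
# by the inverse of its variance, so more precise studies count for more.
# All numbers are hypothetical, for illustration only.
effects = [1.5, 2.4, 0.9, 3.1]      # per-study drug-vs-placebo differences
variances = [0.5, 0.8, 0.4, 1.2]    # per-study sampling variances

weights = [1.0 / v for v in variances]
pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
pooled_se = (1.0 / sum(weights)) ** 0.5

print(f"Pooled effect: {pooled:.2f} (SE {pooled_se:.2f})")
```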
So it’s always interesting when a meta-analysis finds something in the data that the individual studies didn’t quite find.
Today, British researchers announced, unsurprisingly, that antidepressant trial data show the drugs are not as effective as thought. I say “unsurprisingly” because the researchers made a series of decisions that pretty much guaranteed their end result.
First, they went to the original datasets and included unpublished data too. Unpublished data is usually unpublished for a reason: the study was either poorly designed (failing to account for some variable that made the conclusions useless), or it had insignificant findings (e.g., the placebo worked just as well as Drug A). If you include all the studies that found insignificant results, simple averaging says that’s going to bring down the measured efficacy of any drug being examined. There is no drug on the market today that doesn’t have a study (likely unpublished) showing the drug had no significant effect on whatever it was being studied for.
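As a toy illustration of that arithmetic (with entirely made-up effect sizes, not the study’s data), watch what happens to a simple average when the file-drawer studies are added back in:

```python
# Publication bias in miniature: averaging only published results
# overstates the effect compared with averaging everything.
from statistics import mean

published = [2.5, 3.0, 2.2, 2.8]   # differences that made it into journals
unpublished = [0.1, -0.2, 0.4]     # null results left in the file drawer

print(f"Published only: {mean(published):.2f}")                 # ~2.62
print(f"All studies:    {mean(published + unpublished):.2f}")   # ~1.54
```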
Second, the researchers looked at data from a single slice of time (1987–1999). While their findings are true for that time period, many additional studies on the effectiveness of the seven SSRI antidepressants (only four of which made it into this study) have been published in the intervening 19 years. Does that mean the researchers’ findings are invalid? No, it just means that the FDA trial data (the dataset that should be the strongest and make the most compelling argument for a drug’s approval by the FDA) was pretty darned weak when pooled and examined together. It would be interesting if the researchers could run a similar analysis on the 19 years’ worth of data acquired since then and see whether they found similar results (an impossibility, by the way, because nearly all drug companies still don’t release unpublished data on their drugs).
Third, researchers love to argue details and specifics. Is a 1.8-point change on the Hamilton depression scale clinically significant, or do you need a 3-point change? Well, the British National Institute for Clinical Excellence (NICE) published a clinical guideline in 2004 saying you need that 3-point difference, and since those folks are far smarter than I, I agree with them. But of course the U.S.-based FDA doesn’t use British guidelines for determining clinical efficacy and, ultimately, drug approval (although it may consult such guidelines).
Patients taking a placebo, or sugar pill, had nearly an 8-point improvement on the Hamilton depression scale, a clinician-rated measure of a patient’s depression. People taking one of the four studied antidepressants had nearly a 10-point improvement on the same scale. So while people taking an antidepressant improved more than their sugar-pill counterparts on paper, the difference likely wasn’t one they could feel or that others would notice.
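To put those figures side by side with the NICE threshold, here is a back-of-envelope check; the decimal values are inferred from the roughly 1.8-point difference mentioned above, not taken directly from the paper:

```python
# Compare the drug-placebo gap on the Hamilton scale against NICE's
# 3-point bar for clinical significance (2004 guideline).
placebo_improvement = 7.8   # "nearly an 8 point improvement"
drug_improvement = 9.6      # "nearly a 10 point improvement"
nice_threshold = 3.0

gap = drug_improvement - placebo_improvement
verdict = "meets" if gap >= nice_threshold else "falls short of"
print(f"Drug-placebo gap: {gap:.1f} points ({verdict} the NICE bar)")
```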
The upshot of this research is to show how very weak these four antidepressants’ data were, and that the FDA approved these drugs despite that weakness. Perhaps the weakness couldn’t be seen in each study’s data individually; if that’s the case, the FDA should be conducting its own internal meta-analyses on a single drug (or class of drugs) every year, to ensure its decisions remain valid in a more objective and empirical light.
Other Coverage:
- Antidepressants: Meet the New News, Same as the Old News from CL Psych
- British Researcher Gives Thumbs Down To Anti-Depressants from Furious Seasons
10 comments
You make sound points about what gets included in the FDA decision-making process. But one of the things that interested me most about the PLoS paper was the insights that it gave into how the inclusion/exclusion criteria for some of the trials were used to stack the results in favour of the drug manufacturers.
Yes, indeed, that is an interesting point. But I think that’s a strange decision for the original researchers to have made, given that the response time for an SSRI antidepressant averages 2 to 4 weeks in most patients. That means the researchers weren’t even giving patients’ bodies time to feel the therapeutic effects of the medications in the 6 studies where this was done.
A more interesting question, too, might be why it has taken us nearly 20 years to learn about these things. The data have been available, so why has it taken so long to arrive at these findings?
Most importantly, though, I don’t think this study is the “final word” on any particular SSRI’s effectiveness in treating depression. Antidepressants are effective (especially when they’re prescribed as a part of a comprehensive treatment plan), it’s just that their efficacy may not be as great as we were all led to believe…
What are the most relevant clinical trials that have been published since 1999 that you think should be considered when weighing this study?
Give me a few days to look up a few that would be of the greatest interest.
Upon further reflection, and seeing how most media outlets are spinning this story (exactly as the authors framed it), I find the study’s sweeping claim about antidepressant medication in general to be its most troubling sentence.
That claim simply is not true as it stands. The authors’ study did not look at *all* antidepressant medications, nor even all SSRIs. It also looked at data from only one slice of time, with the most recent study dating from nearly 19 years ago.
It’s no wonder the media are taking the study at face value, based upon the authors’ own statements and publishing stories like, “Anti-depression drugs don’t work,” and “Depression drugs no better than placebo.” That’s not what this study actually showed, no matter what the authors say they showed.
This kind of fear-mongering and black-and-white, limited-attention-span writing is infuriating to watch. Of the more than two dozen articles I’ve read about this study, I could not find a single one that actually discussed the study’s limitations. Instead, it’s all talking heads: the authors on one side, the drug reps on the other. What about the “truth”?
I think the reason this sort of data has not come out before is that it is in no one’s interest to do so. It is certainly not in the interests of Big Pharma or of the patients who take these drugs successfully. I can imagine a scenario, however, where articles like this one could be misused by health insurance companies to deny services.
It seems natural that the drug companies don’t publish the negative results. It also seems quite clear that even the studies with positive results are not showing large positive effects from these medications. Nobody’s going from, say, a 2 to an 8 on a 10-point scale of how good they feel. They are generally showing something like the difference between a 2 and a 6, where a placebo takes you from a 2 to a 5. So not much. The same kind of results hold for talking therapies, and this despite the fact that many of the studies involve clinicians essentially rating themselves on how much they think they helped their clients.
I say that in the end it is each individual consumer of therapy and/or medication that must be the judge of what is working for that individual. If it’s not working, they need to try somebody/something else until hopefully they can find something that works.
It is clear from the data that relying on quantitative studies isn’t going to tell any individual what will work in their particular situation.