Frank L. Schmidt, a respected professor and researcher at the University of Iowa, gave a talk at the Association for Psychological Science’s 20th convention on Saturday about how scientific data can lie. Yes, that’s right: empirical data — even data published in respected, peer-reviewed journals — regularly do not tell the truth.
Schmidt’s talk, given in one of the largest ballrooms at the Sheraton Hotel and Towers in Chicago, where the convention is being held, was well attended. Although the presentation was uneven, his main points came across.
One of those points is that the naive, simplest interpretation of multiple datasets is often the most likely to be correct — an instance of Occam’s razor (“the simplest solution is usually the best answer”). Schmidt claims that good research finds the simple structure underlying complex data.
He summarized two main reasons why data can “lie” in research: sampling error and measurement error.
Schmidt’s biggest criticism was directed at psychological science’s fetish for significance testing — that is, statistical significance. He wishes psychology would move far away from its reliance on and fascination with statistical significance, because it is a weak, biased measure that says little about the underlying data or hypothesis.
Schmidt described six myths surrounding significance testing. One myth is that a small p value signals a meaningful result, when in fact it largely reflects the study’s power (chiefly its sample size) rather than the size or importance of the effect. Another is that finding no significance means there is no relationship between the variables, when in truth it may simply mean the study lacked sufficient power.
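To make that second myth concrete, here is a rough Python simulation (the effect size, sample sizes, and threshold are made up for illustration; nothing here comes from Schmidt’s talk). A real difference between the groups exists, yet most small-sample studies fail to reach p < .05 simply because they are underpowered:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)

    # A genuine effect exists: the two populations differ by 0.3 standard deviations.
    true_effect = 0.3
    n_per_group = 20        # small sample -> low statistical power
    n_simulations = 5000

    significant = 0
    for _ in range(n_simulations):
        control = rng.normal(0.0, 1.0, n_per_group)
        treatment = rng.normal(true_effect, 1.0, n_per_group)
        _, p = stats.ttest_ind(treatment, control)
        if p < 0.05:
            significant += 1

    # With 20 subjects per group, only a small fraction of simulated studies
    # reach p < .05, even though the effect is genuinely there.
    print(f"Estimated power at n={n_per_group}: {significant / n_simulations:.2f}")

In a setup like this, a “non-significant” result says far more about the sample size than about whether the relationship exists.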
Schmidt’s solutions are simple — report effect sizes (point estimates) and confidence intervals instead, and de-emphasize significance testing altogether.
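As a rough sketch of what that kind of reporting might look like (a point estimate of effect size, Cohen’s d, with an approximate 95% confidence interval), the snippet below uses a standard large-sample formula for the standard error of d; the data are simulated and purely illustrative:

    import numpy as np

    def cohens_d_with_ci(group1, group2, z=1.96):
        """Cohen's d for two independent groups, with an approximate 95% CI."""
        n1, n2 = len(group1), len(group2)
        # Pooled standard deviation across the two groups
        pooled_var = ((n1 - 1) * np.var(group1, ddof=1) +
                      (n2 - 1) * np.var(group2, ddof=1)) / (n1 + n2 - 2)
        d = (np.mean(group1) - np.mean(group2)) / np.sqrt(pooled_var)
        # Large-sample approximation of the standard error of d
        se = np.sqrt((n1 + n2) / (n1 * n2) + d ** 2 / (2 * (n1 + n2)))
        return d, (d - z * se, d + z * se)

    rng = np.random.default_rng(1)
    treatment = rng.normal(0.4, 1.0, 50)   # simulated scores, purely illustrative
    control = rng.normal(0.0, 1.0, 50)
    d, (low, high) = cohens_d_with_ci(treatment, control)
    print(f"d = {d:.2f}, 95% CI [{low:.2f}, {high:.2f}]")

Reported this way, a reader sees at once how large the effect is estimated to be and how uncertain that estimate is, which is exactly the information a bare p value hides.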
He ended by lambasting the newfound emphasis on meta-analyses in psychological research, specifically calling out the journal Psychological Bulletin. In a yet-to-be-published study, he and other researchers examined all the meta-analyses published in Psychological Bulletin from 1978 to 2006 — 199 studies altogether.
The researchers found that 65% of the studies examined used a “fixed effects” model for their meta-analysis. Schmidt claimed that fixed effects models underestimate the relationships in the data (by as much as 50%) and lead researchers to overestimate how precise their estimates are (that is, to understate the error around them). Instead, Schmidt prefers “random effects” models, which better account for variation between studies.
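The difference is easy to see in miniature. In the sketch below (the per-study effect sizes and variances are made up, not taken from the Psychological Bulletin analysis), the fixed effects estimate weights each study only by its sampling precision, while a random effects estimate (here the common DerSimonian-Laird method) adds an estimate of between-study variance, producing a wider, more honest confidence interval:

    import numpy as np

    # Illustrative per-study effect sizes and sampling variances (not real data).
    effects = np.array([0.10, 0.45, 0.30, 0.60, 0.05])
    variances = np.array([0.02, 0.03, 0.01, 0.04, 0.02])

    # Fixed effects: weight each study only by its sampling precision.
    w_fixed = 1.0 / variances
    fixed_mean = np.sum(w_fixed * effects) / np.sum(w_fixed)
    fixed_se = np.sqrt(1.0 / np.sum(w_fixed))

    # Random effects (DerSimonian-Laird): estimate the between-study variance
    # (tau^2) and add it to each study's sampling variance before weighting.
    q = np.sum(w_fixed * (effects - fixed_mean) ** 2)
    df = len(effects) - 1
    c = np.sum(w_fixed) - np.sum(w_fixed ** 2) / np.sum(w_fixed)
    tau2 = max(0.0, (q - df) / c)
    w_random = 1.0 / (variances + tau2)
    random_mean = np.sum(w_random * effects) / np.sum(w_random)
    random_se = np.sqrt(1.0 / np.sum(w_random))

    print(f"Fixed effects:  mean {fixed_mean:.2f}, SE {fixed_se:.3f}")
    print(f"Random effects: mean {random_mean:.2f}, SE {random_se:.3f}")

The random effects standard error comes out noticeably larger, which is the point: when studies genuinely vary, a fixed effects model claims more precision than the data support.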
He also noted that in 90% of the studies examined, no corrections were made for measurement error — one of the major reasons, he says, that data can “lie” in psychological research.
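One standard way to make such a correction, and an approach closely associated with Schmidt and Hunter’s own meta-analysis methods, is the classic correction for attenuation, which adjusts an observed correlation upward to account for unreliability in the measures. A minimal sketch, with made-up reliability values:

    import math

    def correct_for_attenuation(r_observed, reliability_x, reliability_y):
        """Estimate the true-score correlation from an observed correlation
        and the reliabilities of the two measures."""
        return r_observed / math.sqrt(reliability_x * reliability_y)

    # Illustrative numbers: an observed correlation of .30 between measures
    # with reliabilities of .70 and .80 implies a true-score correlation of ~.40.
    print(correct_for_attenuation(0.30, 0.70, 0.80))

Skipping this step means systematically reporting relationships that look weaker than they really are.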
Given this analysis, Schmidt is suggesting that a great many meta-analyses published in peer-reviewed journals reach incorrect or faulty conclusions.
Sadly, this state of affairs is unlikely to change any time soon. While many psychological journals have adopted stricter standards for publication of research that better adhere to Schmidt’s suggestions, many still do not and seem to have no intention of changing.
What this means for the average person is that you cannot trust a study simply because it was published in a peer-reviewed journal and then publicized across the media as “fact” via a press release. Such “facts” are malleable, changeable, and sometimes faulty. Only through careful reading and analysis of such studies can we understand the value of the data they present.
Comments
Hear, hear… I agree. Don’t believe EVERYTHING you read, especially in the world of psychology and psychiatry.
Yes, it is important to allow the data to speak for themselves.
However, different outcomes can be reached when different methodologies are employed. For instance, I was just reading a few lesioning studies of the rat medial prefrontal cortex. One study was rather inexact in its lesioning, i.e., extending the lesion from the dorsal area (dACC) into the ventral region (the prelimbic and infralimbic regions). The other study was more exact in its technique, i.e., limiting the lesion to the ventral mPFC (again, the prelimbic and infralimbic regions). As predicted, the two studies produced conflicting findings on the behavioral and neuroendocrine effects during fear conditioning and extinction.
So before one can let the data speak for itself, one needs to be well versed on the methodology supporting the given findings.