Most scientific papers are probably wrong
PostPosted: Fri Nov 04, 2005 2:17 pm
Found this interesting, especially now that I'm taking a closer look at statistics....
*******************************************
Most scientific papers are probably wrong - 30 Aug 05, Kurt Kleiner @ NewScientist.com
http://www.newscientist.com/article.ns?id=dn7915
Most published scientific research papers are wrong, according to a new analysis. Assuming that the new paper is itself correct, problems with experimental and statistical methods mean that there is less than a 50% chance that the results of any randomly chosen scientific paper are true.
John Ioannidis, an epidemiologist at the University of Ioannina School of Medicine in Greece, says that small sample sizes, poor study design, researcher bias, selective reporting, and other problems combine to make most research findings false. But even large, well-designed studies are not always right, meaning that scientists and the public have to be wary of reported findings.
"We should accept that most research findings will be refuted. Some will be replicated and validated. The replication process is more important than the first discovery," Ioannidis says.
In the paper, Ioannidis does not show that any particular findings are false. Instead, he shows statistically how the many obstacles to getting research findings right combine to make most published research wrong.
Massaged conclusions
Traditionally a study is said to be "statistically significant" if there is less than a 1 in 20 chance that a result at least as strong could arise by pure chance. But in a complicated field where there are many potential hypotheses to sift through - such as whether a particular gene influences a particular disease - it is easy to reach false conclusions using this standard. If you test 20 hypotheses that are all false, one of them is still likely to appear significant, on average.
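A quick simulation makes that arithmetic concrete (my own rough sketch in Python, not something from the article): if all 20 hypotheses are genuinely false, about one will still clear the 0.05 bar on average, and the chance of at least one false positive is roughly 64% (1 - 0.95^20).

[code]
import numpy as np

rng = np.random.default_rng(0)
alpha, n_tests, n_sims = 0.05, 20, 100_000

# Every hypothesis is truly null, so any "significant" result is a false positive.
# Under the null, p-values are uniform on [0, 1].
p_values = rng.uniform(size=(n_sims, n_tests))
false_positives = (p_values < alpha).sum(axis=1)

print("mean false positives per 20 tests:", false_positives.mean())     # ~1.0
print("P(at least one false positive):", (false_positives > 0).mean())  # ~0.64
[/code]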
Odds get even worse for studies that are too small, studies that find small effects (for example, a drug that works for only 10% of patients), or studies where the protocol and endpoints are poorly defined, allowing researchers to massage their conclusions after the fact.
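To get a feel for how these factors combine - and where a "less than 50% chance of being true" figure can come from - here is a back-of-the-envelope Bayes' rule calculation (my own illustration with made-up numbers, not the notation from Ioannidis's paper): the chance that a statistically significant finding is real depends on how plausible the hypothesis was to begin with, on the study's power, and on the significance threshold.

[code]
def ppv(prior, power, alpha=0.05):
    """Probability that a 'significant' finding is true, given the prior
    probability that the tested relationship is real, the study's power,
    and the significance threshold (plain Bayes' rule; illustrative only)."""
    true_pos = power * prior          # real effects that reach significance
    false_pos = alpha * (1 - prior)   # null effects that reach significance anyway
    return true_pos / (true_pos + false_pos)

# A hot field sifting many candidate genes: maybe 1 in 10 hypotheses is real,
# and small studies give only 20% power.
print(ppv(prior=0.10, power=0.20))  # ~0.31 -- most "positive" findings are false
# A well-powered trial of a plausible hypothesis fares much better.
print(ppv(prior=0.50, power=0.80))  # ~0.94
[/code]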
Surprisingly, Ioannidis says another predictor of false findings is a field being "hot", with many teams feeling pressure to beat the others to statistically significant findings.
But Solomon Snyder, senior editor at the Proceedings of the National Academy of Sciences, and a neuroscientist at Johns Hopkins Medical School in Baltimore, US, says most working scientists understand the limitations of published research.
"When I read the literature, I'm not reading it to find proof like a textbook. I'm reading to get ideas. So even if something is wrong with the paper, if they have the kernel of a novel idea, that's something to think about," he says.
Journal reference: Public Library of Science Medicine (DOI: 10.1371/journal.pmed.0020124)
*******************************************