No doubt partly prompted by a recent spate of academic fraud, Science magazine published results from a study that set out to replicate 100 published psychology studies. An "international team of experts," however, could "reproduce only 36% of original findings." (News coverage here.) While I'm sure many in the psychology field will seek to explain these findings, an inability to replicate 64% of studies published in "top" psychology journals is, at the very least, jarring.
Update: From the 8.27.2015 NYT (here).
Update 2: In a recent editorial in Psychological Science (here), D. Stephen Lindsay responds to the replication effort, published in Science (described above), that signaled serious problems. As Gelman (Columbia--Statistics) notes, "Lindsay talks about replication problems and how researchers should do better. He warns about p-hacking, noise, and the difference between significance and non-significance not being itself statistically significant." While it is notable to see a leading psychology journal recognize the problem and undertake concrete editorial policy changes to address it, this is likely only a (necessary) first step. To be sure, the "replication issue" is not confined to psychology, and it warrants continued and sustained scholarly attention.
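Gelman's last point, that the difference between "significant" and "not significant" is not itself statistically significant, is easy to see with a small numerical sketch. The effect sizes and standard errors below are hypothetical, chosen only for illustration:

```python
import math

def two_sided_p(estimate, se):
    """Two-sided p-value for a z-test of estimate / se against zero."""
    z = abs(estimate / se)
    return math.erfc(z / math.sqrt(2))

# Hypothetical effect estimates from two independent studies,
# each measured with a standard error of 10.
effect_a, se_a = 25.0, 10.0
effect_b, se_b = 10.0, 10.0

print(f"Study A: p = {two_sided_p(effect_a, se_a):.3f}")  # ~0.012 -> "significant"
print(f"Study B: p = {two_sided_p(effect_b, se_b):.3f}")  # ~0.317 -> "not significant"

# A direct test of the *difference* between the two estimates
# (standard errors add in quadrature for independent estimates):
diff = effect_a - effect_b
se_diff = math.sqrt(se_a**2 + se_b**2)
print(f"Difference: p = {two_sided_p(diff, se_diff):.3f}")  # ~0.289 -> not significant
```

Study A clears the conventional 0.05 threshold and Study B does not, yet a direct test finds no significant difference between the two estimates. Labeling one a "finding" and the other a "failure to replicate" overstates what the data can actually distinguish, which is one reason simple significance tallies can mislead in replication debates.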
Update 3: From the 3.4.2016 NYT (here), reporting on a paper that critiques the 2015 Science paper challenging the replicability of 100 published psychology studies. Additional commentary from Gelman (Columbia--Statistics) here, here, and here.
Update 4: Marty Wells (Cornell--Statistics) notes that, according to a recent report in Science, replication problems have expanded into experimental economics.