A recent post at Andrew Gelman's blog (Columbia, Statistics) (here) dives into fascinating, timely, and increasingly contested terrain involving "replication" studies. A quick summary of the salient background follows.
In a 2017 paper (here), Ballarini & Sloman claimed that they "failed to replicate" earlier findings reported by Dan Kahan (Yale), Ellen Peters (Ohio State) et al. (here). However, at various points in subsequent email exchanges among the authors, published in Gelman's blog post, Ballarini & Sloman appear to have either (a) "walked back" their published critique of Kahan, Peters et al., or, at the very least, (b) acknowledged that the wording of their published claim ("failed to replicate") may have been inapt or inexact. Because the private email exchanges (and, perhaps, "clarifications") were insufficient to fully address Kahan's concerns, Kahan & Peters (2017) ultimately elected to publish a formal "reply" paper of their own to make their case against Ballarini & Sloman's 2017 claims about "replication" (or, more specifically, the lack thereof).
Setting aside the particulars of this specific case, the incident underscores a broader point: academic research needs greater clarity and precision about what it means to "replicate" a finding. This is especially true as various academic fields come to grips with a "replication crisis."