One of Andrew Gelman's persistent "rants" (e.g., here and, more recently, here) involves his skepticism about regression discontinuity. However one lands on Gelman's critiques, regression discontinuity designs remain a staple of many graduate economics programs and increasingly appear in empirical journals across a range of academic fields. Moreover, as the comments that Gelman's posts ignited make clear, reasonable minds continue to differ on regression discontinuity's efficacy (and proper use).
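For readers less familiar with the method under debate, a minimal sketch may help. The code below is not from Gelman's posts; it simulates a sharp regression discontinuity design with a hypothetical cutoff at zero and a true jump of 0.5, then estimates the treatment effect as the gap between local linear fits on either side of the cutoff.

```python
# Illustrative sketch of a sharp regression discontinuity design.
# All data here are simulated; the cutoff, bandwidth, and effect
# size are arbitrary choices for illustration only.
import numpy as np

rng = np.random.default_rng(0)
n = 2000
x = rng.uniform(-1, 1, n)            # running variable, cutoff at 0
treated = (x >= 0).astype(float)     # sharp design: treatment assigned by cutoff
true_effect = 0.5
y = 1.0 + 0.8 * x + true_effect * treated + rng.normal(0, 0.3, n)

# Fit a separate line on each side within a bandwidth around the cutoff,
# then take the difference of the two fitted values at the cutoff.
h = 0.25
left = (x < 0) & (x > -h)
right = (x >= 0) & (x < h)
fit_left = np.polyfit(x[left], y[left], 1)
fit_right = np.polyfit(x[right], y[right], 1)
rd_estimate = np.polyval(fit_right, 0.0) - np.polyval(fit_left, 0.0)
print(f"estimated jump at cutoff: {rd_estimate:.2f}")
```

Part of the contested territory is exactly the kind of choice this toy example glosses over: the bandwidth, the functional form on each side, and whether anything else changes discontinuously at the cutoff.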
Nested in Gelman's more recent post on this issue, however, is a more important, far-ranging, and, frankly, less contested point. Gelman notes: "To put it another way: If you want to make a big claim and convince me that you have evidence for it, I need that trail of breadcrumbs connecting data, model, and theory." Say what you will about the enduring contest over regression discontinuity and its potential to fuel overconfidence in a paper's causal identification strategy; Gelman's larger (background) point, that researchers should avoid "making a strong claim that doesn’t make sense, is not supported by the data, and is unlikely to replicate," is an obvious one that warrants attention.