At the recent American Law and Economics Association Annual Meeting held in Berkeley, John Donohue and Justin Wolfers presented a compelling analysis of the deterrence effect of the death penalty -- or rather, of the lack of evidence for one. By comparing execution rates with homicide rates, exploiting the natural experiment of the post-Furman moratorium, and contrasting U.S. trends with Canadian ones, Donohue and Wolfers cast considerable doubt on whether the death penalty has any deterrence effect. But they look only at whether the death penalty deters crime, not at whether it affects other criminal law matters, such as encouraging defendants in murder trials to accept plea bargains with harsher terms than they otherwise would, as Ilyana Kuziemko recently showed.
But more interesting than Donohue and Wolfers's substantive case -- they ultimately conclude that it is entirely unclear whether the death penalty causes more or fewer murders -- is their methodology. They replicate the analyses of a handful of central studies claiming a deterrence effect and run a variety of robustness checks on each. Their paper thus provides a comprehensive re-examination of the primary data on the deterrence effect of the death penalty.
The results are startling and concerning. Donohue and Wolfers suggest that many studies report only the results produced by the most supportive specifications. Running a full range of robustness checks on each study shows that the results vary wildly. Similarly, there appears to be a self-serving selectivity in the time periods examined: the considerable variance across historical periods often yields opposing results.
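To see how much specification choices can matter, here is a minimal sketch in Python of such a robustness-check battery: the same regression re-estimated over different control sets and sample windows, collecting the coefficient of interest each time. The variable names and the synthetic panel are purely illustrative assumptions, not Donohue and Wolfers's actual data or specifications.

```python
# A sketch of a specification battery: estimate the same homicide
# regression under several control sets and sample windows and collect
# the coefficient on the execution-rate variable. All names and the
# synthetic panel are hypothetical.
import itertools
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 50 * 30  # 50 states x 30 years of made-up panel data
df = pd.DataFrame({
    "year": np.tile(np.arange(1970, 2000), 50),
    "homicide_rate": rng.normal(7, 2, n),
    "execution_rate": rng.exponential(0.1, n),
    "unemployment": rng.normal(6, 1.5, n),
    "police_per_capita": rng.normal(2.5, 0.5, n),
})

control_sets = [
    [],                                     # no controls
    ["unemployment"],                       # economic controls only
    ["unemployment", "police_per_capita"],  # fuller controls
]
sample_windows = [(1970, 1999), (1977, 1996), (1984, 1999)]

rows = []
for controls, (start, end) in itertools.product(control_sets, sample_windows):
    sub = df[(df["year"] >= start) & (df["year"] <= end)]
    rhs = " + ".join(["execution_rate"] + controls)
    fit = smf.ols(f"homicide_rate ~ {rhs}", data=sub).fit()
    rows.append({
        "controls": ", ".join(controls) or "none",
        "window": f"{start}-{end}",
        "coef": round(fit.params["execution_rate"], 3),
        "p": round(fit.pvalues["execution_rate"], 3),
    })

print(pd.DataFrame(rows))  # the spread across rows, not any one row, is the point
```

Reporting only the most favorable row of such a table, rather than the whole table, is precisely the practice Donohue and Wolfers criticize.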
Some of these results are unsurprising. The great difficulty for any empirical test of the deterrence effect of the death penalty is that any such work necessarily relies on a data set made up of a very strange collection of cases: those few that reach completion -- that is, result in an execution -- despite the labyrinthine Supreme Court jurisprudence on the topic. The data set is necessarily skewed to begin with.
Donohue and Wolfers go further: their results suggest that some of the authors they analyze may have introduced their own distortions into the empirics. Some studies, once corrected for coding errors, collinearity, and the like, actually showed the opposite of the result claimed.
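Collinearity of this sort is detectable with standard diagnostics. As a minimal illustration, the sketch below computes variance inflation factors on hypothetical regressors -- the column names and data are assumptions for demonstration, not anything from the studies at issue. A high VIF (a common rule of thumb is above 10) flags a regressor whose estimated effect is unstable because it moves nearly in lockstep with another.

```python
# Variance inflation factors (VIFs) as a collinearity diagnostic.
# The columns below are hypothetical; execution_rate is constructed to
# track incarceration closely, so its VIF comes out large.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

rng = np.random.default_rng(1)
n = 500
incarceration = rng.normal(400, 80, n)
execution_rate = 0.0002 * incarceration + rng.normal(0, 0.002, n)
unemployment = rng.normal(6, 1.5, n)

X = sm.add_constant(pd.DataFrame({
    "incarceration": incarceration,
    "execution_rate": execution_rate,
    "unemployment": unemployment,
}))
for i, col in enumerate(X.columns):
    if col != "const":
        print(f"{col}: VIF = {variance_inflation_factor(X.values, i):.1f}")
```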
Leaving aside the controversies concerning any intent to mislead, this comprehensive replication of others’ data is a largely new addition to empirical legal studies. The approach is borrowed from the medical sciences, where such aggregation and re-examination of prior findings is more common. It is very time-intensive, a cost justified in medicine by the greater danger of spurious conclusions.
This approach may have come late to the law because of the somewhat lower stakes, but it is an exciting development. The one unintended downside is that scholars still antagonistic to empirical scholarship in the legal field have pounced on this demonstration of the apparent malleability of statistical analysis as grounds to reject the empirical endeavor altogether. That reaction may be difficult to overcome, given that there is unlikely to be a publishing market for comprehensive replications whose results merely confirm the pre-existing orthodoxy. This is not to discourage the practice, merely to note that it carries dangers beyond some bruised academic egos.