First, I'd like to thank Jason for inviting me to be a guest blogger this week. I've certainly lost more hours than I can count to reading blogs, and I'm looking forward to finally being part of the problem myself.
I thought I would start by asking a question that's been puzzling me for a while now: how exactly do we know what we know? Or, to put it a different way, how can we make clear to non-empiricists what it is that we actually know, or at least what we know not to be so?
But rather than speaking in near-koans, let me give an example. I spend my time digging around in criminal law and sentencing issues, so I'm constantly coming across debates over whether changes in criminal law can in fact deter would-be offenders. Putting aside debates over the death penalty, I think almost all economists would agree that it can. And they would have a wealth of papers to point to: not just the extensive work of Steve Levitt, but also that of people like Joanna Shepherd, John Donohue, and many others.
But too often, in law review articles that discuss deterrence, I see the following move: several articles that purport to show that changes in criminal law deter are paired with several that claim to show the opposite, allowing the author to assert that the evidence that criminal law can deter is weak at best. Yet these pro-and-con matchups are so often pairings of apples and oranges: at least to my eye, the deterrence-works papers are frequently the systematically sounder ones when it comes to methodology and reliability (or many of the deterrence-doesn't-work papers are death-penalty-specific, even though they are cited in support of a more general claim about deterrence).
In other words, empirical work has its own close corollary to Newton's Third Law: for every empirical claim, there is an exactly opposite claim. But the corollary is not exact, because not all opposing claims are equal. Which brings me to my question for today: how can we show non-empiricists, simply and without bald assertions of authority ("deterrence is correct because my empiricists have better credentials than yours do!"), which results are more reliable than others? I've learned how to separate wheat from chaff in the criminal law context because I've read enough empirical work in the field to know what particular problems to look for, and because I have (I like to think) a decent sense of how an author should correctly control for them. But most non-empiricists in the field might not know what problems to look for, and even if they do (say, they know always to look for self-selection of adopting states or endogenous timing of adoption), they likely can't tell whether the author has adjusted for the problem well. This is by no means a criticism, but it is a problem, especially as policy-based arguments become more and more important.
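To make the endogenous-timing worry concrete, here is a minimal toy simulation (my own hypothetical sketch, not the design of any particular paper): suppose states adopt a law only when their crime trend is rising. A naive before-and-after comparison among the adopting states then mixes the law's true effect with the very upward trend that triggered adoption, and so understates the deterrent effect.

```python
import random
import statistics

random.seed(0)

TRUE_EFFECT = -10.0  # assume the law truly cuts crime by 10 units
N_STATES = 1000

naive_diffs = []
for _ in range(N_STATES):
    # Each state has an underlying crime trend; some rise, some fall.
    trend = random.gauss(0, 5)
    before = 100 + trend  # crime level just before the adoption decision

    # Endogenous timing: only states with rising crime adopt the law.
    if trend > 0:
        # The pre-existing trend continues, and the law's effect kicks in.
        after = before + trend + TRUE_EFFECT
        naive_diffs.append(after - before)

# Naive before/after estimate, averaged over adopting states only.
naive_estimate = statistics.mean(naive_diffs)
print(f"true effect: {TRUE_EFFECT}")
print(f"naive before/after estimate: {naive_estimate:.1f}")
```

Under these assumptions the naive estimate comes out noticeably closer to zero than the true effect of -10, because the rising trend that caused adoption masks part of the law's impact. A reader who doesn't know to ask why the states adopted when they did has no way to spot the bias.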
In medicine, one solution has been evidence-based medicine: a set of objectively established standards that all research must meet; if a study doesn't meet them, it can be objectively tagged as suspect (in this way, it is perhaps even better than peer review, which relies on the subjective standards of experts). Can such standards be created for social science research? And if so, what would they look like? As more and more empirical work enters the literature (and as increasingly user-friendly statistical packages allow more and more people to try their hands at it), the need to identify for non-empiricists (or empiricists from other areas of legal research) which results are reliable and which are not becomes increasingly important. And I'm really interested in what options we have available.