First, I'd like to thank Jason for inviting me to be a guest blogger this week. I've certainly lost more hours than I can count to reading blogs, and I'm looking forward to finally being part of the problem myself.
I thought I would start by asking a question that's been puzzling me for a while now: how exactly do we know what we know? Or, perhaps put a different way, how can we make it clear to non-empiricists what it is that we actually know, or what we at least know not to be so?
But rather than speaking in near-koans, let me give an example. I spend my time digging around in criminal law and sentencing issues, so I'm constantly coming across debates over whether changes in criminal law can in fact deter (possible) offenders. Putting aside debates over the death penalty, I think almost all economists would agree that it does. And they would have a wealth of papers to point to: not just the extensive work of Steve Levitt, but also of people like Joanna Shepherd, John Donahue, and many others.
But too often, in law review articles that discuss deterrence, I see the following: several articles that purport to show that changes in criminal law deter are paired with several articles claiming to show the opposite, allowing the author to assert that the evidence that criminal law can deter is, at best, weak. But so often these pro-and-con matchups are pairings of apples and oranges: at least to me, the deterrence-works papers are often systematically sounder in methodology and reliability (or many of the deterrence-doesn't-work papers are death-penalty-specific, even though they are cited in support of a more general claim about deterrence).
In other words, empirical work has its own close corollary to Newton's Third Law: for every empirical claim, there is an exactly opposite claim. But it's not a complete corollary, because not all opposing claims are equal. Which brings me to my question for today: how can we show non-empiricists, simply and without bald assertions of authority ("deterrence is correct because my empiricists have better credentials than yours do!"), which results are more reliable than others? I've learned how to separate wheat from chaff in the criminal law context because I've read enough empirical work in the field to know what particular problems to look for, and because I have (I like to think) a decent sense of how an author should correctly control for them. But most non-empiricists in the field might not know what problems to look for, and even if they do (say, they know always to look for self-selection of adopting states or endogenous timing of adoption), they likely can't tell whether the author has adjusted for the problem well. This is by no means a criticism, but it is a problem, especially as policy-based arguments become more and more important.
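To make that last point a bit more concrete, here is a minimal sketch, on entirely made-up data, of the sort of two-way fixed-effects specification an author might use to adjust for self-selection of adopting states and for adoption timing. Nothing here comes from any of the studies mentioned above; the state names, numbers, and effect sizes are all invented for illustration.

```python
# A minimal sketch (hypothetical data, not from any study discussed in this post) of a
# two-way fixed-effects regression of the kind used to adjust for policy adoption being
# non-random: state fixed effects absorb stable differences between adopting and
# non-adopting states, year fixed effects absorb nationwide trends, and standard errors
# are clustered by state.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

# Fake state-year panel: 20 states over 15 years; states 0-9 adopt the law in 1997.
rows = []
for s in range(20):
    adopts = s < 10
    state_effect = rng.normal(0, 2)          # stable cross-state differences
    for y in range(1990, 2005):
        treated = int(adopts and y >= 1997)
        crime = 50 + state_effect - 0.5 * (y - 1990) - 3.0 * treated + rng.normal(0, 1)
        rows.append({"state": s, "year": y, "treated": treated, "crime_rate": crime})
panel = pd.DataFrame(rows)

# Two-way fixed effects: the policy dummy plus state and year dummies,
# with standard errors clustered at the state level.
model = smf.ols("crime_rate ~ treated + C(state) + C(year)", data=panel).fit(
    cov_type="cluster", cov_kwds={"groups": panel["state"]}
)
print(model.params["treated"], model.bse["treated"])
```

The sketch only shows what such controls look like when they are present; whether they are adequate in any given paper is exactly the harder judgment that non-empiricists struggle to make.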
In medicine, the solution has been to adopt evidence-based standards: a set of objectively established criteria that all research must meet; if a study falls short, it can be objectively tagged as suspect (in this way, it's perhaps even better than peer review, which relies on the subjective standards of experts). Can such standards be created for social science research? And if so, what would they look like? As more and more empirical work enters the literature (and as increasingly user-friendly statistical packages allow more and more people to try their hands at it), the need to identify for non-empiricists (or empiricists from other areas of legal research) which results are reliable and which are not becomes increasingly important. And I'm really interested in what options we have available.
This is a question that I and a team of other researchers have been investigating for a while. Our answer is: cultural cognition. As a result of various overlapping psychological and social mechanisms, individuals conform their view of the "facts" on disputed policies to their cultural evaluations of the behavior or laws in question. We have data, from a national study of 1,800 persons, that relates this phenomenon to differences of views on gun control, the death penalty, environmental regulation, etc. The data are summarized and papers uploaded at http://research.yale.edu/culturalcognition. (One paper that supplies a good overview is Cultural Cognition & Public Policy, http://papers.ssrn.com/sol3/papers.cfm?abstract_id=746508.)
BTW, we used lay persons in our studies, but Hank Jenkins-Smith, a political scientist at the Bush Public Policy School at Texas A&M, has gotten similar results looking at scientists evaluating environmental policies.
Anyway-- I'll be guest blogging in June to present more information on this line of research.
Posted by: Dan Kahan | 08 April 2006 at 11:02 AM
In an article a couple of years ago I suggested that in many instances presenting meta-analyses is one way to address - not solve - the "dueling data" issue. When a quantitative synthesis of a particular body of research is presented and analyzed, it helps give a bigger-picture view of that research and the accompanying issues.
More important, I think, it reduces the opportunity to cherry-pick among studies that support your viewpoint, encouraging academics and policy-makers to "speak the same language" - the language of what the overall research synthesis shows.
Finally, moderator analysis can help identify what it is about different studies in a particular field that leads them to different conclusions.
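To make the synthesis idea concrete, here is a toy sketch (with entirely made-up effect sizes and standard errors, not drawn from any real deterrence studies) of what random-effects pooling does; a meta-regression of study estimates on study characteristics would then be the moderator analysis mentioned above.

```python
# Toy illustration of DerSimonian-Laird random-effects pooling.
# The effect sizes and standard errors below are hypothetical, for illustration only.
import numpy as np

effects = np.array([-0.12, -0.05, 0.02, -0.20, -0.08])  # study-level estimates
ses     = np.array([ 0.04,  0.06, 0.05,  0.10,  0.03])  # their standard errors
v = ses ** 2

# Fixed-effect weights and pooled estimate.
w = 1.0 / v
pooled_fe = np.sum(w * effects) / np.sum(w)

# Between-study heterogeneity (tau^2) via the DerSimonian-Laird estimator.
q = np.sum(w * (effects - pooled_fe) ** 2)
df = len(effects) - 1
c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
tau2 = max(0.0, (q - df) / c)

# Random-effects pooled estimate and its standard error.
w_re = 1.0 / (v + tau2)
pooled_re = np.sum(w_re * effects) / np.sum(w_re)
se_re = np.sqrt(1.0 / np.sum(w_re))
print(f"pooled effect = {pooled_re:.3f} (SE {se_re:.3f}), tau^2 = {tau2:.4f}")
```

The pooled estimate and the heterogeneity statistic together are what let readers see the overall picture rather than any single cherry-picked study.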
Posted by: Jeremy A. Blumenthal | 04 April 2006 at 09:54 AM
I really should be working on my Midwest paper right now, but...
I completely agree with Chris. When I was "guest blogging" a while back, I noted that I am extremely uncomfortable with normatives -- this is exactly why. Once our inquiry is aimed at "proving" or "disproving" a notion or showing that some kind of behavior is "good" or "bad," we lose the science. In science, we need to formulate theories and hypotheses and test them and then see what happens and what normative implications might follow. We have to be, in other words, *agnostic* as to what we might find. If we can do that, and be honest about reporting it (b/c, as we all know, statistics lie -- or, at least, can be forced to lie) we'll be closer to being able to really "know" something about our subject. I do think the peer review process helps separate the wheat from the chaff, even if one does sometimes draw "bad" reviewers.
Okay, back to work!
Posted by: Sara Benesh | 04 April 2006 at 12:34 AM
Laura hit on something that, to my mind, is the most frustrating thing about ELS: the tension between (American) legal and scientific cultures. Simply: legal academics typically take an adversarial approach to scholarly inquiry that is at best inconsistent with (and at worst in direct opposition to) the more positivist/Popperian/whatever "scientific" culture in most of the social sciences. I saw it regularly at NSF, where law faculty would submit grant proposals that (to paraphrase) started, "If you give me this grant, I will prove that...".
An adversarial perspective on inquiry and scholarship is fine. Better than fine: It is necessary and valuable, both for the pedagogical function law schools play and as one means of getting at normative and other kinds of questions that are not especially amenable to scientific thinking. At the same time, I'd suggest that the biggest challenge the ELS movement (are we a movement? I don't know...) faces is to reconcile these two complementary-yet-in-tension ways of thinking about legal scholarship.
Posted by: Chris Zorn | 03 April 2006 at 07:48 PM
These are excellent points. I think the review process (or lack thereof) in law reviews is partly to blame. When student-editors want to do a symposium on some topic, they think they have to have "balance" (not a bad idea when you are unable to separate good from bad scholarship).
Seeking balance can result in the clusters of articles you describe -- some on each side with little or no analysis of the quality of the different studies. This makes it easy for non-empiricists to say, "see, we just don't (or can't) know" and then move the argument to the normative. Eventually in law, we have to get to the normative, but it can be done on the basis of empiricism.
Don't we have the standards in each discipline? Maybe the trick is teaching them to legal academics so they can better evaluate research? And, convincing them the standards are meaningful for figuring out some empirical fact about the world.
We are used to taking any evidence that helps our argument, twisting it to our purpose, and moving on. We have to be willing to say, "hey, the (good) science proves me wrong." Or, in the alternative, "this science supports my point but it is done so badly that it is not credible evidence of my point."
So, in addition to actually learning the standards (which is hard, but doable), there might need to be some culture shift in how we argue in law -- it is very different from how we argue in our disciplines (which is much more empirically based). Such a culture shift, obviously, is much harder than simply teaching legal academics about standard deviations and margins of error.
Posted by: Laura Beth Nielsen | 03 April 2006 at 11:20 AM