Picking up on the methodological contribution I think psychology can make to ELS, it seems that psychology emphasizes experimental work to a greater degree than current ELS does. Discussing the excellent methodology workshop that Lee Epstein and Andrew Martin run, someone recently joked that it covers 95% of all the methodology ELS scholars need to know. I’d quibble with that if, as I understand (without having attended), there is little coverage of experimental design and analysis (e.g., analyses of variance, or ANOVAs). To be very clear, all I mean is that the experimental approach could make up a larger proportion of ELS work than it currently does.
Experimental research, of course, can contribute greatly to our understanding of a number of topics and phenomena (jury understanding of sentencing instructions; eyewitness accuracy; racial discrimination by judges and jurors; effects of policy interventions; the differential influence of legal procedures both in and out of the courtroom; etc.). Researchers can manipulate specific variables and test their causal effects, tweaking particular aspects to parse out specific effects with a precision that multiple regression may not achieve. And compared with working from existing databases, experiments can be less constrained by data limitations and allow more proactive data collection.
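To make the manipulate-and-test logic concrete, here is a minimal sketch of a randomized experiment analyzed with a one-way ANOVA (the design mentioned above). The scenario — mock jurors rating comprehension of sentencing instructions under three hypothetical rewordings — and every number in it are invented for illustration only.

```python
# Hypothetical simulated experiment: random assignment to one of three
# instruction wordings, then a one-way ANOVA on comprehension scores.
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(0)
n = 100  # participants per condition (invented)

# Simulated comprehension scores (0-100 scale); the "plain language"
# condition is given a built-in true advantage of about 8 points.
original = rng.normal(60, 12, n)
reordered = rng.normal(62, 12, n)
plain = rng.normal(68, 12, n)

# Because assignment is random, a between-group difference can be read
# causally -- the core advantage of the experimental approach.
f_stat, p_value = f_oneway(original, reordered, plain)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
```

The same comparison run on observational data would require controlling for everything that co-varies with which instructions a juror happened to see; here the randomization does that work.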
Among the downsides, of course, is the perception that experiments aren’t “externally valid” – that they don’t reflect the real world well enough for us to generalize profitably. Sometimes that’s true. But sometimes they do, and sometimes they can be conducted in real-world settings. Other times, finding an effect in the lab suggests it might be even stronger in the real world: the example often given involves comprehension of capital sentencing instructions – when highly educated mock-juror subjects from “elite universities” don’t understand them, how much greater a concern might it be for less educated jurors faced with such instructions in the real world? In yet other contexts, again with mock juries, differences between undergraduate samples and community-member samples are substantially smaller than critics have suggested (Bornstein, 23 Law & Hum. Behav. 75 (1999)).
I don't want to oversell the experimental approach, but it could certainly be another arrow in the ELS quiver. So to speak.
The experiments that could have the most impact on studies of, at least, appellate courts are the classic ones by Tversky and Kahneman on how people actually make decisions. I've always thought that the revelation that real human beings are, you know, much more risk averse than the usual rational actor models we use to analyze judicial behavior would predict is of supreme importance in trying to figure out why judges make decisions the way they do. Combine that with the inherent uncertainty of the environments they make decisions in and you get a VERY different picture of how judicial decision-making works.
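The risk-aversion point above can be made concrete with the value function from Tversky and Kahneman's cumulative prospect theory (1992), using their estimated parameters (alpha = beta = 0.88, lambda = 2.25). This sketch omits their probability-weighting function for simplicity, and the dollar amounts are arbitrary illustrations.

```python
# Tversky-Kahneman (1992) value function: concave for gains, convex and
# steeper for losses. Probability weighting omitted for simplicity.
def value(x, alpha=0.88, beta=0.88, lam=2.25):
    """Prospect-theory value of outcome x relative to a reference point."""
    return x ** alpha if x >= 0 else -lam * ((-x) ** beta)

sure_thing = value(50)         # a certain $50 gain
gamble = 0.5 * value(100)      # a 50/50 shot at $100 (same expected dollars)

print(sure_thing > gamble)         # True: risk aversion for gains
print(abs(value(-50)) > value(50)) # True: losses loom larger than gains
```

A rational-actor model would treat the sure $50 and the 50/50 gamble for $100 as equivalent; the concave value function is why real decision makers — presumably including judges facing uncertain outcomes — do not.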
Anyone interested in this, drop me an e-mail. I've already done work on the uncertainty part. Put in the experimental results and we might have something worth looking at.
Posted by: Tracy Lightcap | 10 March 2007 at 08:55 PM
In my opinion one can never "oversell" the value of experiments in this endeavor.
I would argue that, in addition to all the excellent uses Professor Blumenthal mentions, experiments are invaluable for understanding how legal decision makers reason about cases. Of course, it's up to those of us who use such methods to demonstrate their value in a world of multiple empirical methods. But that's as it should be.
As Professor Blumenthal and Bill mention, experiments can help us answer questions that other approaches to legal decision making have not addressed. There is no reason we should be afraid to use them to better understand the cognitive mechanisms that drive findings from other empirical approaches.
Posted by: Eileen Braman | 09 March 2007 at 03:39 PM
Jeremy,
Nice post. It was me who wrote about Martin and Epstein "supplying 95% ..." etc. That comment assumed staying in the world of observational data, which, I admit, reveals narrow blinders.
It is noteworthy that controlled experiment is always touted as the gold standard for causal inference. I realize that laboratory psychological studies are not the same as medical clinical trials, but I agree that they provide our best window on processes that are otherwise impossible to observe and measure.
Thanks for correcting my bias. bh.
Posted by: William Henderson | 09 March 2007 at 02:58 PM