In many ways, legal scholars have been unfairly criticized by scholars in other disciplines for our use of empirical methods. I have personally read a great deal of very good empirical analysis conducted by legal scholars, work that often provides a perspective on legal institutions and the law that is missing from other disciplines. But I have also noticed one area of empirical methodology where we can no doubt improve: formal hypothesis testing.
We often criticize our students for failing to state a clear thesis and then refer back to and adhere to that thesis throughout their papers. An analogous problem sometimes exists in empirical legal scholarship: some papers fail to clearly state the hypothesis or hypotheses being tested. In most political science papers, for example, I have noticed that the authors go to great lengths to state clearly both their hypotheses and the rationales, scholarship, or intuitive reasons supporting them. I can't say that those portions of empirical papers are the most riveting, but they do guide the reader as the authors use various methodological tools to evaluate and test their theories. I personally run my empirical papers past one or more of my political science colleagues (often including this week's visiting blogger, Tim Johnson) prior to submission, in part to ensure that I have done an adequate job of stating and supporting the hypotheses in my own papers (which is not always the case on a first draft).
A second pitfall I have noticed in empirical legal scholarship often stems from the first. Papers will occasionally throw around the language of hypothesis testing, such as Type I error (i.e., a false positive) and Type II error (i.e., a false negative), without actually stating, let alone formally testing, any hypothesis. Often these are mere errors of omission: the theories being tested can be inferred from the text or from the methodological tools the empiricist uses, even if that requires some interpretation by the reader. On other occasions, however, it is not at all clear what hypotheses are being tested, or even what the null hypothesis looks like, and the use of terms such as Type I and Type II error only confuses the reader.
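To make the distinction concrete, here is a minimal sketch, in Python, of what a formally stated test looks like. The data and numbers are entirely hypothetical, and the two-sample t-test is only one illustrative choice among many; the point is simply that the null hypothesis, the alternative, and the tolerated Type I error rate are all stated before anything is computed.

    # Hypothetical question: do two groups of cases differ in average sentence length?
    # H0 (null):        mean sentence length is the same in both groups.
    # H1 (alternative): the means differ.
    # Type I error:  rejecting H0 when it is in fact true (false positive).
    # Type II error: failing to reject H0 when it is in fact false (false negative).

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    group_a = rng.normal(loc=36.0, scale=8.0, size=50)  # hypothetical sentences (months)
    group_b = rng.normal(loc=40.0, scale=8.0, size=50)

    alpha = 0.05  # the Type I error rate we are willing to tolerate
    t_stat, p_value = stats.ttest_ind(group_a, group_b)

    if p_value < alpha:
        print(f"p = {p_value:.3f}: reject H0 at the {alpha} level")
    else:
        print(f"p = {p_value:.3f}: fail to reject H0 at the {alpha} level")

A paper that uses the vocabulary of Type I and Type II error should let the reader reconstruct at least this much: what the null is, what would count as rejecting it, and at what threshold.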
I do not mean to suggest that most papers suffer from either of these pitfalls, but I have seen them in enough articles that I thought them worth mentioning. I am also sure that there are other pitfalls of formal hypothesis testing that I have failed to cover in this post. Moreover, I think that legal scholars, as a group, are getting better at avoiding these problems; it is highly encouraging that scholars in other disciplines are increasingly citing and discussing our work. Nonetheless, because legal scholars are sometimes criticized for a lack of empirical rigor, I felt that revisiting some of these issues would be beneficial to our readership. Comments, as always, are most welcome.
One reason for the lack of formal hypothesis testing is that much ELS work seeks to sort through masses of data to paint a picture of what is going on. This can be an important contribution, but it need not call for formal hypothesis testing, and it may be more honest to present such work as exploratory data analysis (EDA) rather than as hypothesis testing. Other possible reasons include data availability and quality problems, which mean that well-specified hypotheses cannot be constructed for testing, either because crucial concepts can only be weakly operationalized or because important control variables are lacking. There is also the fact that many lawyers are not deeply familiar with the sociological and other theoretical perspectives in which all but the simplest hypotheses should be rooted.
Rick
Posted by: Rick Lempert | 23 May 2007 at 11:22 AM