Apropos of Sean Wilson's comment about measurement here, there is some debate among methodologists as to whether our goal should be "explaining variance" or "prediction" (see, e.g., the American Journal of Political Science forum in 1991 including Gary King and Robert Luskin, and the debate in Political Analysis in 1991 including Michael Lewis-Beck, Andrew Skalaban, and Gary King, among others). But let's talk prediction.

Since we're talking to law professors and social scientists here, I think it would be entertaining to raise the issue of "who best" and "how best" to predict Supreme Court decisions, and, happily, Lee Epstein, Andrew Martin, Kevin Quinn, Pauline Kim, and Theodore Ruger have supplied fodder for our discussion. In a symposium in Perspectives on Politics, the group describes their forecasting project, in which the decisions of the Supreme Court in the 2002 term were predicted both by political scientists and by legal academics, each in their own way. (Apparently this received much more attention in law schools than in political science departments.) The political scientists used a statistical model based on past voting and lower court direction, while the legal academics used "experts" to predict the decisions.

Long story short, the political scientists beat the legal academics in terms of outcomes, though the legal academics did slightly better at predicting individual justices' votes. However, Linda Greenhouse, in an article in the PoP symposium, did just as well as the political scientists by relying on her impressions of the justices' positions as displayed in their questions at oral argument.

Anyway, in case you haven't seen the study, the website is here, and Epstein's article introducing the project in Perspectives on Politics is here. Want to talk about it here?
Well, there is a statistic for this: tau-p is clearly better for judicial modeling than lambda-p. Using career numbers in civil liberties cases to guess the entire docket increases predictive efficiency by .2449, a modest amount; it jumps to .35 for the civil liberties docket alone.
I didn't read the forecasting articles, but I would be careful about one thing: career numbers are significantly better predictors for 2002 than for 2003. I just completed a time series for my Chicago paper that looks at how well ideology models perform for every year from 1946 to 2004. To the extent that the article above relies upon 2002 data, they have picked a lucky year, at least with respect to the civil liberties voting portion. One of the more interesting discoveries in this stuff is that the ability of ideological models to classify votes varies remarkably from year to year. Also, there appears to be a general downward trend in the models' overall goodness of fit, because the most directional justices (Warren Court and now Rehnquist) are no longer present.
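For readers unfamiliar with these measures, the tau-p/lambda-p family of statistics reports a proportional reduction in error (PRE): the share of a naive baseline's misclassifications that a model eliminates. Here is a minimal sketch; the case counts are purely illustrative toy numbers, not data from the study (the .2449 figure above is Sean's):

```python
def pre(baseline_errors: int, model_errors: int) -> float:
    """Proportional reduction in error: the fraction of the baseline's
    prediction errors that the model eliminates."""
    return (baseline_errors - model_errors) / baseline_errors

# Hypothetical numbers: suppose a naive modal-category guess misclassifies
# 400 of 1,000 votes, while the career-ideology model misclassifies 302.
print(round(pre(400, 302), 4))  # prints 0.245
```

A PRE of 0 means the model does no better than the baseline guess; 1.0 means it eliminates every baseline error, so figures like .2449 and .35 can be read directly as "a quarter" or "a third" of the naive errors removed.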
Posted by: Sean Wilson | 11 April 2006 at 11:49 AM
That legal academics fared comparatively poorly at predicting SCOTUS outcomes in the Ruger et al. study, especially when compared to Supreme Court specialists (who, of course, have to survive in the market to make a living), did not surprise me all that much. What did surprise me, however, was the various models' somewhat modest overall predictive force (again, re: outcomes). After all, if I set out to predict Supreme Court decisions (outcomes), I am pretty sure I could accurately predict outcomes in approximately 67 percent of all cases with a single variable--"reverse." Before even beginning the task of potentially fancy poli sci model building (or law prof "tea-leaves" reading), I'm already predicting approximately two-thirds of the outcomes. Thus, in assessing the efficacy of a "predicting SCOTUS outcomes" model, the appropriate reference point should be, I would suggest, the 67 percent baseline established by historic aggregate reversal rates.
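Michael's baseline is easy to make concrete: a "model" that always predicts "reverse" sets the floor any fancier model must beat. A minimal sketch, using a stylized 100-case docket whose 67/33 split is an assumption mirroring the historic reversal rate he cites, not data from the study:

```python
# Stylized docket: 67 reversals, 33 affirmances (hypothetical, ~historic rate).
outcomes = ["reverse"] * 67 + ["affirm"] * 33

def always_reverse_accuracy(outcomes: list[str]) -> float:
    """Accuracy of the one-variable baseline that predicts 'reverse' in every case."""
    hits = sum(outcome == "reverse" for outcome in outcomes)
    return hits / len(outcomes)

print(always_reverse_accuracy(outcomes))  # prints 0.67
```

On this view, a forecasting model's reported accuracy only impresses to the extent it clears that two-thirds floor, which is why the study's headline numbers looked modest to Michael.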
Posted by: Michael Heise | 11 April 2006 at 09:34 AM
There's also a Law Review version of their research:
Theodore W. Ruger, Pauline T. Kim, Andrew D. Martin & Kevin M. Quinn, The Supreme Court Forecasting Project: Legal and Political Science Approaches to Predicting Supreme Court Decisionmaking, 104 COLUMBIA LAW REVIEW 1150 (2004).
Posted by: Jason Czarnezki | 11 April 2006 at 08:48 AM