When I was chair of the political science department at Washington U., I spent many days fretting over entry-level hiring. The discipline was experiencing so many changes that the best way to build and maintain a cutting-edge faculty, I reasoned, was to hire those with cutting-edge skills: typically newly minted Ph.D.s.
A few of those changes—developments, really—and their implications for work on judging are as follows.
1. What we analyze. Whether produced by law professors or social scientists, the vast majority of quantitative work on judicial decisions focuses on the (typically dichotomous) bottom line: reverse (affirm), liberal (conservative), winner (loser), uphold (strike down), and so on. And now several social scientists (most prominently, Andrew Martin & Kevin Quinn) have deployed dichotomous votes to develop estimates of judicial ideal points—thereby ensuring that the "bottom line" will make an appearance on both sides of the equation.
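To see how little the vote-based machinery requires, consider a bare sketch in R: Martin and Quinn's dynamic model is considerably richer, but a basic one-dimensional item-response (ideal point) model needs nothing more than a justice-by-case matrix of dichotomous votes. Everything below is invented toy data, purely to show the mechanics.

    # Toy illustration: random votes, purely to show the mechanics.
    library(MCMCpack)

    set.seed(1)
    # Rows = justices, columns = cases; 1 = (say) a vote to reverse
    vote_mat <- matrix(rbinom(9 * 40, 1, 0.5), nrow = 9,
                       dimnames = list(paste0("justice", 1:9), NULL))

    # One-dimensional item-response (ideal point) model
    posterior <- MCMCirt1d(vote_mat,
                           theta.constraints = list(justice1 = "+"),
                           burnin = 1000, mcmc = 5000)
    summary(posterior)    # posterior summaries of each justice's ideal point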
How might we better exploit the judicial opinion? A simple answer, as Andrew and I wrote in a memo prepared for the Emory conference, is to take advantage of developments in other disciplines and even other disciplinary pockets in political science. Laver et al. (2003), to provide one illustration, have devised a method for mapping texts into policy space—a method they implement through a suite of STATA functions. The basic procedure entails comparing word frequencies in a set of "reference texts" (whose policy positions are known to the researcher) with those in a set of "virgin texts" (whose policy positions are unknown but of interest); the goal is to calculate a score (and, crucially, an estimate of uncertainty about that score) representing the policy position of each virgin text. Specifically, the Laver team analyzed the manifestos of British political parties to determine whether the parties changed their positions on economic policy between the elections of 1992 and 1997. The results, Laver et al. write, "are both substantively plausible and congruent with independent estimates—even when parties made dramatic moves on policy positions."
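To fix ideas, here is a bare-bones sketch of the scoring step in R, with toy word counts I have invented for illustration (the actual procedure, including the authors' STATA implementation, also produces the all-important uncertainty estimates, which this sketch omits):

    # Toy word counts: rows = words, columns = reference texts whose
    # policy positions are known to the researcher.
    ref <- cbind(left  = c(taxes = 10, spend = 20, market = 5),
                 right = c(taxes = 25, spend = 5,  market = 15))
    ref_pos <- c(left = -1, right = 1)          # known positions

    # P(word | reference text), then P(reference text | word)
    p_wr <- sweep(ref, 2, colSums(ref), "/")
    p_rw <- p_wr / rowSums(p_wr)

    # Each word's score: the reference positions, weighted by P(text | word)
    wordscores <- drop(p_rw %*% ref_pos)

    # A "virgin" text's score: frequency-weighted mean of its words' scores
    virgin <- c(taxes = 12, spend = 8, market = 10)
    sum(virgin / sum(virgin) * wordscores[names(virgin)])

The virgin text's score lands between the two reference positions, closer to whichever reference text its vocabulary more nearly resembles.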
Even more impressive is a massive (and massively funded) study of the Congressional Record by Burt Monroe and various coauthors. One paper, which sets out a method designed to overcome some of the limitations of WORDSCORES, also makes use of texts (specifically, speeches published in the Congressional Record) to estimate policy positions—or what the authors term "rhetorical ideal points" (Monroe and Maeda, 2004). A newer project, with Kevin Quinn, analyzes the same documents to explore the dynamics of the political agenda, ultimately estimating, among other quantities of interest, the probability that a senator will deliver a speech on judicial nominations (a probability, as you might imagine, that has increased over time), the degree of persistence of particular issues (e.g., abortion), and the long- and short-term impact of dramatic events (e.g., September 11) on rhetoric and policy (Quinn et al., 2006).
In both instances the researchers manage to extract really interesting information from texts to learn—and learn a lot—about political phenomena. To what ends can we deploy these new technologies to learn about legal phenomena? An innovative paper by Kevin McGuire and George Vanberg (2005) provides a glimpse into the prospects. Using WORDSCORES, they estimate the relative policy positions of particular opinions (e.g., Lee v. Weisman registers as more liberal than Wallace v. Jaffree).
This is a promising start; other possibilities are nearly endless. Using the Monroe team's strategy, it may be worthwhile to revisit positions taken in decisions, whether over policy or, possibly, method (e.g., text- and intent-based approaches to interpretation). No doubt this would yield insight into the evolution of particular areas of the law in a way that exploits the entire opinion but eliminates researcher judgment. Ideal point estimation for individual justices is, of course, also possible and just as likely to be informative. Turning to the dynamics of the decision-making process, virtually every question raised by the Monroe team has some application to the study of law. How are areas of the law interrelated? Is legal decision making incremental (as many would argue), explosive, or both? How does a precedent-setting decision in one area reverberate across others?
2. How we analyze. Making causal inferences from observational data is of great interest to law professors and political scientists alike. But it is, to say the least, a challenging enterprise. There are no magic bullets, but there have been advances.
I'm especially keen on non-parametric matching. Developed in statistics, matching has a following in the social sciences but (as of yet) not much of one in law. One recent exception is Dan Ho's paper on affirmative action and bar passage rates (114 Yale L.J. 1997); another is our article on the effect of war on the Supreme Court (80 N.Y.U. L. Rev. 1).
Because we supply an extensive (and not especially technical) description of matching in the war paper (along with cites to more technical papers), suffice it to make two points here. First, while the basic intuition behind matching is simple, executing the various matching models is not. Gary King and others have helped by devising easy-to-implement software. But the software still leaves the researcher with choices—crucial choices—over approaches to matching. So, as always, reading about and understanding the method is a must before implementing it. Second, and simply to reiterate the obvious, there are no perfect solutions to the "fundamental problem of causal inference" with observational data—matching not excepted. As with any method, researchers should understand its drawbacks and, perhaps, assess its results against those of other models (see, e.g., Table 7 of the war paper).
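For readers who want to see those choices concretely, here is a hypothetical sketch using MatchIt, the R package by Ho, Imai, King, and Stuart; the data frame and variable names are invented, and the method argument is precisely where the crucial choice among matching approaches gets made:

    # Hypothetical sketch; the data frame (scotus) and variables are invented.
    library(MatchIt)

    # Match wartime to peacetime cases on observed confounders; the `method`
    # argument selects the matching approach (nearest-neighbor here; exact,
    # full, and others are available).
    m.out <- matchit(wartime ~ salience + lower_court + term,
                     data = scotus, method = "nearest")
    summary(m.out)    # inspect covariate balance before trusting any estimate

    # Estimate the effect of war using the matched observations only
    matched <- match.data(m.out)
    fit <- glm(curtail ~ wartime, data = matched, family = binomial)
    summary(fit)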
3. How we report what we analyze. Speaking of Gary King, several years ago he and two coauthors (Michael Tomz & Jason Wittenberg) drew attention to a serious shortcoming of articles in political science journals: they stressed statistical significance over substantive effect.
For example, in analyzing Senate votes on Supreme Court nominees, Jeff Segal, Rene Lindstadt, Chad Westerland, and I found that a nominee's perceived lack of qualifications decreased the likelihood of a yea vote.
The typical way to report this result?
"The coefficient on Lack of Qualifications is statistically significant at the .05 level."
"Other things being equal, when a nominee is perceived as highly unqualified the likelihood of a senator casting a yea vote is only 0.18. That probability increases to a near-sure bet yea vote (0.92) when the nominee is highly qualified."
Or even better:
"Other things being equal, when a nominee is perceived as highly unqualified the likelihood of a senator casting a yea vote is only about 0.18 (±.05). That probability increases to a near-sure bet yea vote (0.92, ±.02) when the nominee is highly qualified."
To advance their project, Gary and his colleagues developed Clarify, a freely available software program that converts statistical results into estimates of quantities of interest, along with measures of the uncertainty surrounding those estimates (such as confidence intervals), via simulation (repeated sampling of the model parameters from their sampling distribution). Gary and a different set of colleagues have since released an R package, Zelig, that accomplishes the same thing.
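Clarify and Zelig automate the process, but the underlying simulation idea is simple enough to sketch in base R. The data frame and variable names below are invented; the output is exactly the kind of statement quoted above (a probability plus an interval):

    # A minimal version of the Clarify/Zelig idea; the data frame (senate)
    # and variables are invented for illustration.
    library(MASS)    # for mvrnorm()

    # Logit of senators' yea votes on a qualifications score scaled 0-1
    fit <- glm(yea ~ qual, data = senate, family = binomial)

    # Simulate 1,000 draws of the coefficients from their sampling distribution
    sims <- mvrnorm(1000, mu = coef(fit), Sigma = vcov(fit))

    # Predicted probability of a yea vote at the two profiles of interest
    p_unqual <- plogis(sims %*% c(1, 0))    # highly unqualified (qual = 0)
    p_qual   <- plogis(sims %*% c(1, 1))    # highly qualified   (qual = 1)

    # Substantive effects with uncertainty, not just a p-value
    quantile(p_unqual, c(.025, .5, .975))
    quantile(p_qual,   c(.025, .5, .975))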
The basic idea of stressing substantive effect (and uncertainty over that effect) is gaining traction in studies of judging. Believing that we all can do even better, Andrew Martin and I are working on a two-part article for the Vanderbilt Law Review. Our overarching goal is to adapt the burgeoning literature in the social and statistical sciences on the presentation of results and data to the unique interests of legal scholars.