The utility of empirically testing theories of judicial decision making depends on developing sound measures of key theoretical concepts. That may seem so obvious as to be trivial, but the truth of the matter is that most scholars, most of the time, are not appropriately self-conscious about measurement. Certainly we worry about things like inter-coder reliability (something Jason raised in a post on February 28). And there are, of course, all sorts of debates over how to measure the ideology of judges (a point relevant to Jason’s post of March 2 regarding the Supreme Court Ideology Project). (See, for example, Christopher Zorn and Gregory Caldeira’s paper “Bias and Heterogeneity in a Media-Based Measure of Supreme Court Preferences” and Epstein et al.’s paper “The Judicial Common Space.”) And how we measure “the law” is something that increasingly preoccupies scholars (as Sara’s post of March 13 illustrates).
A recent conversation with Paul Collins brought the issue of measurement to mind again. In particular, Paul argues that a common measure of salience, the presence of amicus curiae briefs, is really a measure of complexity rather than salience (in that those briefs can bring to the fore heretofore unconsidered policy or legal dimensions). In one sense, amicus curiae briefs certainly are indicators of salience, at least to the interest groups or other third parties who file them. But, as Sara Benesh and Harold Spaeth argued in a 2001 American Political Science Association conference paper, the key to devising an appropriate measure of salience is asking: salient to whom? In their January 2000 article in the American Journal of Political Science, Lee Epstein and Jeffrey A. Segal offered a measure of salience based on media coverage. Saul Brenner and Ted Arrington, in an unpublished but widely circulated manuscript, evaluated the virtues and vices of that measure, concluding that a hybrid measure relying on both New York Times coverage and the list of major cases compiled by Congressional Quarterly was preferable. In light of the Epstein/Segal and Brenner/Arrington measures, Sara and Harold developed a prototype measure based on the syllabus of a case. Their argument is that the syllabus, though prepared by the Reporter of Decisions, Deputy Reporter of Decisions, and Assistant Reporter of Decisions, must be approved by the majority opinion author and is therefore a better basis for assessing salience to the justices themselves.
The lesson I draw from this work collectively is that we need to think carefully not only about the reliability of our measures but also about their validity. It may not be a particularly novel or exciting lesson, but it is worth bearing in mind.
Excellent point, Wendy. And I share the lesson you draw.
Posted by: Michael Heise | 24 March 2006 at 02:42 PM