Rich, Amy, and Susan have done a great service by writing their study. My comments here focus on a specific aspect of the paper: the issue of causality. The paper itself makes a number of explicitly causal claims, referring (for example) to the "most influential causal factors" that might determine ratings, and to the construction of "a series of causal models" (p. 11).
My narrow point is that the models of which they speak aren't really causal models at all; they are straightforward regression-type models, of the sort social scientists estimate all the time. That isn't a criticism of what they've done, but rather just an observation, albeit one that suggests a simple change in approach.
Manipulability theories of causation (perhaps most famously set out in Holland's (1986) paper) -- upon which most of the current approaches to causal inference are based -- certainly have their critics. But they have a natural application to the central question of the paper: What ABA rating would nominee X have received had s/he been appointed by a Democratic president, rather than a Republican (or vice-versa)?
This (to me) suggests that the paper is an obvious candidate for a matching-based analysis of the influence of (e.g.) party on ABA ratings. Such an approach (a) is very easy to implement, as a practical matter, (b) allows one to control for other potentially confounding factors, and, best of all, (c) provides a direct answer to the question posed above that can be interpreted in causal terms. These approaches are increasingly widely used in empirical legal studies, and seem (to me) to hold particular promise in addressing the authors' central question.
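To make the suggestion concrete, here is a minimal sketch in R using the MatchIt package (the same tool Tracy describes in the comments below). Everything here is hypothetical: the data frame (nominees), the treatment indicator (gop), the rating variable (aba_rating), and the covariates are stand-ins for whatever the authors' actual data contain, not their method.

library(MatchIt)

## Hypothetical data frame `nominees`: one row per nominee, with an
## ABA rating (aba_rating), a treatment indicator for appointing party
## (gop = 1 for Republican appointees), and some observed confounders.

## Nearest-neighbor propensity-score matching on the confounders.
m.out <- matchit(gop ~ years_practice + prior_judge + elite_law_school,
                 data = nominees, method = "nearest")

## Check covariate balance before and after matching.
summary(m.out)

## Difference in mean ratings within the matched sample. This is
## interpretable as the causal effect of party only under the usual
## (strong) assumption that matching on these covariates removes
## all confounding.
matched <- match.data(m.out)
with(matched, mean(aba_rating[gop == 1]) - mean(aba_rating[gop == 0]))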
Coincidentally, I recently used just the kind of strategy Chris is talking about in a project for my research methods class. We are evaluating the new DUI Court in Troup County. Such courts are exemplars of what Henderson et al. called diagnostic adjudication, and we have plenty of data about the probationers themselves. The problem: we have counterfactual difficulties. The defendants we could look at from the old court system had no exposure to the DUI Court, and (drat!) nowhere near enough of the DUI Court defendants were carryovers from the old system. So we had to predict how likely our DUI Court defendants would have been to recidivate if they hadn't been in DUI Court. Which, unfortunately for us, they had been.
We matched the State Court defendants from before with the DUI Court defendants today using MatchIt and simulated the counterfactual using the logit routines in Zelig. (Well, to be more exact, I did that, but the students actually understood what was going on and followed me when I demonstrated.) Chris is right: it couldn't have been easier from a computational standpoint. The real problem was having the students put together a description of what happened in prose our clients (and the students, of course) could understand. Our final report will be pretty much what they asked for, and at virtually no cost to the court.
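Roughly, the pipeline looks like the sketch below. The variable names (defendants, dui_court, recidivated, priors, age, bac_level) are made up for illustration; they are not our actual data.

library(MatchIt)
library(Zelig)

## Match the old State Court defendants to the DUI Court defendants
## on observed characteristics (hypothetical covariates here).
m.out <- matchit(dui_court ~ priors + age + bac_level,
                 data = defendants, method = "nearest")
matched <- match.data(m.out)

## Fit a logit for recidivism on the matched data with Zelig...
z.out <- zelig(recidivated ~ dui_court + priors + age + bac_level,
               model = "logit", data = matched)

## ...then turn the treatment off and simulate: how likely would the
## DUI Court defendants have been to recidivate under the old system?
x.out <- setx(z.out, dui_court = 0)
s.out <- sim(z.out, x = x.out)
summary(s.out)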
So, if any of you are apprehensive about the matching processes Chris is talking about, don't be. Just learn a little R (the really hard part; I/O can be tricky) and get to it.
Posted by: Tracy Lightcap | 11 May 2009 at 01:45 PM
Chris: Excellent post. And great idea to link to actual code.
Posted by: MH | 30 April 2009 at 02:37 PM