Having thrown out some musings about a potential future project, I want to raise a question about a project I am currently working on. This project arises out of my desire to evaluate, across doctrinal areas and legal issues, under what circumstances the Supreme Court announces standards without applying them to the facts at hand. The United States Supreme Court Database is, of course, the logical place to start for anyone doing quantitative empirical research on Supreme Court case law. But the architecture and coding of the database are not conducive to the kind of doctrinally-focused analysis I want to do. As a result, I'm engaged now in exploring the implications of the limitations of the database for scholarship like mine. I'd be very interested in learning about other scholars' experiences and wish lists. Are there doctrinally-focused empirical questions you've tried to answer or want to address but that are difficult or impossible to undertake with the database as it -- as well as its counterpart, the Justice-Centered database -- exists now? I've read dozens of published works -- but what about the projects that have not taken off?
One of the virtues of the blogosphere is that valuable research is often bundled together in a convenient form and freely shared with fellow travelers. Paul Caron has done just that by aggregating an impressive list of teaching fellowships and readings on entering the legal academy.
[New rule: if you do something communitarian, I post your picture. Thanks Paul!]
Paul Caron has a post on an interesting forthcoming study by Andrew Oswald (Warwick Economics) in Econometrica on the relationship between journal prestige and citations. See Andrew Oswald, An Examination of the Reliability of Prestigious Scholarly Journals: Evidence and Implications for Decision-Makers. Since a single body in the UK makes funding decisions for university research, there is significant pressure to rely upon measures of productivity that take into consideration the relative prestige of faculty members' placements. Because these journals are peer-reviewed, and the hierarchy of journal prestige in each discipline is fairly well established, placement -- so the argument runs -- should be construed as a strong signal of quality.
Oswald's study is important, though the abstract is -- in my opinion -- quite misleading. This claim caused me to read the whole study:
The paper finds that it is far better to publish the best article in an issue of a medium-quality journal like the Oxford Bulletin of Economics and Statistics [a middle-of-the-pack journal] than to publish the worst article (or often the worst 4 articles) in an issue of a top journal like the American Economic Review.
Oswald used citation counts from articles in six economics journals over a twenty-five year period to evaluate the reliability of placement as a quality signal. But there is no implicit suggestion in the article that a scholar would be better off trading down.
Basically, it boils down to this: placement is correlated with citation, but as a proxy for quality it produces large Type I and Type II errors. Articles in AER and Econometrica (#1 and #2 in prestige) did indeed get more citations. Yet roughly 16% of the articles in the less prestigious journals garnered more citations than the median AER or Econometrica article. The most cited articles in the less prestigious journals garnered ten or more times as many citations as the four least cited articles in each issue of AER or Econometrica.
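Oswald's overlap statistic is easy to see with a small sketch. The citation counts below are invented purely for illustration (they are not Oswald's data); the calculation simply shows the kind of comparison his study makes: what share of articles in a less prestigious journal out-cite the median article in a top journal.

```python
import statistics

# Hypothetical per-article citation counts for one issue each of a
# "top" journal and a "mid-tier" journal. These numbers are made up
# for illustration only -- they are not Oswald's actual data.
top_journal = [120, 80, 45, 30, 12, 6, 3, 1]
mid_journal = [95, 40, 25, 10, 8, 4, 2, 1]

# Median citation count in the top journal's issue.
median_top = statistics.median(top_journal)

# Share of mid-tier articles that out-cite the top journal's median
# article -- the overlap Oswald reports at roughly 16% in his sample.
share = sum(c > median_top for c in mid_journal) / len(mid_journal)
print(median_top, share)
```

With these invented numbers, three of the eight mid-tier articles beat the top journal's median, which is exactly the sense in which placement is a noisy quality signal.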
First of all, many thanks for the opportunity to guest blog here. I've found ELSblog to be a valuable resource for information and an invaluable resource for thought-provoking reading. I'm generally a "lurker," but guest blogging will force me out from the shadows.
Like many others, I was trained as a lawyer and practiced law before I became an academic. My interest in empirical legal scholarship arises in part from experiences I had and things I observed as a law clerk and as a litigator.
One of the great frustrations lawyers sometimes experience is the sense that judges are not paying attention to the specific facts and evidence of the case. And in some areas of law, at least, academics also take note. In the world of employment discrimination law, for example, one need not look hard to find lengthy discussions and critiques of courts' inconsistent, unpredictable, or just plain wrong application of the summary judgment standard to prevent plaintiffs from taking their evidence to a jury or, in some cases, to grant judgment as a matter of law and reverse jury verdicts in plaintiffs' favor. But these are complaints about the application of law to fact, not about inaccurate reporting of the facts themselves.
At the same time, numerous academics have undertaken important efforts to identify the facts that appear to matter to judges in different contexts. To name but two examples: Jeffrey Segal's well-known work on search and seizure looks at the relationship of certain facts (whether the search was warrantless, for example, or whether it was a search of a home or a car) to outcome in the Supreme Court's 4th Amendment case law. Lauren Edelman and others are studying how trial and appellate court judges in employment discrimination cases refer to employers' anti-discrimination policies and programs. But what these efforts do not account for, by definition, is what the judges do not say, what they do not mention in their summary of the facts.
Of course, some might say, who cares what the judges don't say. What they do say is what was salient to them, and what they do say is what often becomes important in the case law. (This latter point is one of the arguments of Edelman et al.) What they don't say may matter to the disappointed party and lawyers, but it is of no consequence otherwise.
But it seems to me that the question of whether judges -- consciously or unconsciously -- leave out (arguably) relevant facts is directly relevant to several points of great interest to scholars as well as to lawyers. First, it is directly relevant to the question of how judges decide cases. Second and relatedly, it may shed light on the extent to which they are political in their decisionmaking. Third, it should, I think, force hard thinking about the normative question of what we want judges to do. Do we want them to make predictions about how juries will decide? Do we want them to announce rules of law applicable in the future? Do we want them to focus scrupulously but narrowly on the case before them? Are these goals inconsistent with each other?
The answers to these questions may vary depending on the case, the court, and the issue, of course. Chief Justice Roberts has recently made some news with his call for more narrow, focused decisions -- and evoked the criticism that the Supreme Court's role requires a broader, less case-specific approach. In fact, in my view, the question of whether judges accurately report the facts is more important in the trial and appellate courts than in the Supreme Court.
If I'm overlooking something, of course I'd welcome references to work that does try to address the question of what gets left out of judicial opinions. But as we in empirical legal scholarship are struggling with the question of how to adequately take account of what is in judicial opinions (how to operationalize the law), let's not forget that opinions must distill mountains of evidence and piles of briefs into a few pages of summary. Things must fall through the cracks. I'm wondering -- is there a pattern to what gets left out, and do those things matter?
Our guest blogger this week is Carolyn Shapiro, Assistant Professor of Law at the Chicago-Kent College of Law. Professor Shapiro earned a B.A. with general and special honors in English from the University of Chicago, an M.A. from the University of Chicago Harris Graduate School of Public Policy and a J.D. (high honors) from the University of Chicago Law School, where she was articles editor of the University of Chicago Law Review and a member of the Order of the Coif.
After graduation, Professor Shapiro was a law clerk for Chief Judge Richard A. Posner of the U.S. Court of Appeals for the Seventh Circuit and for Justice Stephen G. Breyer of the U.S. Supreme Court.
Shapiro's scholarly interests include federal courts and labor and employment law. She teaches professional responsibility, employment law, and legislative process.
Over at Concurring Opinions, Dan Solove writes about Cass Sunstein's op-ed in the Washington Post. Dan finds the following statistic provided by Sunstein to be "quite amazing":
"In the past year, Wikipedia, the online encyclopedia that 'anyone can edit,' has been cited four times as often as the Encyclopedia Britannica in judicial opinions, and the number is rapidly growing."
This statistic is accurate (well, maybe he should have said nearly four times as often). Lexis shows the following hits:
"encyclopedia britannica" and date(geq (2/23/06) and leq (2/23/07)) = 21 Hits
wikipedia and date(geq (2/23/06) and leq (2/23/07)) = 81 Hits
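The "four times as often" claim follows directly from those two hit counts; a quick sanity check on the arithmetic:

```python
# Hit counts from the Lexis searches above (2/23/06 through 2/23/07).
britannica_hits = 21
wikipedia_hits = 81

# Ratio of Wikipedia citations to Britannica citations.
ratio = wikipedia_hits / britannica_hits
print(f"Wikipedia cited {ratio:.2f}x as often")  # 3.86x -- "nearly four times"
```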
Sunstein declares that "Wikipedia has become the most influential encyclopedia in the world, consulted by judges as well as those who cannot afford to buy books." While citation counts are often useful and I have used them in my own research, this provides a forum to recognize their limitations. For example, in Gashi v. U.S. AG, 2007 U.S. App. LEXIS 423, the hit comes from footnote one, which states: "Wikipedia is a free internet encyclopedia that is collaboratively written by its readers and can be edited by anyone." Is this the type of citation that shows Wikipedia has "influence"? For the most part, however, the citations to Wikipedia arise from the court providing a citation for factual information.
I was fortunate enough to attend the Marquette Law School Hallows Lecture earlier this week and hear Judge Carolyn Dineen King (5th Circuit, Carter appointee) lecture on federal judicial selection and its ramifications for judicial independence. In a nutshell (you can watch the speech or read the transcript here), Judge King argued that the politicization of the judicial selection process has led to judges who feel, consciously or not, bound to make decisions in an ideological fashion and has resulted in some judges on some courts who engage in "clique voting." She argues that this is MORE problematic at the circuit court level because the heavy workload and the lack of accountability (to each other or to the public) provide the potential for one or two "hard-wired" judges to make a real impact on the law. She argues that this causes problems for the rule of law and the legitimacy of the court system.
I bring this to your attention both because it was an interesting speech and because I think Judge King's comments raise an interesting question: which comes first? Does politicization of appointments cause ideological voting, OR did ideological voting prompt the politicization of judicial appointments? Seems to cry out for some empirical research, no? (Hope to get at this soon, so any reactions/thoughts/data sources are welcomed!)
I just read a very interesting paper by a colleague and wanted to bring it to the attention of the readership because of its very important findings. In "Have We Come a Long Way Baby: Female Attorneys before the United States Supreme Court," John Szmer (UNC-Charlotte), Tammy Sarver (Benedictine), and Erin Kaheny (UW-Milwaukee) seek to determine whether attorney gender matters to U.S. Supreme Court justices in their decision making. Here's their abstract:
Numerous statistics indicate the presence of gender bias in the U.S. legal profession. To this date, however, studies addressing the mechanisms of this bias have been noticeably absent. In particular, little is known as to whether attorney gender significantly affects the likelihood of litigant success in appellate courts, including the nation's highest court. In this paper, we test two alternative theories of the influence of attorney sex: gender schemas and different voice. We find that Supreme Court justices are less likely to support litigants represented by women. Our findings suggest that litigation teams that have a higher proportion of female attorneys are less likely to win before the Court. In addition, this bias appears to be highly conditional on judicial ideology. Conservative jurists are more likely than liberal jurists to vote against litigation teams with a higher proportion of women.
An extremely significant contribution to both judicial decision making AND questions of the role of gender in American politics. I commend it to you.
I am an avid reader of the MoneyLaw blog, which aspires to apply Moneyball principles to the legal academy.
For those readers who are unfamiliar with Moneyball, it refers to the use of statistical methods to identify and exploit inefficiencies in the market for baseball talent. In a widely read review essay on its potential implications for law schools, Paul Caron and Rafael Gely assembled statistical evidence showing that the hiring heuristics used by most law schools (e.g., law school attended, fancy clerkship) were poor predictors of quantity/placement of future scholarship. Rather, the best predictors of future productivity were pre-academy productivity and the publication of a student note.
Aside from Caron & Gely's preliminary work, however, the collective ruminations of the Moneylaw contributors have not (yet, anyway) articulated a theoretically coherent basis for how Moneyball/Moneylaw principles can produce a more successful law school franchise. (Jim, Jeff, Ted, Paul, Nancy, Al, I say this with love in my heart.) Since we don't have wins and losses or a World Series title, the most obvious theoretical shortcoming is how success is measured, both in the short and long term. (In an earlier post, I suggested money as the best long-term metric.) Once this theoretical work is in place, empirical methods can be brought to bear to generate the appropriate chess moves.
So I was quite surprised to receive a phone call the other day from an administrator at a non-Tier 1 school whose primary charge is identifying data-driven ways to improve the functioning of the school. For the past several years, the law school has paid for this administrator to acquire sophisticated statistical training and build the requisite datasets. (Jim Chen, you have some serious competition!) The approach is pretty simple: pick an outcome that matters and generate an inductive theory through data mining. This "methodology" is not very academic, but it is how Moneyball was invented; it also reflects the approach of many successful hedge funds.
So what outcomes matter? Here are a few suggestions, some borrowed from the Moneyball/Moneylaw administrator: