While the complex (and heated) debate over law schools' future(s) will continue to unfold, Simkovic & McIntyre's recent review of Tamanaha's provocative book is notable for its empirical orientation.
To test the model, the authors eschew the standard data sources -- Westlaw and LEXIS -- noting the “denominator problem” that arises because these online databases typically do not capture all district court opinions, orders, and judgments. Instead, the paper benefits from an author-constructed "dataset of PACER docket records of all private securities lawsuits filed in federal court from 1994 to 2008."
Results from their analyses reject the naïve model: "courts make the required findings in less than 14 percent of cases in which such findings were required by law. This suggests judges either do not know of the law or, if they do, fail to follow it. We also show that required Rule 11(b) findings about sanctions are made overwhelmingly in cases where sanctions would be least likely – that is, in orders approving settlements – and such findings are extremely rare in cases where sanctions would otherwise be more likely – that is, where motions to dismiss are granted. To explain this seeming paradox, we offer an account that highlights crucial ways in which the incentives of the judge and of the attorneys may interact in complex cases."
A few years ago Andrew Gelman (Columbia--Statistics/Poli Sci) wrote a brief memo filled with general advice on how to write an empirical research paper. While the memo was originally designed for "young" researchers (incident to an American Statistical Association program), it includes reminders germane even to seasoned (or "no longer young") researchers. Those interested in the full memo can click here and here; a few of the (excerpted) highlights follow.
"1. Start with the conclusions. Write a couple pages on what you’ve found and what you recommend. In writing these conclusions, you should also be writing some of the introduction, in that you’ll need to give enough background so that general readers can understand what you’re talking about and why they should care. But you want to start with the conclusions, because that will determine what sort of background information you’ll need to give.
2. Now step back. What is the principal evidence for your conclusions? Make some graphs and pull out some key numbers that represent your research findings which back up your claims.
3. Back one more step, now. What are the methods and data you used to obtain your research findings?
4. Now go back and write the literature review and the introduction.
5. Moving forward one last time: go to your results and conclusions and give alternative explanations. Why might you be wrong? What are the limits of applicability of your findings? What future research would be appropriate to follow up on these loose ends?
(a) Don’t write something unless you expect people to read it.
(b) This principle holds for tables and figures as well.
... You have to draw the trail from the scientific question, to the statistical question, to the data, to the inferences, back to the statistical and scientific questions."
A recent paper by Josh Fischman (Northwestern) in the Penn L Rev, Reuniting Is and Ought in Empirical Legal Scholarship, urges ELS scholars to broaden their focus to include the normative dimensions of empirical work. Specifically, Fischman calls for increased attention to explaining how "positive findings relate to normative claims." An excerpted abstract follows.
"Scholars engaged in empirical legal research have long struggled to balance the methodological demands of social science with the normative aspirations of legal scholarship. In recent years, empirical legal scholarship has increased dramatically in methodological sophistication, but in the process has lost some of its relevance to the normative goals that animate legal scholarship....
Using as examples three types of measures commonly used to evaluate judges and institutions—citation counts, reversal rates, and inter-judge disparities—this Article describes widespread flaws in efforts to connect the ‘is’ and the ‘ought’ in empirical legal scholarship. The Article argues that normative implications should not be an afterthought in empirical research, but rather should inform research design. Empirical scholars should focus on quantities that can guide policy, and not merely on phenomena that are conveniently measured. They should be explicit about how they propose to measure the goodness of outcomes, disclose what assumptions are necessary to justify their proposed metrics, and explain how these metrics relate to the observable data. When values are difficult to quantify, legal empiricists will need to develop theoretical frameworks and empirical methods that can credibly connect empirical findings to policy-relevant conclusions."
Eric Posner's (Chicago) new blog discusses a quick empirical study that was bound to garner attention (certainly here). For the purposes of his study, Eric understands social value exclusively in terms of the S&P 500 Index. He then examined whether notable works by leading legal scholars (e.g., Dworkin and Scalia) influenced the Index. The findings, according to Eric, are "discouraging."
While prior studies of securities fraud class actions under the PSLRA have found that class counsel’s fee requests and awards are lower in cases in which the lead plaintiff is a public institutional investor, prior work has not explained the mechanism that underlies this reduction in agency costs. Specifically, do public funds negotiate better terms with their chosen counsel ex ante than do other lead plaintiffs? Or are judges responsible for the reductions in agency costs, suggesting that the PSLRA may not be working as Congress intended? Lynn A. Baker (Texas), Michael A. Perino (St. John's), and Charles Silver (Texas) take up these questions in Setting Attorneys' Fees in Securities Class Actions: An Empirical Assessment. A summary of their findings follows.
"To learn how the mechanism created by the PSLRA is working on the ground, we studied securities class actions that settled between 2007 and 2011 in the three federal district courts that processed the largest numbers of these cases: the Central District of California, the Northern District of California, and the Southern District of New York. Briefly stated, we found little evidence that ex ante fee agreements play a role in the process for selecting lead plaintiffs. At the settlement-approval and fee-award stages, however, we found that lawyers more frequently invoked such agreements to support their requested fee and that courts deferred to attorneys’ fee requests more often in cases with evidence of an ex ante fee agreement. We further found evidence of an ex ante fee agreement or of a proxy for such an agreement (specifically, the presence of a public pension fund as the lead plaintiff) to be correlated with statistically and economically significant reductions in fee requests and awards, as well as with greater judicial deference to the requested fee. Overall, the court awarded a lower fee than the class counsel requested in about 18% of the cases we reviewed, a somewhat higher percentage than we had anticipated."
In a recent paper, Four Decades of Federal Civil Rights Litigation, Cornell colleague Ted Eisenberg finds that civil rights plaintiffs are making less -- and less successful -- use of federal courts over time. While much of the recent scholarly literature emphasizes heightened pleading standards, Eisenberg notes the influence of increased settlements. The abstract follows.
"Civil rights cases constitute a substantial fraction of the federal civil docket but that fraction has substantially declined from historic peaks. Trial outcomes, as in other areas of law, constitute a small fraction of case terminations and have changed over time. The number of employment discrimination trials before judges has been in decline for about 30 years, a trend also evident in contract and tort cases. The number of employment trials before juries increased substantially after the enactment of the Civil Rights Act of 1991 but has been in decline since 1997. In constitutional tort cases, the number of judge trials has been declining for about 30 years; the number of jury trials has been reasonably constant over that time period. Civil rights plaintiff win rates at trial have been steady in both judge trials and jury trials for at least a decade. The success of civil rights litigation, as measured by trial win rates and settlement rates, has been quite low compared to contract and tort cases. Median awards in civil rights trials have increased more than the rate of inflation but median trial awards in both constitutional tort cases and employment cases are below the awards in contract cases and tort cases."
Beginning from the premise that "[s]cholarship on international law has undergone an empirical revolution," Adam S. Chilton (Chicago) and Dustin H. Tingley (Harvard--Govt.), in Why the Study of International Law Needs Experiments, go on to argue for more experiments in international law scholarship. Regardless of whether the authors' claims persuade, most agree that international law poses particular research design challenges and includes unusual barriers to reliable causal inference. The paper's full abstract follows.
"Scholarship on international law has undergone an empirical revolution. Throughout the revolution, however, shortcomings of the observational data that studies have used have posed serious barriers to reliable causal inference. During the same period, political scientists and legal scholars studying domestic law have increasingly employed experimental methods because they make it easier to make credible causal claims. Despite the simultaneous emergence of those trends, there have been relatively few attempts to use experimental methods to study international law. This should change. In this paper we present the first argument that the study of international law could uniquely benefit from the use of experimental research methods. To make this argument, we present data we have collected that illustrates why observational studies will often be unable to provide answers to many of the most important questions to legal scholars. After doing so, we provide guidance on how laboratory, survey, and field experiments can be used by legal scholars to research international law."
Lawprofs are rushing to the defense of law reviews after Adam Liptak's article in the New York Times. I won't rehash the various positions here. I'm pretty sure that a fair bit of this reaction is motivated by a mix of turf-protection and self-(re)validation: it's hard to hear that the esteemed, highly selective publications in which you made your professional career are terrible.
But they are.*
Liptak gets some of the reasons for their terribleness right, but misses a few, and includes some extraneous things as well. For example, the fact that law reviews are not generally cited by courts or read by practitioners is -- in my opinion -- immaterial. But here are five reasons why they are, in fact, terrible.
1. Carpet-bomb submissions. If you're unaware of how this works: ExpressO. One submits one's paper to literally hundreds of journals at the same time. Scientific fields, in contrast, are single-submission -- and they're hardly alone. Philosophy journals do single submission. History journals do it. PMLA does it. Even the fiction-publishing industry doesn't condone this behavior to the same extent, in part because it has some seriously negative consequences. More on that below.
2. Publication "review." So, thanks to ExpressO and its ilk, every half-credible law journal receives hundreds, and sometimes thousands, of submissions each submission season. (The existence of "submission seasons," as opposed to rolling submissions, is also terrible, but not sufficiently so to be worth its own bullet point.) At most reasonably prestigious schools, publication review goes something like this:
A. "Is this person a current or former federal judge, a current or former Attorney or Solicitor General, at (a top-20-ish) law school, at (our law school), or otherwise someone I -- a third-year law student -- have heard of? Yes? Go to C. No? Go to B."
B. "Have they indicated that they are expediting (see below) because of an acceptance at a lower-ranked journal? Yes? Go to C. No? Reject."
C. "Is the work interesting and (in my opinion) compelling? Yes? Publish if there's space. No? Reject."
The result gets published, unless it gets expedited up to a higher-ranked journal (again, see below).
Note how late actual quality -- even in the judgment of the 3Ls in charge -- appears in the process. In a credible system, quality would play a larger role than the author's name/title/affiliation, or the people cited in the first footnote. And of course, all this misses the fact that "quality" is determined by the journal's editors -- something that occurs only at the margin in most disciplines -- and that those editors are third-year law students.
3. Expediting. For the uninitiated: Once one's paper is accepted at a law journal, the standard practice is to "expedite" review by notifying all journals in which you'd prefer your paper to be published over the one that has accepted it (generally but not always, those that are more highly ranked) to get them to make a fast decision on publication. If the higher-ranked journal decides to accept the paper, the process repeats at subsequently higher-ranked journals, until no one higher in the "food chain" agrees to publish the paper.
This is, in many respects, a form of tournament, and one of these days I or someone with better game theory skills than I will get around to analyzing this process from a theoretical perspective. In the meantime, note two things: First, law review editors use the information in the expedite process as an informational shortcut; among other things, that fact gives authors who are good at expediting their work an advantage that is -- arguably -- independent of the quality of the work in question.
Second, the combination of multiple submissions and the expedite process means that the overall publication process is wildly inefficient, with vastly larger numbers of people redundantly (albeit cursorily) reviewing the quality of scholarly work each period. By itself, inefficiency may or may not be something to worry about; and here, it might even be a good thing if student editors get valuable experience from the review process. But another downside is that the process prevents a move to anything else, because -- particularly with multiple submissions -- peer review would be impossible. (If you don't understand why, imagine being one of two experts on 3rd Amendment law when the "other" expert submits a manuscript to 150 peer-reviewed law reviews simultaneously; within hours or days, you'd get 150 requests to review the same manuscript. Even I don't do 150 manuscript reviews a year.)
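The arithmetic behind that reviewer-deluge point can be made concrete. Here is a minimal back-of-the-envelope sketch; all numbers (journal counts, referees per journal) are illustrative assumptions, not data from this post:

```python
# Illustrative model of aggregate review burden per manuscript under
# single vs. simultaneous multiple submission. All inputs are assumed.

def review_requests(papers, journals_per_paper, reviewers_per_journal):
    """Total peer-review requests generated in one submission cycle."""
    return papers * journals_per_paper * reviewers_per_journal

# Single submission: one manuscript, one journal at a time, two referees.
single = review_requests(papers=1, journals_per_paper=1, reviewers_per_journal=2)

# Carpet-bomb submission: the same manuscript sent to 150 journals at once.
multiple = review_requests(papers=1, journals_per_paper=150, reviewers_per_journal=2)

print(single, multiple)  # 2 vs. 300 requests for one manuscript
```

Even under conservative assumptions, simultaneous submission multiplies the profession's total review burden by the number of journals submitted to, which is why multiple submissions and pre-publication peer review cannot coexist.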
In the comments to his Concurring Opinions post, Dan Solove is nice enough to admit that he is -- and to suggest that law professors as a class are -- insufficiently interested in the quality of the work in their discipline to invest their time in pre-publication peer review, as scholars in literally every other academic discipline do as a matter of professional responsibility. Assuming he's right -- and he's the John Marshall Harlan Research Professor at GWU, so why wouldn't he be? -- I would suggest that either (a) law professors are, as a group, insufficiently imbued with a sense of professional obligation, or (b) law professors are aware of the deluge of reviewing that would come with a wholesale move to peer review under multiple submissions, and recognize that it would constitute an impossible burden on the professoriate. I'll leave it to the reader to decide which is more likely to be the case, but do go read Professor Solove's comments before making the call.
4. "Editing." You're 25. You're overworked. You're crippled by the Blue Book (see below). And yet, if the author disagrees with your interpretation about when a comma is or isn't Oxford, they don't get published in your journal. No thanks.
5. The Blue Book. It's a terrible style, seemingly designed for a long-gone era. Something as simple as figuring out which citation a particular passage refers to typically sets off a scroll-fest back through at least one previous footnote, thanks to the short form every reader dreads: Id. As a literary device, it's fine, but law reviews aren't literature; parentheticals or cites to numbered references would be far superior.
Finally: One sad consequence of all this terribleness is that law professors and law schools are not taken as seriously by other members of the university community as they might be. To take but one example, I have observed open, disbelieving derision from psychologists, engineers, chemists, business profs, and the like when I explain that yes, the members of the law school think that five or six non-peer-reviewed, student-edited papers constitute a tenurable record at a major research university. (Throw in the fact that lawprofs have only three years of post-undergraduate education, have typically generated zero grant dollars or patents, and have taught precious few courses, and you can see why law schools are often the poor relations of the university community.)
When I was a new faculty member doing work on law and courts, a senior member of my (political science) department who also did work on law and courts told me: "If you publish an article in a law review -- even a very good one -- not only will it not count toward tenure, but we'll take it as a sign of your stupidity." As law reviews are currently constituted, that strikes me as a good position to take.
* And before you go looking: Yes, I have published two or three pieces in law reviews. In every instance, I'd already been tenured and promoted (and so was close to indifferent about the matter), and my coauthors all had compelling professional reasons for wanting their work to appear in a law journal.
Those seeking to become law professors (particularly those heading to the AALS Recruitment Conference this weekend) will benefit from a recent paper by Tracey George (Vanderbilt) and Albert Yoon (Toronto). In The Labor Market for New Law Professors, George and Yoon assess the market from an empirical perspective and carefully consider an array of variables. While the paper rests on survey data from a single academic year (2007-08) -- and important changes in the law faculty hiring market may be underway this year -- it is easily the most comprehensive and current data-driven assessment of law faculty hiring. The abstract follows.
"Law school professors control the production of lawyers and influence the evolution of law. Understanding who is hired as a tenure-track law professor is of clear importance to debates about the state of legal education in the United States. But while opinions abound on the law school hiring process, little is empirically known about what explains success in the market for law professors. Using a unique and extensive data set of survey responses from candidates in the 2007-2008 legal academic labor market, we examine the factors that influence which candidates are interviewed and ultimately hired by law schools. We find that law schools appear open to non-traditional candidates in the early phases of the hiring process but when it comes to the ultimate decision — hiring — they focus on candidates who look like current law professors."
Debates about law school rankings and notions of "hierarchy" typically generate more ink than insight. A recent paper by Olufunmilayo Arewa (UC-Irvine), Andrew P. Morriss (Alabama), and William D. Henderson (Indiana), Enduring Hierarchies in American Legal Education, however, is one notable exception. The paper draws on a rich and diverse array of data sets, some of which span decades. (By sheer happenstance, the paper's circulation coincides with the distribution of US News ballots for its annual (2014) rankings.) Equally important, this paper contributes to a foundation for future empirical work on legal education and law schools. The abstract follows.
"While much attention has been paid to U.S. News & World Report’s rankings of U.S. law schools, the hierarchy it describes is a long-standing one rather than a recent innovation. In this Article, we show the presence of a consistent hierarchy of U.S. law schools from the 1930s to the present, provide a categorization of law schools for use in research on trends in legal education, and examine the impact of U.S. News’s introduction of a national, ordinal ranking on this established hierarchy. The Article examines the impact of such hierarchies for a range of decision-making in law school contexts, including the role of hierarchies in promotion, tenure, publication, and admissions, for employers in hiring, and for prospective law students in choosing a law school. This Article concludes with suggestions for ways the legal academy can move beyond existing hierarchies and at the same time address issues of pressing concern in the legal education sector. Finally, the Article provides a categorization of law schools across time that can serve as a basis for future empirical work on trends in legal education and scholarship."
Drawing from an array of data sources and focusing on civil jury awards, Bert Kritzer (Minn.), Guangya Liu (Duke), and Neil Vidmar (Duke), in An Exploration of 'Non-Economic' Damages in Civil Jury Awards, explore the relations between economic and non-economic damages, as well as the degree to which the ratio of non-economic to economic damages is informed by the magnitude of economic damages. A summary of their key findings follows.
"We found a mixture of consistent and inconsistent patterns across our various datasets. One fairly consistent pattern was the tendency for the ratio of non-economic to economic damages to decline as the amount of economic damages increased. Moreover, the variability of the ratio also tended to decline as the amount of economic damages increased. We found less consistency in our simple regression models where we predicted the log of non-economic damages from the log of economic damages."
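For readers unfamiliar with the log-log specification the authors describe, here is a minimal sketch on synthetic data. The coefficients and sample below are invented for illustration and are not the authors' datasets; the point is only to show how a slope below 1 in a log-log regression produces the declining-ratio pattern the summary reports.

```python
# Synthetic illustration (assumed parameters, not the authors' data):
# regress log non-economic damages on log economic damages, then check
# that the non-economic/economic ratio falls as economic damages grow.
import numpy as np

rng = np.random.default_rng(0)
log_econ = rng.uniform(8, 16, size=500)                 # log economic damages
# Assume a true slope below 1, so the damages ratio declines with size.
log_nonecon = 2.0 + 0.8 * log_econ + rng.normal(0, 0.5, size=500)

slope, intercept = np.polyfit(log_econ, log_nonecon, 1)

# Implied ratio at a small vs. a large economic award:
# ratio(x) = exp(intercept + slope*x) / exp(x), decreasing when slope < 1.
ratio_small = np.exp(intercept + slope * 9) / np.exp(9)
ratio_large = np.exp(intercept + slope * 15) / np.exp(15)
print(round(slope, 2), ratio_small > ratio_large)
```

Because the fitted slope is below 1, predicted non-economic damages grow less than proportionally with economic damages, so the ratio shrinks for larger awards, which is the "fairly consistent pattern" the authors describe.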
John Pfaff (Fordham) continues his mass incarceration series over at PrawfsBlawg with this post on his explanation for prison growth in the U.S. According to John, the “'Standard Story' of prison growth given by academics, policymakers, and the press alike [emphasizing the war on drugs], is basically broken, giving lots of attention to factors that don’t matter that much, and overlooking (if not actively downplaying) the ones that do." Instead, after noting that "prison populations continue to rise even as violent and property crime decline and plateau," John identifies prosecutors as the "primary engine of prison growth, at least since crime began its decline in the early . . . ."
The scholarly peer review process is a human process and thus, by definition, far from perfect. Type I and Type II errors abound, along with good-faith differences of opinion. Some truly outstanding papers emerge in less prestigious journals, and first-class journals sometimes publish truly flawed papers. As Andrew Gelman's (Columbia--Statistics) post helpfully reminds us, however, the "peer review" process hardly ends with publication, especially for empirical work. Indeed, publication can often simply mark the beginning of another, enduring round of "peer reviews," for better and worse, for journals and authors alike.
As a co-editor, I note with pride that JELS 10:3 maintains JELS's more-than-decade-long perfect record of on-time publication and includes a wonderful collection of diverse and interesting papers. Topics in this issue range from data on the Indian Supreme Court's workload (here) to multidistrict litigation transfers and consolidations (here).