Lawprofs are rushing to the defense of law reviews after Adam Liptak's article in the New York Times. I won't rehash the various positions here. I'm pretty sure that a fair bit of this reaction is motivated by a mix of turf-protection and self-(re)validation: it's hard to hear that the esteemed, highly-selective publications in which you made your professional career are terrible.
But they are.*
Liptak gets some of the reasons for their terribleness right, but misses a few, and includes some extraneous things as well. For example, the fact that law reviews are not generally cited by courts or read by practitioners is -- in my opinion -- immaterial. But here are five reasons why they are, in fact, terrible.
1. Carpet-bomb submissions. If you're unaware of how this works: ExpressO. One submits one's paper to literally hundreds of journals at the same time. Scientific fields, by contrast, require single submission -- and they're not alone. Philosophy journals require single submission. History journals do. PMLA does. Even the fiction-publishing industry doesn't condone this behavior to the same extent, in part because it has some seriously negative consequences. More on that below.
2. Publication "review." So, thanks to ExpressO and its ilk, every half-credible law journal receives hundreds, and sometimes thousands, of submissions each submission season. (The existence of "submission seasons," as opposed to rolling submissions, is also terrible, but not sufficiently so to be worth its own bullet point.) At most reasonably prestigious schools, publication review goes something like this:
A. "Is this person a current or former federal judge, a current or former Attorney or Solicitor General, at (a top-20-ish) law school, at (our law school), or otherwise someone I -- a third-year law student -- have heard of? Yes? Go to C. No? Go to B."
B. "Have they indicated that they are expediting (see below) because of an acceptance at a lower-ranked journal? Yes? Go to C. No? Reject."
C. "Is the work interesting and (in my opinion) compelling? Yes? Publish if there's space. No? Reject."
The result gets published, unless it gets expedited up to a higher-ranked journal (again, see below).
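The triage above amounts to a decision procedure, and it can be caricatured in a few lines of code. This is a satirical sketch only: the prestige signals, argument names, and thresholds below are all invented for illustration, not any journal's actual policy.

```python
# Satirical sketch of the A/B/C triage described above; every
# signal and label here is invented for illustration.
PRESTIGE_SIGNALS = {"federal judge", "solicitor general", "top-20 school",
                    "our school", "someone I've heard of"}

def triage(author_signals, expedited_from_lower_journal, looks_compelling,
           has_space):
    # Step A: the prestige screen comes first.
    if not (PRESTIGE_SIGNALS & set(author_signals)):
        # Step B: an expedite from a lower-ranked journal is the only
        # other way past the screen.
        if not expedited_from_lower_journal:
            return "reject"
    # Step C: quality enters the process only now.
    if looks_compelling and has_space:
        return "publish"
    return "reject"
```

Note where quality (`looks_compelling`) appears: only after both prestige checks have been passed, which is exactly the complaint in the next paragraph.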
Note how late actual quality -- even in the judgment of the 3Ls in charge -- appears in the process. In a credible system, quality would play a larger role than the author's name/title/affiliation, or the people cited in the first footnote. And of course, all this misses the fact that "quality" is itself determined not only by the editors of the journal -- something that occurs only at the margin in most disciplines -- but by third-year law students.
3. Expediting. For the uninitiated: Once one's paper is accepted at a law journal, the standard practice is to "expedite" review by notifying all journals in which you'd prefer your paper to be published over the one that has accepted it (generally but not always, those that are more highly ranked) to get them to make a fast decision on publication. If the higher-ranked journal decides to accept the paper, the process repeats at subsequently higher-ranked journals, until no one higher in the "food chain" agrees to publish the paper.
This is, in many respects, a form of tournament, and one of these days I or someone with better game theory skills than I will get around to analyzing this process from a theoretical perspective. In the meantime, note two things: First, law review editors use the information in the expedite process as an informational shortcut; among other things, that fact gives authors who are good at expediting their work an advantage that is -- arguably -- independent of the quality of the work in question.
Second, the combination of multiple submissions and the expedite process means that the overall publication process is wildly inefficient, with vastly larger numbers of people redundantly (albeit cursorily) reviewing the quality of scholarly work each period. By itself, inefficiency may or may not be something to worry about; and here, it might even be a good thing if student editors get valuable experience from the review process. But another downside is that the process prevents a move to anything else, because -- particularly with multiple submissions -- peer review would be impossible. (If you don't understand why, imagine being one of two experts on Third Amendment law when the "other" expert submits a manuscript to 150 peer-reviewed law reviews simultaneously; within hours or days, you'd get 150 requests to review the same manuscript. Even I don't do 150 manuscript reviews a year.)
In the comments to his Concurring Opinions post, Dan Solove is nice enough to admit that he is -- and suggest that law professors as a class are -- insufficiently interested in the quality of the work in their discipline to invest their time into pre-publication peer review, as scholars in literally every other academic discipline do as a matter of professional responsibility. Assuming he's right -- and he's the John Marshall Harlan Research Professor at GWU, so why wouldn't he be? -- I would suggest that either (a) law professors are, as a group, insufficiently imbued with a sense of professional obligation, or (b) law professors are aware of the deluge of reviewing that would come with a wholesale move to peer review under multiple submissions, and recognize that it would constitute an impossible burden on the professoriate in that field. I'll leave it to the reader to decide which is more likely to be the case, but do go read Professor Solove's comments before making the call.
4. "Editing." You're 25. You're overworked. You're crippled by the Blue Book (see below). And yet, if the author disagrees with your interpretation about when a comma is or isn't Oxford, they don't get published in your journal. No thanks.
5. The Blue Book. It's a terrible style, seemingly designed for a long-gone era. Something as simple as knowing what citation a particular passage refers to typically sets off a scroll-fest back to at least one previous footnote, thanks to the short cite every reader dreads: Id. As a literary decision, it's fine, but law reviews aren't literature; parentheticals or cites to numbered references would be far superior.
Finally: One sad consequence of all this terribleness is that law professors and law schools are not taken as seriously by other members of the university community as they might be. To take but one example, I have observed open, unbelieving derision from psychologists, engineers, chemists, business profs, and the like when I explain that yes, the members of the law school think that five or six non-peer-reviewed, student-edited papers constitute a tenureable record at a major research university. (Throw in the fact that lawprofs have three years of post-undergraduate education, have typically generated zero grant dollars or patents, and have taught precious few courses, and you can see why law schools are often the poor relations of the university community.)
When I was a new faculty member doing work on law and courts, a senior member of my (political science) department who also did work on law and courts told me, "If you publish an article in a law review -- even a very good one -- not only will it not count toward tenure, but we'll take it as a sign of your stupidity." As law reviews are currently constituted, that strikes me as a good position to take.
* And before you go looking: Yes, I have published two or three pieces in law reviews. In every instance, I'd already been tenured and promoted (and so was close to indifferent about the matter), and my coauthors all had compelling professional reasons for wanting their work to appear in a law journal.
Those seeking to become law professors (particularly those heading to the AALS Recruitment Conference this weekend) will benefit from a recent paper by Tracey George (Vanderbilt) and Albert Yoon (Toronto). In The Labor Market for New Law Professors, George and Yoon assess the market from an empirical perspective and carefully consider an array of variables. While the paper draws on survey data from a single academic year (2007-08)--and important market changes in law faculty hiring may be underway this year--it is easily the most comprehensive and current data-driven assessment of law faculty hiring. The abstract follows.
"Law school professors control the production of lawyers and influence the evolution of law. Understanding who is hired as a tenure-track law professor is of clear importance to debates about the state of legal education in the United States. But while opinions abound on the law school hiring process, little is empirically known about what explains success in the market for law professors. Using a unique and extensive data set of survey responses from candidates in the 2007-2008 legal academic labor market, we examine the factors that influence which candidates are interviewed and ultimately hired by law schools. We find that law schools appear open to non-traditional candidates in the early phases of the hiring process but when it comes to the ultimate decision — hiring — they focus on candidates who look like current law professors."
Debates about law school rankings and notions of "hierarchy" typically generate more ink than insight. A recent paper by Olufunmilayo Arewa (UC-Irvine), Andrew P. Morriss (Alabama), and William D. Henderson (Indiana), Enduring Hierarchies in American Legal Education, however, is one notable exception. The paper draws on a rich and diverse array of data sets, some of which span decades. (By sheer happenstance, the paper's circulation coincides with the distribution of US News ballots for its annual (2014) rankings.) Equally important, this paper contributes to a foundation for future empirical work on legal education and law schools. The abstract follows.
"While much attention has been paid to U.S. News & World Report’s rankings of U.S. law schools, the hierarchy it describes is a long-standing one rather than a recent innovation. In this Article, we show the presence of a consistent hierarchy of U.S. law schools from the 1930s to the present, provide a categorization of law schools for use in research on trends in legal education, and examine the impact of U.S. News’s introduction of a national, ordinal ranking on this established hierarchy. The Article examines the impact of such hierarchies for a range of decision-making in law school contexts, including the role of hierarchies in promotion, tenure, publication, and admissions, for employers in hiring, and for prospective law students in choosing a law school. This Article concludes with suggestions for ways the legal academy can move beyond existing hierarchies and at the same time address issues of pressing concern in the legal education sector. Finally, the Article provides a categorization of law schools across time that can serve as a basis for future empirical work on trends in legal education and scholarship."
In a very brief (7 pp.) technical--though informative--paper circulating on SSRN, Empirical Studies of Copyright Litigation: Nature of Suit Coding, Matthew Sag (Loyola-Chicago) assesses the reliability of variable coding in a database commonly used by legal scholars (PACER). Specifically, the paper focuses on the "Nature of Suit" variable in PACER records for empirical studies of copyright litigation. While Sag finds that the variable does not, in fact, capture all copyright cases, it nonetheless remains sufficient "for most purposes." Sag notes that the variable is especially suspect for copyright cases that involved pro se litigants and those where copyright was not the primary litigated issue. Sag estimates that the "820 code" captures "80 to 85% of true copyright cases leading to written opinions."
Princeton's Program in Law and Public Affairs (LAPA) invites "outstanding faculty members, independent scholars, lawyers, and judges to apply for visiting, residential appointments for the academic year 2014–2015. Successful candidates will devote an academic year in residence at Princeton engaging in their own research and in the intellectual life of the University. For 2014-2015, we plan to name up to five general LAPA Fellows, plus one LAPA/Perkins Fellowship in Law and Humanistic Inquiry for scholars at the early stages of their careers. Applicants to the program will be considered for all of the applicable fellowships, depending upon the applicant's proposed research project and qualifications."

While past LAPA Fellows selections tilt towards law and humanities, qualitative, and comparative scholars, LAPA advertises that it includes an interest in, among other areas, “law-related subjects of empirical … significance.”
For those interested, on-line applications can be found here. The application deadline is 5:00 PM (EST), Monday, November 4, 2013.
A recent post on the Stata blog includes a quite helpful explication of effect size, and various measures of it. Effect size is important, in part, because results are often "assessed by statistical significance, usually that the p-value is less than 0.05. P-values and statistical significance, however, don’t tell us anything about practical significance" (emphasis added). The following hypo (from the post) illustrates:
"What if I told you that I had developed a new weight-loss pill and that the difference between the average weight loss for people who took the pill and those who took a placebo was statistically significant? Would you buy my new pill? If you were overweight, you might reply, 'Of course! I’ll take two bottles and a large order of french fries to go!' Now let me add that the average difference in weight loss was only one pound over the year. Still interested? My results may be statistically significant but they are not practically significant. Or what if I told you that the difference in weight loss was not statistically significant — the p-value was 'only' 0.06 — but the average difference over the year was 20 pounds? You might very well be interested in that pill. The size of the effect tells us about the practical significance. P-values do not assess practical significance."
Finally, one more practical reason to attend to effect size is that a growing (albeit small) number of journals now require reporting it.
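To make the distinction concrete, one common effect-size measure is Cohen's d, the standardized mean difference between two groups. The sketch below is illustrative only: the function is a minimal textbook implementation, and the weight-loss numbers in the comments are invented to mirror the hypo above, not data from any study.

```python
# Minimal sketch: Cohen's d (standardized mean difference, pooled SD).
import statistics

def cohens_d(a, b):
    """Cohen's d for two independent samples a and b."""
    na, nb = len(a), len(b)
    var_a, var_b = statistics.variance(a), statistics.variance(b)
    # Pooled standard deviation across the two groups.
    pooled_sd = (((na - 1) * var_a + (nb - 1) * var_b) / (na + nb - 2)) ** 0.5
    return (statistics.mean(a) - statistics.mean(b)) / pooled_sd

# Mirroring the hypo (invented numbers): a 1-pound mean difference against
# an SD of roughly 10 pounds gives d of about 0.1 -- a trivial effect that a
# huge sample can still render "statistically significant." A 20-pound mean
# difference against the same SD gives d of about 2.0 -- a huge effect,
# even if a small sample leaves the p-value at 0.06.
```

The point of the hypo, restated: the p-value depends on sample size as well as effect, while d speaks directly to the magnitude that matters in practice.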
Drawing from an array of data sources and focusing on civil jury awards, Bert Kritzer (Minn.), Guangya Liu (Duke), and Neil Vidmar (Duke), in An Exploration of 'Non-Economic' Damages in Civil Jury Awards, explore relations between economic and non-economic damages, as well as the degree to which the ratio of non-economic-to-economic damages is informed by the magnitude of economic damages. A summary of their key findings follows.
"We found a mixture of consistent and inconsistent patterns across our various datasets. One fairly consistent pattern was the tendency for the ratio of non-economic to economic damages to decline as the amount of economic damages increased. Moreover, the variability of the ratio also tended to decline as the amount of economic damages increased. We found less consistency in our simple regression models where we predicted the log of non-economic damages from the log of economic damages."
John Pfaff (Fordham) continues his mass incarceration series over at PrawfsBlawg with this post on his explanation for prison growth in the U.S. According to John, the "'Standard Story' of prison growth given by academics, policymakers, and the press alike [emphasizing the war on drugs], is basically broken, giving lots of attention to factors that don’t matter that much, and overlooking (if not actively downplaying) the ones that do." Instead, after noting that "prison populations continue to rise even as violent and property crime decline and plateau," John identifies prosecutors as the "primary engine of prison growth," at least since crime began its decline in the early 1990s.
Jeremy Blumenthal (Syracuse) asked that I share the following Call For Papers, and I'm delighted to do so.
The upcoming American Psychology/Law Conference in New Orleans, next March 2014, is particularly recruiting legal scholars’ work. All law-and-psychology-related work is welcome, with a separate review process for non-empirical legal work that relates to psychology. An abstract and a 1,000-word summary are required for individual papers; panels of papers with abstracts are also welcome. Instructions and further conference information are here. And the login page to submit papers is here.

The submission deadline is Sept. 30. Those with any questions/comments should contact Jeremy directly at: email@example.com
The scholarly peer review process is a human process and thus, by definition, far from perfect. Type I and Type II errors abound, along with good-faith differences of opinion. Some truly outstanding papers emerge in less-prestigious journals, and first-class journals sometimes publish truly flawed papers. As Andrew Gelman's (Columbia--Statistics) post helpfully reminds us, however, the "peer review" process hardly ends with publication, especially for empirical work. Indeed, publication can often simply mark the beginning of another, enduring round of "peer reviews," for better and worse, for journals and authors.
As a co-editor I note with pride that JELS 10:3 maintains JELS' more-than-decade-long perfect record of on-time publication and includes a wonderful collection of diverse and interesting papers. Topics in this issue range from data on the Indian Supreme Court's workload (here) to multidistrict litigation transfers and consolidations (here).
Prompted by my prior post discussing statistical significance levels, my Cornell colleague Ted Eisenberg passed along this 1982 paper from the American Psychologist by Michael Cowles (York) and Caroline Davis (York) discussing the emergence of the p < 0.05 threshold as the "standard" in the social sciences. Cowles and Davis argue that the move to the p < 0.05 threshold pre-dates Sir Ronald Fisher's contribution.