Once again, and per custom, over at Concurring Opinions Dave Hoffman (Temple) blogs about his reflections on last weekend's CELS. Well worth a read, particularly for those unable to make the trip to Berkeley (as well as for the many who did attend).
A key assumption shared by many who assess federal circuit court decisions is that the three-judge panels hearing cases have been randomly configured. Indeed, scores of scholarly articles have noted this 'fact,' and empirical researchers have relied on it heavily.
How circuit panels are configured is no longer merely an academic question. Adam Liptak, in today's New York Times, reports on a legal challenge pivoting on how the Ninth Circuit assembled its panels to hear important same-sex marriage appeals.
While not necessarily squarely on point for all ELS Blog readers, a general discussion of social science ethics, re-ignited recently by an incident/study involving political scientists at Stanford and Dartmouth (for a description, click here), remains germane to many ELS scholars as well. In a recent post, Andrew Gelman (Columbia--Statistics) discusses suggestions by Macartan Humphreys (Columbia--Poli Sci) on how to think through the ethical dimensions incident to social science research in the field.
My Cornell colleague and leading constitutional law scholar, Mike Dorf, has an interesting and provocative post (here) that speaks to the array (and growing number) of state quarantine measures responding to the Ebola crisis.
The ELS angle, of course, is Mike's point (drawn from CDC data) that: "the log(viral load) just before symptoms develop is 4.6. A day later, the log(viral load) is 7.2. Thus, (assuming linearity to first order) 12 hours after symptoms develop, the log(viral load) is 5.8. That's a change of 1.2 in log(viral load), meaning that the change in viral load more than triples (because e to the 1.2 power is 3.32.)."
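Dorf's back-of-the-envelope interpolation is easy to reproduce; the sketch below assumes, consistent with his use of e, that the CDC figures are natural-log viral loads. One small arithmetic wrinkle: half of the 2.6-unit rise is 1.3, so strict linear interpolation puts the 12-hour value at 5.9 (rather than the quoted 5.8) and the fold change at e^1.3 ≈ 3.67, which is still comfortably more than a tripling.

```python
import math

# Figures quoted from Mike Dorf's post; assumed (per his use of e)
# to be natural-log viral loads.
log_vl_onset = 4.6      # log(viral load) just before symptoms develop
log_vl_next_day = 7.2   # log(viral load) one day later

# First-order (linear) interpolation to 12 hours after symptom onset
change_12h = 0.5 * (log_vl_next_day - log_vl_onset)   # 1.3
log_vl_12h = log_vl_onset + change_12h                # 5.9

# Fold change in viral load over those first 12 hours
fold_change = math.exp(change_12h)                    # e**1.3, about 3.67
print(round(log_vl_12h, 1), round(fold_change, 2))
```

Either way, the substantive point survives: the viral load more than triples in the half-day after symptoms appear.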
In a recent post over at Concurring Opinions, Harry Surden (Colorado) concludes with the following prediction: "In the not too distant future, such data-driven approaches to engaging in legal prediction are likely to become more common within law. Outside of law, data analytics and machine-learning have been transforming industries ranging from medicine to finance, and it is unlikely that law will remain as comparatively untouched by such sweeping changes as it remains today."
If Surden is even partially correct, we should expect to see data increasingly pressed into the service of a more sophisticated legal-outcomes "prediction" business. Of course, the Katz, Bommarito, and Blackman paper's claimed 70.9% successful prediction rate (discussed in Surden's post) needs to be placed into some context. Specifically, as many law profs and appellate litigators instinctively already know, simply by predicting a reversal one can correctly predict the outcome of a Supreme Court case with approximately 56-73% accuracy (for an extended discussion, click here). While a 70.9% prediction rate is important, when it comes to Supreme Court cases the correct baseline is not a Priest-Klein 50%.
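The baseline point is just arithmetic: a rule that always predicts "reverse" is correct exactly as often as the reversal base rate, so a model's headline accuracy should be measured against that rate rather than against a coin flip. A minimal sketch (the 0.63 base rate is an assumed value chosen from inside the cited 56-73% range):

```python
# Assumed reversal base rate, chosen from within the cited 56-73% range
reversal_rate = 0.63

# A trivial "always predict reversal" rule is right exactly that often
baseline_accuracy = reversal_rate

# Headline accuracy reported for the Katz, Bommarito & Blackman model
model_accuracy = 0.709

# The model's real edge is its margin over the naive baseline, not over 50%
edge_over_baseline = model_accuracy - baseline_accuracy
edge_over_coin_flip = model_accuracy - 0.5
print(f"{edge_over_baseline:.1%} vs. {edge_over_coin_flip:.1%}")  # 7.9% vs. 20.9%
```

On these assumed numbers, the model's improvement over the informed baseline is a modest several percentage points, not the twenty-plus points a 50% benchmark would suggest.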
While not squarely in the typical ELS wheelhouse, the following excerpt just stopped me in my tracks.
"In her excellent book, Race to the Top, the journalist Elizabeth Green tells a story of a new hamburger that the A&W Restaurant chain introduced to the masses. Weighing 1/3 of a pound, it was meant to compete with McDonald’s quarter-pounder and was priced comparably. But the 'Third Pounder' failed miserably. Consultants were mystified until they realized many A&W customers believed that they were paying the same for less meat than they got at McDonald’s. Why? Because four is bigger than three, so wouldn’t ¼ be more than 1/3?"
To be sure, this degree of innumeracy is not typically present in law school classrooms (or I certainly hope not). That said, a general ambivalence (at best) or aversion (at worst) towards all things quantitative shapes the stream of students who self-select into law schools.
In an effort to guard against crowding out, in my reviews of recent scholarship I do my best to keep an eye out for examples of particularly strong student (grad or law) work. This effort unearthed a student law review note, Keep it Secret, Keep it Safe: An Empirical Analysis of the State Secrets Doctrine, by Daniel Cassman and forthcoming in the Stanford Law Review. While the analyses are mainly descriptive, the data set is both interesting and useful and lends itself to tests of the 9/11 attacks' impact on the courts' implementation of the underexplored state secrets doctrine. An excerpted abstract follows.
"State secrets doctrine provides both an evidentiary privilege and a categorical bar on certain litigation that implicates national security concerns. The United States government has invoked the state secrets doctrine to insulate certain programs, including rendition and surveillance operations, from oversight by the courts. Despite a surge of interest in state secrets doctrine after September 11, few scholars have employed statistical analysis to analyze courts’ treatment of the issue. This Note employs a new data set containing over 300 state secrets cases to explore state secrets jurisprudence. I find that the number of assertions of the state secrets doctrine since September 11 has increased dramatically. Even so, in cases to which the government is a party, the distribution of courts’ rulings on those assertions is virtually unchanged. In litigation between private parties, however, courts have mostly avoided ruling on state secrets issues since September 11."
Over at PrawfsBlawg, Howard Wasserman (FIU) notes that the latest JOTWELL Courts Law essay, by Lee Epstein (Wash U--Poli Sci), reviews Black & Spriggs' The Citation and Depreciation of U.S. Supreme Court Precedent (10 JELS 325 (2013)). The Black & Spriggs paper "examines how the use of precedent changes and depreciates over time." As Epstein notes, and Black & Spriggs find, "Supreme Court precedents don’t have an especially long shelf life: they depreciate by about 80% between years one and twenty. Interestingly, though, much of the depreciation occurs within the first couple of years." Given these findings, Epstein suggests that "because of its 'here today, gone tomorrow' quality, law professors and lawyers might (re)consider carefully the cases they emphasize in class and in the courtroom."
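The "front-loaded" point can be made concrete with a toy benchmark (mine, not Black & Spriggs' model): if precedent depreciated at a constant annual rate, an 80% loss between years one and twenty would imply roughly 8% depreciation per year, and only about a 16% loss over the first two years, far less than the front-loaded pattern the paper reports.

```python
# Toy constant-rate benchmark (illustrative only, not Black & Spriggs' model):
# what constant annual rate produces an 80% loss between year 1 and year 20
# (i.e., over 19 years)?
retention_over_19_years = 0.20
annual_retention = retention_over_19_years ** (1 / 19)   # about 0.919
annual_depreciation = 1 - annual_retention               # about 8.1% per year

# Under that constant rate, the first two years would shed only ~16%,
# so the observed front-loading runs well ahead of this benchmark.
two_year_loss = 1 - annual_retention ** 2
print(round(annual_depreciation, 3), round(two_year_loss, 3))
```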
Andrew Gelman (Columbia--Statistics) has a nice post (here) that underscores a common point: A general pull towards identifying "typical" responses can deflect researchers from a potentially more interesting story about variation. His second--but often related--point is that it is awfully difficult to overemphasize the need to simply "look" at the data.
As Gelman observes: "The resolution, I think, is that we have to avoid the tendency to think deterministically. There’s variation! As shown in the above histogram, some people reported thinking to be “not at all enjoyable,” some reported it to be “somewhat enjoyable,” and there were a lot of people in the middle. Given this, it’s not so helpful to make statements about what people “typically” enjoy (as in the abstract of the paper)."
The good folks at the Administrative Conference asked that I pass along the following information about a request for research proposals. For those who may not know, the Administrative Conference is a small federal agency that conducts applied research on (and on behalf of) federal agencies. The current request (here, and described below) involves research on federal court review of social security disability decisions.
"The Administrative Conference seeks proposals for a comprehensive study of the Social Security Administration’s litigation in the federal courts involving social security disability claims. The study should provide an independent analysis of the role of courts in reviewing SSA disability decisions and consider measures that SSA could take to reduce the number of cases remanded to it by courts. It should also address significant observed variances among federal courts in decisional outcomes, case management and other procedures for social security cases, the timing of review, and judicial application of agency policies and procedures. Proposals are due by October 31, 2014 and should be submitted in conformance with the attached Request for Proposals to Stephanie Tatham, at: firstname.lastname@example.org"
Stephanie Tatham (Admin. Confr.), the contact person, notes that: "We really need a scholar who is comfortable with empirical research because for the last five years there have been more than 12,000 annual dispositions of social security cases in federal district courts. We are able to provide the consultant with access to disposition data from the Federal Court Cases: Integrated Data Base (unfortunately without judge information). We also will have data from the Social Security Administration on bases for judicial remand identified by their analysts. Given this data, it is an unprecedented research opportunity. Of course, some supplemental research will also be necessary."
As football attracts increased scrutiny on an array of fronts, D1 college football coach compensation is a popular target, and appropriate compensation for coaches remains contested, particularly at public universities. In a recent paper, Are Football Coaches Overpaid? Evidence from Their Employment Contracts, Randall Thomas and R. Lawrence Van Horn, professors at Vanderbilt (a member of a major football power conference, the SEC), inject data and economic theory into the debate. The abstract follows.
"The commentators and the media pay particular attention to the compensation of high profile individuals. Whether these are corporate CEOs, or college football coaches, many critics question whether their levels of remuneration are appropriate. In contrast, corporate governance scholarship has asserted that as long as the compensation is tied to shareholder interests, it is the employment contract and incentives therein which should be the source of scrutiny, not the absolute level of pay itself. We employ this logic to study the compensation contracts of Division I FBS college football coaches during the period 2005-2013. Our analysis finds many commonalities between the structure and incentives of the employment contracts of CEOs and these football coaches. These contracts’ features are consistent with what economic theory would predict. As such we find no evidence that the structure of college football coach contracts is misaligned, or that they are overpaid."
Setting aside the heated politics of immigration, much of the policy debate surrounding immigration reform simply assumes that immigration enforcement reduces crime rates. This central assumption, however, does not benefit from much data. In their paper, Does Immigration Enforcement Reduce Crime? Evidence from 'Secure Communities', Tom Miles (Chicago) and Adam Cox (NYU), bring much-needed data to this research question, exploit a natural experiment created by the Secure Communities program, and "provide the first empirical analysis of the most important deportation initiative to be rolled out in decades." What they find is that the Secure Communities program has led to "no meaningful reductions in the FBI index crime rate. Nor has it reduced rates of violent crime — homicide, rape, robbery, or aggravated assault. This evidence shows that the program has not served its central objective of making communities safer."
Yanna Krupnikov (Stony Brook--Poli Sci) and Adam Seth Levine’s (Cornell--Govt) article, “Cross-Sample Comparisons and External Validity,” offers an interesting (and empirically welcome) contribution to the ongoing debate regarding the use of convenience samples drawn from MTurk. The takeaway, at least in part, is as the authors note in their conclusion: “Our results do serve to sound a note of caution when using MTurk to produce generalizable results for all but the simplest experimental designs.”