The scholarly peer review process is a human process and thus, inevitably, far from perfect. Type 1 and Type 2 errors abound, along with good-faith differences of opinion. Some truly outstanding papers emerge in less-prestigious journals, and first-class journals sometimes publish deeply flawed papers. As Andrew Gelman's (Columbia--Statistics) post helpfully reminds us, however, the "peer review" process hardly ends with publication, especially for empirical work. Indeed, publication can often simply mark the beginning of another, enduring round of "peer reviews," for better and for worse, for journals and authors alike.
As a co-editor I note with pride that JELS 10:3 extends JELS's perfect record of on-time publication (now more than a decade long) and includes a wonderful collection of diverse and interesting papers. Topics in this issue range from data on the Indian Supreme Court's workload (here) to multidistrict litigation transfers and consolidations (here).
A series of posts (one dating back to 2004) by Andrew Gelman (Columbia--Statistics), including links to helpful papers, provides a nice overview of the array of potential errors flowing from under-powered studies. A refresher on the nomenclature (here) sets forth the basics.
"A Type 1 error is commtted if we reject the null hypothesis when it is true. A Type 2 error is committed if we accept the null hypothesis when it is false. A Type S error is an error of sign. A Type M error is an error of magnitude."
Later posts (here), along with relevant papers (here and here), develop the issue further. Because far too many of the papers I see neglect to report results from basic power tests, these links might interest some readers.
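To make the Type S/Type M distinction concrete, here is a minimal simulation in the spirit of Gelman's posts (not code from any of the linked papers). The true effect, standard error, and number of simulated studies are hypothetical illustration values.

```python
# Minimal sketch: how an under-powered design produces Type S (sign)
# and Type M (magnitude) errors. All parameter values are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

true_effect = 0.2   # small true effect (hypothetical)
se = 0.5            # standard error of the estimate (hypothetical)
n_sims = 100_000    # number of simulated studies

# Each simulated study yields a normally distributed effect estimate.
estimates = rng.normal(true_effect, se, n_sims)
significant = np.abs(estimates / se) > 1.96   # two-sided test, alpha = .05

power = significant.mean()
type_s = (estimates[significant] < 0).mean()                  # wrong sign, given significance
type_m = np.abs(estimates[significant]).mean() / true_effect  # exaggeration ratio

print(f"power ~ {power:.2f}; Type S rate ~ {type_s:.3f}; "
      f"Type M (exaggeration) ratio ~ {type_m:.1f}")
```

Run as written, power lands around 7 percent, more than a tenth of the "significant" estimates carry the wrong sign, and the significant estimates overstate the true effect roughly six-fold on average, which is exactly the concern a basic power test is meant to flag before the study runs.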
Paul Collins, Jr. (Univ. N. Texas--Poli Sci) notes the public availability of the U.S. Supreme Court Confirmation Hearings Database that he and co-author Lori Ringhand used in their recent book, Supreme Court Confirmation Hearings and Constitutional Change (Cambridge University Press, 2013).
"This database provides a wealth of information regarding the confirmation
hearings of U.S. Supreme Court nominees held before the Senate Judiciary
Committee. Based on confirmation hearing dialogue, the dataset includes
information on the political environment surrounding the nomination, the issue
and subissue areas being discussed, and the manner in which the nominees answer
senators' questions. In addition, the database contains information on the
discussion of judicial decisions at the hearings, including the name of the
decisions and the courts that rendered the cases debated at the hearings."
Those who visit the site (here) will note that it includes a growing compilation of data sets--most germane to federal court judicial decisionmaking--linked to selected publications that exploit the data. Also, relevant Stata do files accompany many of the data sets. Not only does this site help disseminate useful data, but it also facilitates replication efforts.
Two final notes. First, Paul's web site includes examples of "best practices" that should be widely emulated when JELS editors (myself included) or JELS referees request data and do files from authors incident to manuscript reviews. Second, graduate and law students seeking to learn empirical methods will find these "ready-to-use" data sets invaluable.
Many folks, especially those who conduct psychology experiments, will want to consider carefully Dan Kahan's (Yale) recent post about problems with relying on Mechanical Turk for data. More specifically, Kahan discusses "the invalidity of studies that use samples of Mechanical Turk workers to test hypotheses about cognition and political conflict over societal risks and other policy-relevant facts."
According to Dan, the "three decisive 'sample validity' problems" are: selection bias; prior, repeated exposure to study measures; and subjects' misrepresentation of their nationality. These problems, according to Dan, render Mechanical Turk samples particularly problematic for studies of culturally or ideologically grounded forms of "motivated reasoning."
I am delighted to note that Stanford Law Review's current issue (65:6) focuses exclusively on empirical legal studies. The issue features essays from six Stanford faculty members who used papers presented at last year's CELS (hosted by Stanford Law School) as a starting point to discuss the state of empirical work in an array of substantive legal sub-fields. The introductory essay, The Empirical Revolution in Law, by Prof. Dan Ho (Stanford) and Stanford Law's former Dean, Larry Kramer, provides a nice frame and overview. I recommend the entire issue to all.
I should also note that the 2013 CELS, hosted by Penn Law School, is scheduled for October 25-26, 2013. Those hoping to present papers or posters need to submit by no later than midnight (EST) on Friday, July 12, 2013.
The trend is palpable. As Prof. Witmer-Rich (Cleveland State) explains in this post, a Delayed Notice Search Warrant (aka "Sneak and Peek" searches and "Black Bag" jobs) involves "the police conduct[ing] a covert search of a home or business when the occupant is away. Sometime later, they give the occupant notice of the search—maybe days, weeks or months (today, 90 days is most common)." Those interested in this issue should pull Witmer-Rich's forthcoming paper, The Rapid Rise of "Sneak and Peek" Searches, and the Fourth Amendment "Rule Requiring Notice."
While Gelman (Columbia--Statistics) (and Liebowitz) approaches the issue of trends in co-authorship from the perspective of economists in this post, the cost-benefit issues incident to co-authorship carry over to legal scholars as well. Although the norm in legal scholarship tilts away from co-authorship, that norm is evolving in real time; indeed, the increase in empirical legal scholarship contributes to this evolution. On the co-authorship question, Gelman is comparatively bullish: "I [Gelman] have a different perspective in that I think even a small collaboration by a coauthor can make an article or book much stronger. Given that this seems to hold in statistics, where we publish dozens of papers a year, I'd think it would be even more the case in economics, where researchers take years to publish a single article."
Criticizing legal scholarship has become something of a sport for many, including federal judges. Chief Justice Roberts, for example, recently opined that "because law review articles are not of interest to the bench," he has trouble remembering the last law review article he read.
David Schwartz (Chicago-Kent) and Lee Petherbridge (Loyola-LA) subject the general claim to data. In a series of papers the authors present findings on when an opinion (majority, dissent, or concurrence) cites to legal scholarship in the U.S. Supreme Court, Courts of Appeals, and Federal Circuit. For Supreme Court opinions, the authors find that legal scholarship citations "sharply vary across different types of legal issues." Click here for a quick summary of the papers (and the data sets).
A recent post by David Schwartz (Chicago-Kent)--wondering whether empirical legal scholars should shoulder "special ethical responsibilities"--ignited a fascinating (and timely) discussion over at Concurring Opinions. Two reasons prompt Schwartz's concerns. "First, nearly all law reviews lack formal peer review. The lack of peer review potentially permits dubious data to be reported without differentiation alongside quality data. Second, empirical legal scholarship has the potential to be extremely influential on policy debates because it provides 'data' to substantiate or refute claims. Unfortunately, many consumers of empirical legal scholarship — including other legal scholars, practitioners, judges, the media, and policy makers — are not sophisticated in empirical methods."
Schwartz's concern focuses on what he calls "weak data." By that he means "reporting [results from] data that encourages weak or flawed inferences, that is not statistically significant, or that is of extremely limited value and thus may be misused." Specifically, "[t]he precise question I have been considering is under what circumstances one should report weak data, even with an appropriate explanation of the methodology used and its potential limitations."
Whether you agree with Schwartz or not, he raises an important question that warrants attention.
Responding to the recent debacle in which a grad student uncovered a glaring error in a paper by noted Harvard economists (here), Betsey Stevenson (Mich.) and Justin Wolfers (Mich.) initiated a (now growing) list of suggestions on how to minimize errors in empirical research. Not surprisingly, others, including Andrew Gelman (Columbia), have added to the list (here). While the list will inevitably grow, it already includes basic, helpful reminders for even the most experienced researchers.
A recent news story underscores the importance of basic replication (as well as scholarly attention to detail) for empiricists.
"His [Thomas Herndon's] professors at the University of
Massachusetts-Amherst had set his graduate class an assignment--pick an
economics paper and see if you can replicate the results. It's a good
exercise for aspiring researchers. Thomas chose Growth in a Time of Debt. It was getting a lot of
attention, but intuitively, he says, he was dubious about its findings."
Turns out the grad student's intuition was dead-on: core results from the influential economics article--authored by two leading Harvard economists--could not be replicated. Herndon's replication efforts uncovered a basic error in the spreadsheet. "The Harvard professors had accidentally only included 15 of the 20 countries under analysis in their key calculation (of average GDP growth in countries with high public debt). Australia, Austria, Belgium, Canada and Denmark were missing." In addition, other data for some countries were missing.
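For readers who assign (or attempt) such replication exercises, the toy sketch below shows how easily a spreadsheet-style range error of this kind slips through. The growth figures are randomly generated placeholders, not the actual Reinhart-Rogoff data; only the structure of the mistake (averaging 15 of 20 rows) mirrors the story above.

```python
# Toy illustration of a truncated-range averaging error.
# All numbers are made up; this is not the Reinhart-Rogoff data.
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical average GDP growth rates for 20 high-debt countries.
gdp_growth = rng.normal(2.0, 1.5, 20)

full_mean = gdp_growth.mean()            # the calculation as intended: all 20 countries
truncated_mean = gdp_growth[:15].mean()  # an AVERAGE() range that stops 5 rows short

print(f"all 20 countries: {full_mean:.2f}%")
print(f"first 15 only:    {truncated_mean:.2f}%")
```

A replication script that recomputes every headline number from the raw data--rather than trusting spreadsheet cell references--catches this class of error immediately.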
While questions about who owns judges' official working papers implicate legal historians most directly, such (admittedly complex) questions should also interest empirical legal scholars. In "Judges and Their Papers," Kathryn Watts (Washington) makes the case that judicial papers should be construed as public rather than private property. An excerpted abstract follows.
"This Article is the first to give significant attention to the question of who should own federal judges' working papers and what should happen to the papers once a judge leaves the bench. Upon the 35th anniversary of the enactment of the Presidential Records Act, this Article argues that judges' working papers should be treated as governmental property — just as presidential papers are. Although there are important differences between the roles of President and judge, none of the differences suggest that judicial papers should be treated as a species of private property. Rather, the unique position of federal judges, including the judiciary's independence, should be taken into account when crafting rules that speak to reasonable access to and disposition of judicial papers — not when answering the threshold question of ownership. Ultimately, this Article — giving renewed attention to a long forgotten 1977 governmental study commissioned by Congress — argues that Congress should declare judicial papers public property and should empower the judiciary to promulgate rules implementing the shift to public ownership. These would include, for example, rules governing the timing of public release of judicial papers. By involving the judiciary in implementing the shift to public ownership, Congress would enhance the likelihood of judicial cooperation, mitigate separation of powers concerns, and enable the judiciary to safeguard judicial independence, collegiality and confidentiality."
As you've probably heard, the U.S. News 2014 Law School Rankings are out. Rather than offer commentary, I thought I'd piggyback on Paul Caron's useful post comparing the overall rankings with the peer reputation rankings. So here, for your edification, are the numbers Paul compiled in scatterplot form. (PDF)
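For those who want to reproduce the plot with their own copy of the numbers, a minimal sketch follows. The file name and column names are assumptions (substitute whatever labels you use when transcribing Paul's table); the post itself links only a PDF.

```python
# Minimal sketch of the overall-rank vs. peer-reputation scatterplot.
# "usnews_2014.csv" and its column names are hypothetical; build the
# file yourself from the numbers Paul Caron compiled.
import matplotlib.pyplot as plt
import pandas as pd

df = pd.read_csv("usnews_2014.csv")  # columns: school, peer_rank, overall_rank

fig, ax = plt.subplots()
ax.scatter(df["peer_rank"], df["overall_rank"], s=18)
ax.set_xlabel("Peer reputation rank")
ax.set_ylabel("Overall U.S. News rank")

# Invert both axes so the top-ranked schools appear in the upper right.
ax.invert_xaxis()
ax.invert_yaxis()

fig.savefig("usnews_2014_scatter.pdf")
```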
Now that Cass Sunstein (Harvard) has departed the Obama Administration (and OIRA) and migrated back to academic life, he has published a paper in the University of Chicago Law Review, Empirically Informed Regulation, that illustrates the central role data play (or, at least, should play) in the development of regulations, with an emphasis on behavioral economics. The paper's abstract follows.
"In recent years, social scientists have been incorporating empirical findings about human behavior into economic models. These findings offer important insights for thinking about regulation and its likely consequences. They also offer some suggestions about the appropriate design of effective, low-cost, choice-preserving approaches to regulatory problems, including disclosure requirements, default rules, and simplification. A general lesson is that small, inexpensive policy initiatives can have large and highly beneficial effects. In the United States, a large number of recent practices and reforms reflect an appreciation of this lesson. They also reflect an understanding of the need to ensure that regulations have strong empirical foundations, both through careful analysis of costs and benefits in advance and through retrospective review of what works and what does not."