While not squarely "empirical," Associate Justice Antonin Scalia's passing today warrants note. Justice Scalia was a leading jurist who leaves a lasting legacy. We offer our condolences to the Scalia family.
Conference organizers Mathew McCubbins, Stuart Benjamin, and Guy-Uriel Charles announced that the 11th Annual Conference on Empirical Legal Studies, hosted by Duke University and the Society for Empirical Legal Studies (SELS), is set for November 18-19, 2016, at Duke Law School in Durham, NC. Due to growing and persistent demand, a one-day, hands-on empirical training workshop will once again be offered and is scheduled for Thursday, November 17, 2016. Additional information is forthcoming.
A few months ago a post requested readers' assistance with identifying examples of what Jim Greiner (Harvard) and Andrea Matthews, a recent HLS grad, have labelled "randomized control trials" in the U.S. legal profession. (The full post is here.) Jim reports that their effort has produced a draft paper, now circulating on SSRN, entitled: Randomized Control Trials in the United States Legal Profession. The paper's abstract follows.
"We assemble studies within a set that we label "randomized control trials ('RCTs') in the United States legal profession," projects that essentially consist of field experiments conducted for the purpose of obtaining knowledge in which randomization replaces a decision that would otherwise have been made by a member of the United States legal profession. We use our assembly of approximately fifty studies to begin addressing the question of why the United States legal profession, in contrast to the United States medical profession, has resisted the use of the RCT as a knowledge-generating device."
While admittedly a slight departure (despite its timeliness) from traditional ELS Blog protocol, this post seeks to illustrate the "correlation v. causation" tension in a novel manner by drawing from, of all things, a recently leaked Super Bowl 50 advertisement. The ad opens with an observation: "Data suggests 9 months after a Super Bowl victory, winning cities see a rise in births." The rest of the wonderfully clever (and long) ad (here) then sets out, quite intentionally, to conflate correlation and causation. Regardless of one's interest in football or Super Bowls, it is worth a look.
Slowly, but surely, the number of empirical research institutions nested in law schools continues to grow. One early example (established in 1998), UCLA Law School's Empirical Research Group, now seeks applications for a new Director. A brief description of the Group, and the Director position, follows.
"Director, Empirical Research Group (Req. 23141)
The Empirical Research Group was created in 1998 with the purpose of enhancing the ability of UCLA law faculty to undertake and evaluate empirical research. ERG has become a model imitated to some degree by half-a-dozen other first-tier law schools around the United States.
The Director will continue and advance this empirical work at the law school. A strong candidate will have a PhD or equivalent in a field that emphasizes strong empirical research skills (e.g., economics, political science, policy, statistics, finance, or sociology). Candidates must be facile with current statistics, data analysis, and survey methods (e.g., regression, re-sampling, covariate balancing, etc.) and have experience using relevant statistical software (e.g., Stata, SAS, SPSS, and/or R). Computer programming ability sufficient to work with large data sets and to create stimulus materials for computer-based social science experiments is preferred. The University of California is an Equal Opportunity/Affirmative Action Employer advancing inclusive excellence."
Interested applicants should click here for important application information (search on job requisition number: 23141).
In a recent, wide-ranging and lengthy interview (which includes an embedded 36-minute video), Richard Nisbett (Mich--Psychology) explains his deep skepticism of multiple regression. Nisbett, a leading psychologist, makes his position crystal clear. For example: "A huge range of science projects are done with multiple regression analysis. The results are often somewhere between meaningless and quite damaging. ..." Also: "I hope that in the future, if I’m successful in communicating with people about this, that there’ll be a kind of upfront warning in New York Times articles: These data are based on multiple regression analysis. This would be a sign that you probably shouldn’t read the article because you’re quite likely to get non-information or misinformation."
The full interview suggests that much of what bothers Nisbett, on a technical level anyway, is omitted variable bias. That is, to be sure, an important problem, but it does not by itself discredit regression analysis altogether. Even if it misses the mark in (important) places and risks over-claiming, Nisbett's critique (or, in his words, "crusade") nonetheless warrants attention.
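To make omitted variable bias concrete, here is a minimal simulation sketch (the toy model and all numbers are my own illustration, not drawn from the interview): when a variable z that affects the outcome is both omitted from the regression and correlated with the included regressor x, the simple-regression slope on x absorbs z's effect.

```python
# Illustrative omitted variable bias simulation (made-up model, not from Nisbett).
# True model: y = 2*x + 3*z + noise, where z is correlated with x.
# Regressing y on x alone attributes z's effect to x.
import random

random.seed(1)
n = 10_000
x = [random.gauss(0, 1) for _ in range(n)]
z = [xi + random.gauss(0, 1) for xi in x]                # z correlated with x
y = [2*xi + 3*zi + random.gauss(0, 1) for xi, zi in zip(x, z)]

def ols_slope(xs, ys):
    """Simple-regression slope: cov(x, y) / var(x)."""
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    var = sum((a - mx) ** 2 for a in xs)
    return cov / var

naive = ols_slope(x, y)   # omitting z biases the slope toward 2 + 3*1 = 5
print(round(naive, 2))
```

The naive slope lands near 5 rather than the true direct effect of 2, because the omitted z (whose slope on x is about 1) does a third of the work. Controlling for z, or removing the x-z correlation, recovers the true coefficient.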
A recent PNAS article includes a figure illustrating “a marked increase in the all-cause mortality of middle-aged white non-Hispanic men and women in the United States between 1999 and 2013.” The data in the figure, however, were "not age-adjusted within the 10-y 45-54 age group.” Annual mortality rates were calculated by dividing the total number of deaths for the age group by the population of the age group. Also notable is that the NYT ran a story about this paper and prominently featured the (now misleading) figure (see below).
Enter Andrew Gelman (Columbia--Statistics) and Jonathan Auerbach (Columbia--Poli Sci), who correctly "suspected an aggregation bias and examined whether much of the increase in aggregate mortality rates for this age group could be due to the changing composition of the 45–54 year old age group over the 1990 to 2013 time period. If this were the case, the change in the group mortality rate over time may not reflect a change in age-specific mortality rates. Adjusting for age confirmed this suspicion. Contrary to Case and Deaton’s figure, we find there is no longer a steady increase in mortality rates for this age group. Instead there is an increasing trend from 1999–2005 and a constant trend thereafter. Moreover, stratifying age-adjusted mortality rates by sex shows a marked increase only for women and not men, contrary to the article’s headline" (emphasis added).
The important teaching take-away, as Gelman notes, is that "when performing reverse causal inference, remember that people move, and, as we’ve discussed before, the cohorts are changing. 45-54-year-olds in 1999 aren’t the same people as 45-54-year-olds in 2013. We adjust for changing age distributions (ya gotta do that) but we’re still talking about different cohorts."
Dave Schwartz (Northwestern) asked me to alert readers to the following, and I am delighted to do so. Please note the Feb. 12 proposal deadline. Proposals (and any questions) should be directed to: CardozoIPIL@yu.edu
"We are pleased to announce the third annual Roundtable on Empirical Methods in Intellectual Property. Northwestern University Pritzker School of Law, Cardozo Law School, and the United States Patent & Trademark Office are negotiating an agreement to co-host the event. The roundtable will take place in Washington, DC at the USPTO on April 29-30, 2016.
The roundtable is intended to give scholars engaging in empirical and experimental studies of IP a chance to receive feedback on their work at an early stage in their research. Accordingly, the roundtable will be limited to a small cohort of scholars discussing projects that are still in their developmental stages. Projects that will have substantially begun data collection by the time of the roundtable are inappropriate. Pilot data collection is, however, appropriate.
The roundtable will be organized around a modest number of projects. Each project presenter will be expected to circulate a description of the project of no more than 10 pages by April 8. Each project will be assigned to an expert commenter and will be allotted 45 minutes of discussion by the attendees.
We welcome applications from scholars in the social sciences and law. Travel and lodging support for presenters will be provided.
Applications are due by February 12. We will notify applicants by March 1."
In the first of a promised 3-part series, Mitch Abdon summarizes regression discontinuity design (RDD) in a helpful (and brief) post (here). As the post makes clear:
"RDD is a quasi-experimental method for evaluating program impact when observation units (example, households) can be sorted using some continuous metric (example, income) and program assignment is based on a pre-determined threshold or cutoff point of the sorting metric. Observations just below the cutoff are deemed similar to, and therefore, compare well to those just above the cutoff. In the absence of the program, one would expect that any shifts in outcome variables would happen smoothly alongside minor changes in the running variable. Thus, a large jump in the outcome variable, observed precisely at the threshold value of the running variable, after program intervention can be attributed to the program itself."
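The description above can be sketched as a small simulation (the running variable, cutoff, and effect size are my own hypotheticals, not from the post): units below an income cutoff receive the program, the outcome trends smoothly in income, and the program adds a jump at the cutoff that a local comparison of means recovers.

```python
# Minimal RDD sketch matching the quoted description (simulated, made-up data):
# assignment rule: program if income < 50; true program effect: +5 at the cutoff.
import random

random.seed(0)
cutoff, effect = 50.0, 5.0
data = []
for _ in range(20_000):
    income = random.uniform(0, 100)              # running (sorting) variable
    treated = income < cutoff                    # pre-determined threshold rule
    outcome = 0.1 * income + (effect if treated else 0) + random.gauss(0, 1)
    data.append((income, outcome))

h = 2.0   # bandwidth: units just below vs. just above the cutoff
below = [y for inc, y in data if cutoff - h <= inc < cutoff]
above = [y for inc, y in data if cutoff <= inc < cutoff + h]
jump = sum(below) / len(below) - sum(above) / len(above)
print(round(jump, 1))   # local difference in means ~ program effect
```

Because the outcome varies smoothly in income away from the threshold, the discontinuity at exactly 50 is attributable to the program, which is the identifying logic the post describes. (Applied work typically fits local linear regressions on each side rather than raw means, which removes the small trend bias this crude difference carries.)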
Whether parallel lawsuits relying on state law but arising out of federal securities class actions are merely duplicative or value-adding (for shareholders) continues to attract both theoretical and empirical attention. A recent contribution to the empirical literature is a paper by Stephen Choi (NYU) et al., Piling On? An Empirical Study of Parallel Derivative Suits. The paper presents results from a sample of public companies named as defendants in securities actions between 2005 and 2008. A summary of the paper's findings follows.
"We find that while some parallel suits are filed first, potentially providing value in identifying wrongdoing, most are filed after a securities class action (termed “piggyback” parallel suits). Most piggyback parallel suits target cases involving obvious indicia of wrongdoing, indicating that parallel suits do not add value by devoting resources to cases where a securities class action may not already provide sufficient deterrent value. Although we do find that piggyback suits correlate with the targeting of individual officers not already named as defendants in the securities class action, this expansion of defendants is not positively correlated with an increase in settlement incidence, monetary recovery amounts, or attorney fees. One possible way a parallel suit may add value is by imposing different forms of sanctions for wrongdoing. We find that piggyback parallel suits often result in non-monetary, corporate governance settlements, particularly for frequent filing plaintiffs’ attorneys filing a piggyback parallel suit. Corporate governance settlements correlate with significantly lower attorney hours and attorney fees for the plaintiffs’ attorneys. We conclude that such settlements do not represent the product of extensive work by attorneys but are instead used to justify fees in cases where there is no monetary recovery."
As the Supreme Court struggles, once again, with diversity and affirmative action issues in the undergraduate admissions context in Fisher II (click here for the ScotusBlog summary), James Phillips (JD/PhD cand.--UC Berkeley) boldly and carefully assesses questions concerning ideological diversity in legal academia. Exploiting data on faculties at the 16 highest-ranked law schools, in Why are There so Few Conservatives and Libertarians in Legal Academia? An Empirical Exploration of Three Hypotheses, Phillips engages with three standard explanations for the relative dearth of conservative law professors. The paper's abstract follows.
"There are few conservatives and libertarians in legal academia. Why? Three explanations are usually provided: the Brainpower, Interest, and Greed Hypotheses. Alternatively, it could be because of Discrimination. This paper explores these possibilities by looking at citation and publication rates by law professors at the 16 highest-ranked law schools in the country. Using regression analysis, propensity score matching, propensity score reweighting, nearest neighbor matching, and coarsened exact matching, this paper finds that after taking into account traditional correlates of scholarly ability, conservative and libertarian law professors are cited more and publish more than their peers. The paper also finds that they tend to have more of the traditional qualifications required of law professors than their peers, with a few exceptions. This paper indicates that, at least in the schools sampled, conservative and libertarian law professors are not few in number because of a lack of scholarly ability or professional qualifications. Further, the patterns do not prove, but are consistent with, a story of discrimination. The downsides to having so few conservatives and libertarians in the legal academy are also briefly explored."
A new user-written Stata command, classtabi, generates helpful summary statistics for a standard 2x2 chi-square table when only summarized data are available. Values from a 2x2 table can be entered into classtabi to produce the additional classification statistics. To access classtabi, launch Stata and execute "findit classtabi" to locate and download the program.
classtabi stores the following scalars in r():

- r(P_corr): percent correctly classified
- r(P_p1): sensitivity
- r(P_n0): specificity
- r(P_p0): false-positive rate
- r(P_n1): false-negative rate
- r(P_1p): positive predictive value
- r(P_0n): negative predictive value
- r(roc): ROC curve
- r(ess): effect strength for sensitivity
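For readers outside Stata, the core quantities classtabi reports can be computed directly from the four cell counts of a 2x2 classification table. The sketch below uses hypothetical counts and the common epidemiological definitions; consult classtabi's help file for its exact formulas.

```python
# Classification statistics from a 2x2 table (hypothetical cell counts;
# definitions follow standard epidemiological conventions, not necessarily
# classtabi's exact formulas).
tp, fp = 80, 20      # predicted positive: condition present / absent
fn, tn = 10, 90      # predicted negative: condition present / absent

n = tp + fp + fn + tn
stats = {
    "percent_correct": 100 * (tp + tn) / n,     # cf. r(P_corr)
    "sensitivity":     100 * tp / (tp + fn),    # cf. r(P_p1)
    "specificity":     100 * tn / (tn + fp),    # cf. r(P_n0)
    "false_pos_rate":  100 * fp / (fp + tn),    # cf. r(P_p0)
    "false_neg_rate":  100 * fn / (fn + tp),    # cf. r(P_n1)
    "pos_pred_value":  100 * tp / (tp + fp),    # cf. r(P_1p)
    "neg_pred_value":  100 * tn / (tn + fn),    # cf. r(P_0n)
}
for name, value in stats.items():
    print(f"{name}: {value:.1f}%")
```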
As most property folks know, NPO activity on the conservation easement front during the past few decades has revolutionized land preservation norms. Recently, judges and policymakers have considered whether and, if so, how to modify conservation easements. A paucity of helpful data has precluded meaningful analysis and, as a consequence, public debate has typically generated more "heat than light." An Empirical Study of Modification and Termination of Conservation Easements: What the Data Suggest About Appropriate Legal Rules, by Gerry Korngold (NY Law) et al., brings much-needed and helpful data to this question. An excerpted abstract follows.
"This article provides and analyzes a previously uncollected dataset that offers guidance on the appropriate rules of law for conservation easement modification. It examines policy goals in light of the data to suggest various modification rules that would be more effective than current practice. The dataset represents a significant sample of easement modifications that have been made during a six year period (2008-2013) and indicates several findings: first, modifications have actually been taking place, despite claims that conservation easements are “perpetual,” apparently indicating that NPOs need flexibility in at least some areas; most of the changes have been “minor” and have been either conservation neutral or conservation positive, though one would expect pressure for more significant alterations over time due to shifts in the environment and human needs; there is a range of types and degree of modifications to this point, suggesting that there should be a spectrum of procedural and substantive requirements for the different varieties of modifications; and, a mandate for a stand-alone, state registry of conservation easements and modifications would allow for improved policymaking.
The article suggests that a doctrine that requires different procedures and substantive rules for various categories of modifications — a sliding scale — may yield the best, policy-based results. The work also identifies and analyzes existing doctrines — federal tax law, specific state statutes, charitable trust doctrine, standing rules, and director liability — that would need to be altered or clarified to adopt effective modification rules."
The University of Arizona College of Law is hosting its third-annual Quantlaw workshop, scheduled for February 12-13, 2016. The theme for this year's workshop is The Empirical Constitution, and the workshop's lead organizer, Chris Robertson (Arizona), invites paper proposals. The organizers describe this year's theme to include: "any empirical/experimental study of issues relevant to constitutional law or procedure, as well as non-empirical studies of constitutional doctrine relevant to data and public access thereto. The program will include works-in-progress sessions for extensive feedback, as well as methods sessions focusing on datasets and analytical tools."
Also noteworthy is that travel stipends are available for "junior scholars" accepted into the workshop ("junior scholars" are defined to include: "those not yet in tenure-track positions, or within five years of appointment.")
More information for those interested is found here. The deadline for submission is January 6, 2016. Specific questions and proposals should be addressed to Prof. Chris Robertson at: firstname.lastname@example.org
Given the popularity of a handful of statistical platforms (e.g., Stata, SPSS, R, SAS), it is sometimes helpful to move code and data written for one platform into another. A recent post on the Statalist forum makes just such an inquiry. The first response (from Prof. Richard Williams, Notre Dame, Sociology) points to the GNU PSPP software, which has the virtue of being free. For those willing to spend $179 (or only $49 for a two-year student license), however, I've used StatTransfer for years. And there are certainly other products on the market as well. Regardless of the solution one selects, existing products make migrating among statistical platforms largely seamless.