Dan Ho (Stanford) has put together an outstanding program for this year's conference. Events begin later this morning with Ted Eisenberg's (Cornell) 10-hour empirical training workshop. Because the workshop proved quite popular (registration closed quickly due to participant demand), similar workshops will be planned for future CELS meetings. In addition, Ted is presenting a similar workshop at AALS in January.
A nice pair of posts from Andrew Gelman's (Columbia--Poli Sci, Statistics) blog (here) illustrates and briefly discusses the virtues of visually conveying uncertainty in regression results. The initial post illustrates two competing visual approaches, and a follow-up post includes (or links to) helpful code.
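For readers who want to experiment, here is a minimal Stata sketch of the general idea. It is not the code from Gelman's posts; the shipped auto dataset and its variables are purely illustrative. Rather than reporting a bare table of coefficients, it plots predictions with a confidence band.

```stata
* A minimal sketch, not the code from Gelman's posts: display
* regression uncertainty graphically using Stata's shipped auto data.
sysuse auto, clear
regress mpg weight foreign

* Predicted mpg across the range of weight, with a shaded 95%
* confidence band in place of a bare coefficient table
margins, at(weight=(2000(250)4500))
marginsplot, recast(line) recastci(rarea)
```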
The Society for Empirical Legal Studies Executive Director, Dawn Chutkow, passed along the following announcement, which might interest those planning to attend CELS at Stanford in November.
Ted Eisenberg (Cornell) will conduct an Empirical Training Workshop on November 8-9, in connection with CELS 2012 at Stanford Law School. Enrollment is limited, and the workshop begins the day before the conference. A brief description and a link for more information appear below.

The Empirical Training Workshop is intended for professors and students who seek an introduction to the statistical and programming skills needed to conduct quantitative empirical legal research. Professor Eisenberg will guide participants through an intensive 10-hour course on statistical analysis in the legal context. Pre-registration and a small fee are required.
In 2005, the National Research Council (NRC) evaluated the “More Guns, Less Crime” hypothesis using county-level crime data for the period 1977-2000. Seventeen of the 18 NRC panel members essentially concluded that the existing research was inconclusive as to whether "right-to-carry" (RTC) laws increased or decreased crime.
"We evaluate the NRC evidence, and improve and expand on the report’s county data analysis by analyzing an additional six years of county data as well as state panel data for the period 1977-2006. We also present evidence using both a more plausible version of the Lott and Mustard specification, as well as our own preferred specification (which, unlike the Lott and Mustard model used in the NRC report, does control for rates of incarceration and police). While we have considerable sympathy with the NRC’s majority view about the difficulty of drawing conclusions from simple panel data models, we disagree with the NRC report’s judgment that cluster adjustments to correct for serial correlation are not needed. Our randomization tests show that without such adjustments the Type 1 error soars to 44-75 percent. In addition, the conclusion of the dissenting panel member that RTC laws reduce murder has no statistical support.
"Our paper highlights some important questions to consider when using panel data methods to resolve questions of law and policy effectiveness. Although we agree with the NRC’s cautious conclusion regarding the effects of RTC laws, we buttress this conclusion by showing how sensitive the estimated impact of RTC laws is to different data periods, the use of state versus county data, particular specifications, and the decision to control for state trends. Overall, the most consistent, albeit not uniform, finding to emerge from both the state and county panel data models conducted over the entire 1977-2006 period with and without state trends and using three different specifications is that aggravated assault rises when RTC laws are adopted."
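For readers unfamiliar with the cluster adjustment at issue, here is a hedged Stata sketch using simulated panel data. The variable names (rtc_law, ln_violent) and numbers are hypothetical, not the authors'; the point is only the vce(cluster) option, which allows arbitrary serial correlation within each state's observations over time.

```stata
* A hedged sketch of the cluster adjustment discussed above, built
* on simulated placeholder data; variable names are hypothetical.
clear
set seed 1977
set obs 50                              // 50 states
generate state = _n
expand 30                               // 30 years per state
bysort state: generate year = 1976 + _n
generate rtc_law = runiform() < 0.4     // placeholder law indicator
generate ln_violent = rnormal()         // placeholder outcome

* Declare the panel, then fit state fixed effects with year dummies.
* vce(cluster state) permits arbitrary serial correlation within each
* state -- the adjustment the authors argue is needed to keep Type 1
* error rates honest.
xtset state year
xtreg ln_violent rtc_law i.year, fe vce(cluster state)
```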
Interesting post (and related discussion) over at PrawfsBlawg involving Martin Pritikin's (Whittier) initial "jump" onto the ELS "bandwagon" and into ELS scholarship. Notable to me is Martin's willingness to undertake the "heroic" task of data creation (rather than secondary analysis of existing data sets). It is also heartening to hear reports from participants about the efficacy of the growing number of empirical legal studies workshops. Finally, the helpfulness of the comments shows that the ELS community stands ready to assist those willing to engage.
The folks over at The Stata Blog recently polled readers (obviously, a non-random selection of Stata users) on their favorite Stata command. While some may find the results (here) interesting in their own right, others might discover unfamiliar commands that could prove useful.
While not legal per se, results from a test of professional violinists' ability to distinguish music played on a Stradivarius from music played on other, newer expensive violins, originally published in The Strad (Feb. 2007), were featured in a recent NPR segment (here). (The NPR segment includes two audio clips for anyone interested in testing their own musical acumen.)
Notably (and certainly within the ELS Blog sweet-spot), the researchers employed a double-blind test. "Researchers gathered professional violinists in a hotel room in Indianapolis. They had six violins — two Strads, a Guarneri and three modern instruments. Everybody wore dark goggles so they couldn't see which violin was which." Ironically, "the only statistically obvious trend in the choices was that one of the Stradivarius violins was the least favorite, and one of the modern instruments was slightly favored."
On the Stata listserv I recently stumbled across a resource (here) for anyone looking for a helpful "how-to" explanation of Stata's margins command and the resulting adjusted-predictions and marginal-effects output. Credit goes to Richard Williams, a sociologist at Notre Dame, for sharing his slides.
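As a quick taste of what the command does, here is a minimal sketch of my own, using Stata's shipped auto data rather than Williams's examples:

```stata
* A minimal sketch of margins output, using the shipped auto data
* (these are not Williams's examples).
sysuse auto, clear
logit foreign weight mpg

* Adjusted predictions: Pr(foreign) at specified weights
margins, at(weight=(2000 3000 4000))

* Average marginal effects of each covariate on Pr(foreign)
margins, dydx(*)
```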
Over the years I have repeatedly emphasized to my students in empirical methods classes the need to "get underneath" the data and results, particularly for secondary analyses. By that I mean researchers invariably benefit from on-the-ground insights into, and outside perspectives on, what their data (and research design) actually capture and what their results suggest. Kyle Graham's helpful post over at Concurring Opinions illustrates how this general point can work when evaluating a possible empirical project.
While wanting to avoid igniting something of a statistical holy war, I did want to point to an argument from the good folks at Stata for preferring Poisson regression to log-linear regression (and to negative binomial regression) in some cases. The post (here) quite helpfully includes suggested code as well as illustrative Stata output.
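The gist, as I read it, is that Poisson with robust standard errors consistently estimates an exponential conditional mean even when the outcome is not Poisson-distributed. Here is a hedged sketch of the comparison using simulated data; it is not the post's code, and the variable names are hypothetical.

```stata
* A hedged sketch with simulated data (not the post's code).
clear
set seed 2012
set obs 1000
generate x1 = rnormal()
generate x2 = rnormal()
generate y = rpoisson(exp(0.5 + 0.8*x1 - 0.4*x2))

* Log-linear OLS: must drop zero outcomes and models E(ln y | x),
* not ln E(y | x)
generate ln_y = ln(y) if y > 0
regress ln_y x1 x2

* Poisson with robust standard errors models E(y | x) = exp(xb)
* directly, handles zeros, and remains consistent even when y is
* not truly Poisson-distributed
poisson y x1 x2, vce(robust)
```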
We previously posted Dave Hoffman's (Temple) thoughts on potential problems flowing from an increasingly technical turn in much of the ELS literature. Corey Yung (John Marshall) pushes back a bit here.
While Yung "completely agree[s] with his [Hoffman's] conclusion that empirical legal studies should seek to be more accessible (which I always note at the end of my introduction of my empirical work)," he "disagree[s] with his contention that empirical legal studies are facing widespread incomprehensibility due to growing complexity." In fact, according to Yung, in some areas (e.g., judicial decisionmaking) current empirical work has "barely scratched the surface." That is, such work needs to become more technical to get at the underlying dynamics with greater accuracy. Anyway, although both posts are nested in the admittedly trendy "Moneyball" and sabermetrics contexts, they engage with an interesting set of issues.
In his post over at Marginal Revolution, Alex Tabarrok reminds us all why selection effects are important to consider in our work.
During WWII, statistician Abraham Wald was asked to help the British decide where to add armor to their bombers. After analyzing the records, he recommended adding more armor to the places where there was no damage.
The RAF was initially confused. Can you explain (without peeking below)?
Because Wald had data only on the planes that returned to Britain, he concluded that the bullet holes that he saw were all in places where a plane could be hit and still survive. Wald then surmised that the planes that were shot down were probably hit in different places than those that returned. Thus, Wald recommended adding armor to the places where the surviving planes were lucky enough not to have been hit.
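A toy simulation makes the selection effect concrete. The sketch below is my own, not from Tabarrok's post, and the numbers are purely illustrative: every area is hit equally often, but planes hit in the vital area rarely return, so the surviving sample shows little damage exactly where armor matters most.

```stata
* Toy simulation of survivorship bias (illustrative numbers only).
clear
set seed 12345
set obs 10000
generate hit_area = ceil(4 * runiform())    // area 1 is the vital one
generate returned = runiform() < cond(hit_area == 1, 0.2, 0.9)

* Among planes that made it back, area 1 looks deceptively unscathed
tabulate hit_area if returned
```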
Dave Hoffman (Temple) has an interesting post at Concurring Opinions that considers whether ELS and sabermetrics are destined to suffer similar fates. Dave worries that "sabermetricians are devoting oodles of time to ever-more-complex formulae which add only a small amount of predictive power, but which make the discipline more remote from lay understanding, and thus less practically useful. Basically: the jargonification of a field." Although I hope Dave's worry is misplaced, I understand his point. That said, perhaps awareness of a challenge ex ante can help reduce the probability of its materializing.
Although I tend to steer clear of such debates, over at Concurring Opinions David Fagundes (Southwestern) discusses what the "empirical" label should encompass. Semantics aside, partly driving David's interesting post is his concern that:
"'Empirical' is not just a neutral term that happens to describe a particular methodology. It may be understood to connote, rightly or not, a certain degree of rigorousness and exactitude that can set it apart from, and perhaps even above, other methodologies."
While I understand David's point, I'm not sure I buy the premise underneath it. My own view (admittedly, only a datum) is that good scholarship is good scholarship, regardless of the methodological approach. And I am reasonably confident that my take on this is shared by others.