George Mason University School of Law's Law and Economics Center is hosting a Workshop on Empirical and Experimental Methods for Law Professors on May 23-26, 2011, in Arlington, VA. Interestingly, this is the second entrant into a growing market for programs specifically aimed at law professors and empirical legal scholarship. (We've previously noted similar programs jointly sponsored by Wash U/Northwestern.) A more detailed description of the GMU Workshop, as well as contact information, follows.
"The Workshop on Empirical and Experimental Methods for Law Professors is designed to teach law professors the conceptual and practical skills required to (1) understand and evaluate others’ empirical studies, and (2) design and implement their own empirical studies. Participants are not expected to have a background in statistics or empirical methods prior to enrollment. Instructors have been selected in part to demonstrate the development of empirical studies in a wide range of legal and institutional settings, including: antitrust, business law, bankruptcy, class actions, contracts, criminal law and sentencing, federalism, finance, intellectual property, and securities regulation. Class sessions will provide participants opportunities to learn through faculty lectures drawing upon data and examples from cutting-edge empirical legal studies, and by participating in experiments. There will be numerous opportunities for participants to discuss their own works-in-progress or project ideas with the instructors.
The Workshop will begin on Monday, May 23, at 8:30 a.m. and conclude on Thursday, May 26, at noon. Classes on May 23, 24, and 25 will run from 8:30 a.m. to 5:00 p.m. and include lectures, group sessions, and opportunities for participants to present their own empirical projects or “works in progress.”
Tuition for the Workshop on Empirical and Experimental Methods is $850 for the first professor from a law school and $500 for additional registrants from the same school.
Those interested in the Workshop should contact Jeff Smith directly at:
As the use of instrumental variables has become commonplace in many literatures, especially economics and, increasingly, political science, many readers would benefit from a better feel for the work these methods can perform (when used correctly and prudently). In this spirit, a recent paper in the American Journal of Political Science, Instrumental Variables Estimation in Political Science: A Readers' Guide, by Allison Sovey (Yale) and Donald Green (Yale), provides one helpful (and generally accessible) discussion. More specifically, the paper discusses "two noteworthy applications of instrumental variables regression, calling attention to the statistical assumptions that each invokes. The concluding section proposes reporting standards and provides a checklist for readers to consider as they evaluate applications of this method."
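For readers who want to see the basic mechanics behind the paper's subject, here is a minimal two-stage least squares (2SLS) sketch on simulated data. Everything here is hypothetical (the numbers and variable names are mine, not the authors'); the point is only to illustrate how a valid instrument recovers a causal effect that naive OLS misstates when the regressor is endogenous.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Simulated setting: x is endogenous (correlated with the error u),
# z is a valid instrument (it shifts x but is unrelated to u).
z = rng.normal(size=n)
u = rng.normal(size=n)
x = 0.8 * z + 0.5 * u + rng.normal(size=n)  # endogeneity enters via u
y = 2.0 * x + u                             # true effect of x on y is 2.0

# Naive OLS: biased because cov(x, u) != 0
X = np.column_stack([np.ones(n), x])
ols = np.linalg.lstsq(X, y, rcond=None)[0]

# Two-stage least squares:
# Stage 1 -- regress x on z, keep the fitted values x_hat
Z = np.column_stack([np.ones(n), z])
x_hat = Z @ np.linalg.lstsq(Z, x, rcond=None)[0]
# Stage 2 -- regress y on x_hat
X_hat = np.column_stack([np.ones(n), x_hat])
iv = np.linalg.lstsq(X_hat, y, rcond=None)[0]

print(f"OLS slope:  {ols[1]:.2f}")   # biased away from 2.0
print(f"2SLS slope: {iv[1]:.2f}")    # close to the true 2.0
```

The checklist logic the paper urges applies even to this toy: the 2SLS estimate is only as good as the assumptions that z is relevant (it strongly predicts x) and excludable (it affects y only through x).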
A student recently brought this paper to my attention. The abstract follows.
"The quantitative and qualitative research traditions can be thought of as distinct cultures marked by different values, beliefs, and norms. In this essay, we adopt this metaphor toward the end of contrasting these research traditions across 10 areas: (1) approaches to explanation, (2) conceptions of causation, (3) multivariate explanations, (4) equifinality, (5) scope and causal generalization, (6) case selection, (7) weighting observations, (8) substantively important cases, (9) lack of fit, and (10) concepts and measurement. We suggest that an appreciation of the alternative assumptions and goals of the traditions can help scholars avoid misunderstandings and contribute to more productive 'cross-cultural' communication in political science."
A while back I linked to Brian Leiter's curious thoughts about ELS and the production of "too much" (and "too much" mediocre) empirical legal scholarship. Somewhat predictably, the post triggered commentary (as Brian's post was likely designed to do). To keep ELS Blog readers current, I want to note even more recent commentary from Dave Hoffman (here) and Josh Wright (here).
The good folks over at the Social Science Statistics Blog posted a recent presentation from the Applied Statistics workshop at Harvard. In it, Prof. Cassandra Wolos Pattanayak discusses propensity score matching at the CDC (approx. time: 75 minutes).
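For readers new to the technique discussed in the talk, here is a small propensity score matching sketch on simulated data. It is a toy illustration under my own assumptions (one confounder, logistic treatment assignment, nearest-neighbor matching with replacement), not a reconstruction of anything in the presentation.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 4000

# Simulated observational data: one confounder x drives both
# treatment assignment and the outcome.
x = rng.normal(size=n)
p_treat = 1 / (1 + np.exp(-x))          # treatment more likely for high x
treated = rng.random(n) < p_treat
y = 1.0 * treated + 2.0 * x + rng.normal(size=n)  # true effect = 1.0

# The naive difference in means is confounded by x
naive = y[treated].mean() - y[~treated].mean()

# Step 1: estimate propensity scores P(treated | x)
ps = (LogisticRegression()
      .fit(x.reshape(-1, 1), treated)
      .predict_proba(x.reshape(-1, 1))[:, 1])

# Step 2: for each treated unit, find the control with the closest score
t_idx = np.where(treated)[0]
c_idx = np.where(~treated)[0]
matches = c_idx[np.abs(ps[t_idx][:, None] - ps[c_idx][None, :]).argmin(axis=1)]

# Step 3: the average within-pair outcome difference estimates the
# effect of treatment on the treated
att = (y[t_idx] - y[matches]).mean()

print(f"Naive difference: {naive:.2f}")  # confounded, well above 1.0
print(f"Matched estimate: {att:.2f}")    # much closer to the true 1.0
```

In applied work one would also check covariate balance after matching and consider calipers or overlap restrictions; the sketch skips those diagnostics for brevity.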
Over at PrawfsBlawg Dan Markel (Fl. St.), with tongue planted firmly in cheek (presumably, hopefully), identifies an interesting research opportunity (here) for empiricists in general and T&E scholars in particular. Gallows humor aside, Dan's post backs into an important point--the relative paucity of well-structured natural experimental research designs for legal scholars.
The National Science Foundation (SES-0921008) is funding training for full, associate, and assistant professors to attend a 5-day workshop offered by the Institute for Behavioral Genetics. This is a hands-on methods training course, not a lecture or seminar. Participants will work in pairs on provided laptops, take part in data analyses, develop structural models, and run various statistical packages, among many other exercises. A detailed description of the course can be found at
(Note that this page contains the schedule for the last introductory workshop in 2008 -- the final schedule for the 2010 workshop will be posted in the coming weeks.)
Attendees will leave the workshop ready to take part in behavior genetic analyses, and will have access to scripts, statistical packages, and training materials to take home (on CD) for future use. Space is limited; 17 political scientists will be funded to attend (funding includes travel, lodging, and tuition).
When and Where:
March 2010; TBD (1st or 2nd week), University of Colorado, Boulder
There are no prerequisites to apply, but this is a methods training course. It is a hands-on workshop intended to build analytical skills, with a particular focus on family and twin data (2010). The advanced workshop (2011) will focus on molecular data. To get the most out of the course, applicants are expected to be comfortable with structural modeling or Bayesian techniques, and to be trained in basic regression and statistical analysis.
A nice--albeit somewhat technical--paper (here) underscores an all-too-common challenge in empirical legal studies: the perils of serial correlation and the threat it poses to independence assumptions in statistical models. An excerpted abstract follows.
"In a recent securities law case, the statistical methods used by the regulator in analysing data on daily commissions and hypothetical profits from initial public offerings (IPOs) assumed that the data on consecutive days were independent. Consecutive observations in most business and economic data, however, are positively correlated. While statistical articles demonstrate that this type of dependence affects the distribution of virtually all statistics, including non-parametric and goodness-of-fit tests, the magnitude of the effect may not be fully appreciated. For example, one comparison of the commissions a broker received on days with an IPO to days when no IPO was issued yielded a statistically significant p-value of 0.02 under the independence assumption. Accounting for serial correlation, the test actually had a non-significant p-value close to 0.09. Other examples of the effect of dependence include jury discrimination cases in locales where grand jurors can serve two consecutive terms as well as cases concerned with environmental pollution where measurements are spatially and temporally correlated. This paper describes the noticeable effect violations of the independence assumption can have on statistical inferences."
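The abstract's warning is easy to demonstrate with a small simulation (entirely hypothetical numbers, not the paper's data): when observations follow an AR(1) process with positive autocorrelation, a t-test that assumes independence rejects a true null hypothesis far more often than its nominal 5% level.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n, reps, rho = 100, 2000, 0.5

# Under the null (true mean of zero), simulate AR(1) series with
# autocorrelation rho and apply a one-sample t-test that assumes
# independent observations.
rejections = 0
for _ in range(reps):
    e = rng.normal(size=n)
    x = np.empty(n)
    x[0] = e[0]
    for t in range(1, n):
        x[t] = rho * x[t - 1] + e[t]
    _, p = stats.ttest_1samp(x, 0.0)
    rejections += p < 0.05

# With rho = 0.5 the variance of the sample mean is inflated by
# roughly (1 + rho) / (1 - rho) = 3, so the false-positive rate
# lands far above the nominal 5%.
print(f"False-positive rate at nominal 5%: {rejections / reps:.2f}")
```

This is the same mechanism as the 0.02-versus-0.09 example in the abstract: the data were not wrong, the independence assumption was.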
I previously posted on a series of bankruptcy papers that vigorously contest assessments of the recently reformed bankruptcy code. Over at Concurring Opinions Dave Hoffman (Temple) very helpfully weighs in on one of the more basic points in dispute--how to understand and account for selection effects. Dave's summary of the issue is well worth a read.
My thanks to Bill Henderson for his introduction, especially the last paragraph.
I am not a lawyer but a political scientist employed by a law school. My perspective on ELS is somewhere between a carpetbagger and a transplant. I share with my collaborators an interest in the law, but I am much more fascinated by the various models of human and institutional behavior implied by our research questions. Each one allows me to reach into my toolbox and pull out methods rarely used in political science, such as path analysis, structural equation modeling, network analysis, or Weibull regression, in addition to probit, logit, and OLS. (I am fortunate to have access to the UCLA Statistical Consulting Group when I feel I am on shaky ground.)
The variety of techniques that I and others deploy in ELS has me wondering about the processes by which other disciplines narrowed their toolsets. When I was in graduate school, sociologists used ANOVA, economists used maximum likelihood, and political scientists used logit and OLS. There were some crossovers, but those were the exceptions. The methods in those fields have evolved as economists expanded their portfolio into the other disciplines and the fields became more statistically sophisticated. Hiving persists nevertheless, and I suspect that this is because the metrics are mature and research design is cumulative. Introducing a new technique to a field is not merely a question of teaching the methodology but also of gaining a foothold in the literature. The slow-motion adoption of social network analysis into political science is a good example. It's relatively easy to measure a network of individuals engaged in governing, but the language necessary to describe it as a political party is still evolving.
Which leads me to wonder about the utility of having a set of metrics and tools unique to ELS. An ELS toolbox would be the dialect of empirical legal studies, aiding the transmission of knowledge within our group while setting boundaries around what constitutes our field. Is this likely to happen? Probably not. We are not isolated enough. We don't have graduate students in the traditional sense, and our mandate is to produce new professionals, not clones of ourselves. So we are in an odd position, building a field that is defined by the use of social science research methods, but without a set of methods to call our own and no prospect of creating one. This might be an advantage, as it gives us the liberty to borrow from everywhere, but it leaves open the question: what defines ELS? Is it what we study, how we study it, or who does the studying?