Over at PrawfsBlawg Dan Markel (Fl. St.), with tongue planted firmly in cheek (presumably, hopefully), identifies an interesting research opportunity (here) for empiricists in general and T&E scholars in particular. Gallows humor aside, Dan's post backs into an important point--the relative paucity of well-structured natural experimental research designs for legal scholars.
The National Science Foundation (SES-0921008) is funding training for full, associate, and assistant professors to attend a 5-day workshop offered by the Institute for Behavioral Genetics. This is a hands-on methods training course, not a lecture or seminar. Every participant will be provided a laptop (shared with one other participant) and will take part in data analyses, develop structural models, and run various statistical packages, among many other exercises. A detailed description of the course can be found here. (Note that this page contains the schedule for the last introductory workshop, held in 2008; the final schedule for the 2010 workshop will be posted in the coming weeks.)
Attendees will leave the workshop ready to take part in behavior genetic analyses, and they will have access to scripts, statistical packages, and training materials to take home (on CD) for future use. Space is limited; 17 political scientists will be funded to attend (funding includes travel, lodging, and tuition).
When and Where:
March 2010 (1st or 2nd week, exact dates TBD); University of Colorado, Boulder
There are no prerequisites to apply, but this is a methods training course. It is a hands-on workshop intended to build analytical skills, with a particular focus on family and twin data (2010); the advanced workshop (2011) will focus on molecular data. To get the most out of the course, applicants are expected to be comfortable with structural modeling or Bayesian techniques and to have training in basic regression and statistical analysis.
A nice--albeit somewhat technical--paper (here) underscores an all-too-common challenge in empirical legal studies: the perils of serial correlation and the threat it poses to the independence assumptions built into many statistical models. An excerpted abstract follows.
"In a recent securities law case, the statistical methods used by the regulator in analysing data on daily commissions and hypothetical profits from initial public offerings (IPOs) assumed that the data on consecutive days were independent. Consecutive observations in most business and economic data, however, are positively correlated. While statistical articles demonstrate that this type of dependence affects the distribution of virtually all statistics, including non-parametric and goodness-of-fit tests, the magnitude of the effect may not be fully appreciated. For example, in one comparison of commissions one broker received on days with an IPO to the days when no IPO was issued yielded a statistically significant p-value of 0.02, under the independence assumption. Accounting for serial correlation, the test actually had a non-significant p-value close to 0.09. Other examples of the effect of dependence include jury discrimination cases in locales where grand jurors can serve two consecutive terms as well as cases concerned with environmental pollution where measurements are spatially and temporally correlated. This paper describes the noticeable effect violations of the independence assumption can have on statistical inferences."
I previously posted on a series of bankruptcy papers that vigorously contest assessments of the recently reformed bankruptcy code. Over at Concurring Opinions Dave Hoffman (Temple) very helpfully weighs in on one of the more basic points in dispute--how to understand and account for selection effects. Dave's summary of the issue is well worth a read.
My thanks to Bill Henderson for his introduction, especially the last paragraph.
I am not a lawyer but a political scientist employed by a law school. My perspective on ELS is somewhere between that of a carpetbagger and that of a transplant. I share with my collaborators an interest in the law, but I am much more fascinated by the various models of human and institutional behavior implied by our research questions. Each one allows me to reach into my toolbox and pull out methods rarely used in political science, such as path analysis, structural equation modeling, network analysis, or Weibull regression, in addition to probit, logit, and OLS. (I am fortunate to have access to the UCLA Statistical Consulting Group for when I feel I am on shaky ground.)
The variety of techniques that I and others deploy in ELS has me wondering about the processes by which other disciplines narrowed their toolsets. When I was in graduate school, sociologists used ANOVA, economists used maximum likelihood, and political scientists used logit and OLS. There were some crossovers, but those were the exceptions. The methods in those fields have evolved as economists expanded their portfolios into the other disciplines and the fields became more statistically sophisticated. Hiving persists nevertheless, and I suspect that this is because the metrics are mature and research design is cumulative. Introducing a new technique to a field is not merely a question of teaching the methodology but also of gaining a foothold in the literature. The slow-motion adoption of social network analysis into political science is a good example. It's relatively easy to measure a network of individuals engaged in governing, but the language necessary to describe it as a political party is still evolving.
Which leads me to wonder about the utility of having a set of metrics and tools unique to ELS. An ELS toolbox would be the dialect of empirical legal studies, aiding the transmission of knowledge within our group while setting boundaries around what constitutes our field. Is this likely to happen? Probably not. We are not isolated enough. We don't have graduate students in the traditional sense, and our mandate is to produce new professionals, not clones of ourselves. So we are in an odd position: building a field that is defined by the use of social science research methods, but without a set of methods to call our own and with no prospect of creating one. This might be an advantage, as it gives us the liberty to borrow from everywhere, but it leaves open the question: what defines ELS? Is it what we study, how we study it, or who does the studying?
I have posted on SSRN my article, forthcoming in Hastings Law Journal, Coding Complexity: Bringing Law to the Empirical Analysis of the Supreme Court. This article examines the well-known and widely used U.S. Supreme Court Database (created by Harold Spaeth), most recently mentioned here, and addresses the Database's limitations, particularly for those interested in law and legal doctrine. The key point of the Article is that the Database does not contain complete or accurate information about law and legal doctrine as they appear in Supreme Court opinions. Given Harold Spaeth's own purposes in creating the Database, these limitations may not be surprising -- although they do raise at least some challenges to his attitudinal model. Unfortunately, however, they are frequently misunderstood. Scholars all too frequently use the Database in ways that it simply cannot support, leading to the possibility of invalid or unreliable results. This post summarizes the Article's main arguments.

The primary challenges presented by the Database involve the coding for the "issue," "issue area," and "legal provision" variables. As the names of these variables suggest, they are frequently used by researchers interested in studying law and legal doctrine. Yet the coding protocols for these variables (as set forth in the Codebook) are not conducive to such research. Some of the limitations of these variables include:

(A) The "issue" variable is not, despite its name, designed to identify any legal issues in a case. Rather, it is designed to identify the "public policy context" of a case. A case like Schenck v. Pro-Choice Network of Western N.Y. is one example. In Schenck, a group of abortion protesters challenged an injunction limiting their activities as violating the First Amendment. The only legal issue in the case involves the First Amendment and the limits it places on judicial power. But the Database codes the case as having an issue of "abortion" because that is the factual, or "public policy," context in which the case arises.

(B) The coding contains a strong presumption of assigning each case only a single issue. So the Database does not add a First Amendment issue code to the coding of Schenck.

(C) The issue codes are quite underinclusive and somewhat dated. For example, there are no codes for immunities, for sexual harassment, or for the dormant commerce clause.

(D) Each of the approximately 260 issue codes is classified into one, and only one, of 13 "issue areas." In some cases, the classification makes no sense. For example, in Markman v. Westview Instruments, Inc., the Court addressed the question of whether patent claim construction is a question for the judge or the jury; that is, whether there is a Seventh Amendment jury right. The Database classifies Markman as a case about the right to a jury trial, but that code, which does not distinguish between civil and criminal jury rights, is located in the Criminal Procedure issue area.

(E) The legal provision code does not identify cases or judge-made legal doctrines. It is limited to identifying statutes, constitutional provisions, and court rules.

(F) The coding protocols provide that only legal provisions mentioned in a case's syllabus should be identified. But the syllabus -- a short summary of the case -- is akin to headnotes. It is not officially part of the case, it is not written by the justices or their law clerks, and it cannot be cited by lawyers or judges.
To some extent, misuses of the Database are likely due to differences in the ways that different disciplines (political science and law) use the same words. To some extent, they stem from scholars failing to evaluate their research designs in light of the Database's coding protocols, which are described in the Database's Codebook. In my Article, I provide a series of examples of research projects that fail to adequately take account of the Database's limitations and that therefore produce results that may be inaccurate.

To further explore the limitations of the Database and to experiment with more legally nuanced types of coding, I undertook a Recoding Project of a random sample of 10% of the cases from the last Rehnquist natural court. The details of the coding project are, of course, explained in the Article. Among other things, I redefined "issue" to mean legal issue, I expanded and rearranged the lists of issues and issue areas, I put no limit on the number of issues that could be coded per case, I redefined "legal provision" to include seminal cases and legal doctrines, and I identified legal provisions by looking at the opinions themselves, not just the syllabi. Some of the key findings of the Recoding Project include:

(1) I identified an average of 3.7 issues and 2.4 issue areas per case, rather than the single issue and issue area per case identified in the Database.

(2) I identified an average of 2.2 times as many legal provisions per case as the original Database.

(3) A surprising number of the legal provisions that I identified should have been identified in the Database because they were mentioned in the syllabi.

(4) In both issue and legal provision coding, the "missing" codes -- those that I identified but that the Database did not -- disproportionately related to structural and jurisprudential issues, including procedure, the powers and operations of the federal and state governments, and the relationship between different branches of government.

These and other findings have a variety of implications for researchers working with the Database. Chief among these is the importance of not drawing conclusions about the Supreme Court's cases by looking only at the numbers and types of issues, issue areas, and legal provisions coded. Researchers all too often rely on such information to draw conclusions about case complexity or about the number of issue "dimensions" in the cases. In other words, researchers sometimes point to the Database to justify their assumption that most Supreme Court cases involve only a single issue. But as the Article demonstrates, this single-issue coding is -- or at least may well be -- an artifact of a coding protocol that presumes that each case should be assigned only a single issue, so such conclusions are circular. A second important implication is that the Database's issues and issue areas do not accurately identify all cases involving particular legal issues, and that not all cases with a particular issue or issue area code in fact involve the legal issues that a researcher might presume from the names of those codes.
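For readers who work with coded case data, a toy version of the issues-per-case comparison may help fix ideas. This is my own sketch, not the Article's code, and the case identifiers and issue labels are hypothetical; the point is only that a one-issue-per-case protocol makes single-issue cases true by construction.

    # Toy sketch (not the Article's code): counting issues per case under a
    # one-issue-per-case protocol versus a multiple-issue recoding.
    # All case IDs and issue labels are hypothetical.
    import pandas as pd

    # Original-style coding: exactly one issue code per case.
    original = pd.DataFrame({
        "case_id": ["A", "B", "C"],
        "issue": ["abortion", "jury trial", "search and seizure"],
    })

    # Recoded-style coding: one row per (case, issue) pair, so a case can
    # carry several issues (e.g., Schenck would also get a First Amendment code).
    recoded = pd.DataFrame({
        "case_id": ["A", "A", "B", "B", "C"],
        "issue": ["abortion", "First Amendment", "jury trial",
                  "Seventh Amendment", "search and seizure"],
    })

    print(original.groupby("case_id")["issue"].nunique().mean())  # 1.0 by construction
    print(recoded.groupby("case_id")["issue"].nunique().mean())   # ~1.7 here; 3.7 in the Article's sample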
Over at Concurring Opinions Max Miner has an interesting post about an old book, Burglars on the Job (by Richard T. Wright and Scott Decker). Miner's post emphasizes the authors' methodology:
"Rather than interviewing incarcerated burglars, they set out to find
active burglars in the community. They drew on a network of people who
they believed were likely to know criminals. Interviewees would
introduce them to burglars who in turn would introduce them to other
burglars. This approach introduces a selection effect, of course, but
avoids the obvious selection bias arising from only interviewing
burglars in prison."
Given the research question and real-world limitations, I concede that some form of selection effect is perhaps inevitable. I am not sure, however, which approach injects more bias--only that different flavors of bias arise.
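To make those flavors concrete, here is a toy simulation, entirely my own construction with made-up numbers: a prison sample over-weights burglars who get caught, a snowball sample over-weights the well-connected, and the two frames yield different pictures of the same underlying population.

    # Toy simulation of two sampling frames for "burglars"; all parameters
    # are invented for illustration.
    import numpy as np

    rng = np.random.default_rng(7)
    n = 100_000

    skill = rng.normal(0, 1, n)                  # latent skill of each burglar
    connections = rng.normal(0.3 * skill, 1, n)  # skill and connectedness correlate

    # Prison frame: the probability of being caught falls with skill.
    in_prison = rng.random(n) < 1 / (1 + np.exp(2 * skill))

    # Snowball frame: the probability of referral rises with connectedness.
    snowballed = rng.random(n) < 1 / (1 + np.exp(-2 * connections))

    print("Population mean skill:     ", round(skill.mean(), 2))
    print("Prison-sample mean skill:  ", round(skill[in_prison].mean(), 2))
    print("Snowball-sample mean skill:", round(skill[snowballed].mean(), 2))

Neither sample recovers the population mean; they simply miss in different directions, which is the sense in which the biases come in different flavors.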
Information on the workshop, organized by Lee Epstein (Northwestern) and Andrew Martin (Wash U) and scheduled for June 23-25 in Chicago, is found here. A summary follows.
"The Conducting Empirical Legal Scholarship workshop is for law school faculty
interested in learning about empirical research. Leading empirical scholars Lee
Epstein and Andrew Martin will teach the workshop, which provides the formal
training necessary to design, conduct, and assess empirical studies, and to use
statistical software (Stata) to analyze and manage data. Participants need no
background or knowledge of statistics to enroll in the workshop."
Summer has arrived, finals are graded, and ELSers' thoughts inevitably turn to research and writing. With that in mind, I'm beginning a semi-regular series of posts titled "What Not To Do." The goal is to point out some common mistakes people make in presenting empirical/statistical analyses, and to suggest some better practices.
The first subject is naming variables, and it has two parts. The first has to do with the names we give variables themselves. While some variables have "natural" names that reflect their "natural" coding (think of a variable called age, for example), most (e.g., gender, race, or partyid) do not. This occasionally leaves researchers in a bad spot; returning to analyses done weeks or months before, one might wonder, "Does gender=1 mean males or females?" A better practice is to choose variable names that indicate directionality whenever possible: female instead of gender, white instead of race, GOP instead of partyid, and so forth. (Of course, assigning variable and value labels will also solve this problem, and is good practice in its own right...)
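For those who work in pandas rather than Stata (where rename and the label commands do the same job), the advice looks something like this; the codings assumed here (1 = female, 1 = Republican) are my own illustration.

    # Minimal illustration of directional variable names; the codings
    # (1 = female, 1 = Republican) are assumptions for the example.
    import pandas as pd

    df = pd.DataFrame({"gender": [1, 0, 1, 0], "partyid": [0, 1, 1, 0]})

    # Weeks later, "gender == 1" is ambiguous; a directional name is not.
    df = df.rename(columns={"gender": "female", "partyid": "GOP"})

    print(df)  # female and GOP read as self-documenting 0/1 indicators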
Second, there is an unfortunate tendency to use variable names (of the sort used to identify variables in databases) in tables, figures, text, and the like. An anonymized example (culled from a relatively recent issue of a good law review) is here:
The variable names here are, shall we say, a bit opaque; moreover, they were clearly culled directly from the software output ("SubEqInv"? "LoneClub"?). The result is a table that violates Rule #1 of Tables and Figures: They should "stand on their own."
A better practice is to use variable descriptions, rather than statistical-software variable names, in tables and figures; an example of such a better use (also from a recent issue of a well-regarded law journal) is here:
Here, the names are full descriptions of the variables, and the result is a much clearer picture of what's going on in the analysis.
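One low-tech way to get there is to map the software names to full descriptions at the very last step, just before rendering the table. In the sketch below, the coefficient values and the expansions of "SubEqInv" and "LoneClub" are my hypothetical guesses (the original table was anonymized); only the opaque names come from the post.

    # Sketch: swap software variable names for readable descriptions before
    # rendering a table. Values and name expansions are hypothetical.
    import pandas as pd

    results = pd.DataFrame(
        {"Coefficient": [0.42, -0.13], "Std. Err.": [0.11, 0.05]},
        index=["SubEqInv", "LoneClub"],
    )

    descriptions = {
        "SubEqInv": "Subordinated equity investment (hypothetical expansion)",
        "LoneClub": 'Sole "lone club" lender (hypothetical expansion)',
    }

    print(results.rename(index=descriptions))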
I'll talk more about tables of results in Part II.
As John notes, "unfortunately, counterintuitive empirical results almost always turn out to be wrong if they are not based on an appropriate empirical methodology for the inquiry at hand. In my opinion, the methodology of the Barondes (paper) is flawed, and the conclusions drawn from this research are either incorrect or unfounded." (To be fair, Barondes responds to some of John's critiques in comments to John's post.)
An interesting post (here) discusses the practice of "discounting" (often understood to mean: ignoring) empirical results, all in an effort to be "skeptical." An excerpt:
"A vast number of scientists have managed to convince themselves that skepticism means, or at least includes, the opposite of value data. They tell themselves that they are being “skeptical” — properly, of course — when they ignore data. They ignore it in all sorts of familiar ways. They claim “correlation
does not equal causation” — and act as if the correlation is
meaningless. They claim that “the plural of anecdote is not data” —
apparently believing that observations not collected as part of a study
are worthless. Those are the low-rent expressions of this attitude. The
high-rent version is when a high-level commission delegated to decide
some question ignores data that does not come from a placebo-controlled
double-blind study, or something similar."
The author goes on to note (with no small amount of irony) that: "These methodological beliefs — that data above a certain threshold of rigor are valuable but data below that threshold are worthless — are based on no evidence."