As previously mentioned, CELS 2015 will be held at Washington University Law School, in St. Louis, on Oct. 30-31, 2015. Conference organizers Adam Badawi, Rebecca Hollander-Blumoff, and Pauline Kim recently announced the Call for Papers (here). Please note the June 26, 2015, submission deadline.
A small but interesting wrinkle. Many data sets include cases with missing data (hopefully not too many) and these cases will be excluded from many regression specifications. When "describing" the data set, does one use all of the cases (including those cases excluded from regression analyses) or just those cases included in the regression? If it's the latter, is there an easy way to generate basic summary statistics for the non-excluded cases? (The answer to the final question is "yes," and a helpful explanation--and illustrations--can be found here.)
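For readers working in Python rather than Stata, the same idea can be sketched with pandas: restrict the data to the cases the regression would actually keep, then describe that subset. The variable names below are hypothetical.

```python
import numpy as np
import pandas as pd

# Hypothetical data set: 'award' is the outcome, 'trial_length' a predictor
# with some missing values (the cases a regression would drop).
df = pd.DataFrame({
    "award":        [10.0, 25.0, 5.0, 40.0, 15.0],
    "trial_length": [3.0, np.nan, 2.0, 7.0, np.nan],
})

# Summary statistics for the FULL data set (all cases):
print(df.describe())

# Summary statistics for the estimation sample only: drop any case
# missing a variable used in the regression, then describe.
model_vars = ["award", "trial_length"]
estimation_sample = df.dropna(subset=model_vars)
print(estimation_sample.describe())
```

In Stata the analogous move is to summarize conditional on the estimation sample after fitting the model, as the linked explanation illustrates.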
Interest in specialized courts continues to grow, and some of this growth takes place outside of the U.S. To explore whether specialized courts achieve the goals claimed by proponents, a recent paper, Do Specialized Courts Make a Difference? Evidence from Brazilian State Supreme Courts, exploits variation in how Brazilian state supreme courts engage in constitutional review. In their paper, Carolina Arlota (Oklahoma) and Nuno Garoupa (Texas A&M) compare decisions from Brazil's non-specialized en banc courts and "specialized" court panels. An excerpted abstract follows.
"The dataset considered 630 cases of abstract review judged between January 1, 2006, and December 31, 2010, across twenty-five state supreme courts of the Brazilian federation. The main purpose of our inquiry is to determine whether or not there are significant variations in the outcome of the cases of abstract review as a function of a specialized panel. We find some evidence that the existence of specialized panels matters for the likelihood and rates of dissent as well as duration of procedures, but not for other variables. Implications for legal reform are also discussed."
Compare the following two visual displays of quantitative information. The former is described by some (here) as a "bad chart." While perhaps reasonable minds can disagree, the latter is generally recognized as "probably the best statistical graphic ever drawn" (here).
Not infrequently, researchers need to merge two separate data files into one. What should be an easy task is instead fraught with tricky details. Structural issues (e.g., are you adding new "cases" or, rather, new data to existing cases?) warrant initial attention, as their resolution drives downstream ID "linking" issues. Parts of this general issue are helpfully discussed in a thread (here).
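A pandas sketch of the two structural cases (file and variable names are hypothetical): adding new cases calls for stacking, while adding new variables to existing cases calls for an ID-based merge, where `validate=` and `indicator=` help surface the tricky details.

```python
import pandas as pd

# Case 1: adding new CASES -- two files with the same variables are stacked.
wave1 = pd.DataFrame({"case_id": [1, 2], "outcome": [10, 20]})
wave2 = pd.DataFrame({"case_id": [3, 4], "outcome": [30, 40]})
stacked = pd.concat([wave1, wave2], ignore_index=True)

# Case 2: adding new VARIABLES to existing cases -- files are linked on a
# shared ID. validate= raises an error if the IDs are unexpectedly
# duplicated; indicator= adds a "_merge" column flagging cases that
# appeared in only one file.
attrs = pd.DataFrame({"case_id": [1, 2, 3], "court": ["A", "B", "A"]})
merged = stacked.merge(attrs, on="case_id", how="left",
                       validate="one_to_one", indicator=True)
print(merged)
```

Here case 4 has no match in the attribute file, so its `_merge` value is `left_only` and its `court` value is missing, exactly the kind of silent mismatch worth checking after any merge.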
"Using time diary and survey data from the Panel Study of Income Dynamics Child Development Supplement, the authors examined how the amount of time mothers spent with children ages 3–11 (N=1,605) and adolescents 12–18 (N=778) related to offspring behavioral, emotional, and academic outcomes and adolescent risky behavior. Both time mothers spent engaged with and accessible to offspring were assessed. In childhood and adolescence, the amount of maternal time did not matter for offspring behaviors, emotions, or academics, whereas social status factors were important. For adolescents, more engaged maternal time was related to fewer delinquent behaviors, and engaged time with parents together was related to better outcomes. Overall, the amount of mothers’ time mattered in nuanced ways, and, unexpectedly, only in adolescence."
Over at Concurring Opinions Dave Hoffman (Temple) wonders whether legal empiricists need to broaden their traditional approach when it comes to significance testing. Hoffman's take is that "... given what’s happening in cognate disciplines, it might be time for law professors to get comfortable with a new way of evaluating empirical work."
Registration is now open for the 14th Annual Conducting Empirical Legal Scholarship Workshop conducted by Professors Lee Epstein and Andrew Martin.
The workshop is Monday-Wednesday, June 15-17, 2015, at Washington University School of Law, St. Louis, MO.
The workshop is for law school faculty, political science faculty, and graduate students interested in learning about empirical research and how to evaluate empirical work. It provides the formal training necessary to design, conduct, and assess empirical studies, and to use statistical software (Stata) to analyze and manage data. No background or knowledge of statistics is required.
The first part of the post on the ELS-social science interaction was based on some reflections from attending psychology and ELS conferences. This post is based on another hat that I, like many law professors in Israel, have recently started wearing: interaction with policy making at the national level. Interestingly, while many Israeli academics (myself included) tend to complain about the relative lack of resources for conducting research, one of the nice things about Israel is that, unlike in the U.S., you don't have to be someone of the caliber of, say, Cass Sunstein for policy makers to listen to you. It is very common for a rank-and-file academic in areas like law, economics, or management to have a huge impact on national-level policy making.
Over the past year, I have spent one day a week at a think tank in Jerusalem, where I am involved in attempting to build a BIT (Behavioral Insight Team) for Israel and in conducting various field studies on employment discrimination, curbing government corruption, and improving public-sector employment.
In contrast to some other law professors who try to have an impact on legal and public policy, I see myself as an ELS scholar, which raises a tension that I feel is worth airing on a blog such as this one.
Specifically, what is the minimal quality of evidence one needs in order to support a policy initiative? Or, to ask the question differently, does it make sense for ELS scholars to base their arguments on empirically driven reasoning when the empirical support is limited? The reason I think this question is especially interesting for ELS is that we operate in a legal world where most legal scholars base their policy arguments on common sense and intuition, something you would not see in psychology or economics.
An obvious and sensible response to this question is to conduct better research. However, there are easy examples where even the best research is limited. Consider, for example, the empirical limitations, from a causality perspective, of studying something like the long-term effects of child abuse. While it is not my area of specialty, I can't imagine a jurisdiction where anyone would wait for the best research before supporting very strong interventions by the authorities.
I will briefly mention two examples from areas I know better in order to illustrate the dilemma I pose.
The first is the now-famous initiative of building behavioral insight teams to advise governments on using behavioral economics and psychology to improve regulation. There is now a great deal of evidence coming from research done by existing BITs in the U.K., the U.S., and an increasing number of other countries. When mentioning one of their studies to policy makers in Israel, you will often hear that "this is not going to work in Israel." In some contexts this statement makes a lot of sense, and cultural variation is a big factor in topics such as compliance. Ideally, we would replicate these studies with an Israeli population. The question I want to raise, however, is whether, in the meantime, partial data should still be worth something when the alternative approach is adopted purely on the basis of common sense.
A second example I was personally involved with relates to curbing corruption. I was asked to prepare a program to curb governmental corruption, to be adopted by some parties in the recent elections we had in Israel. Some of my suggestions were based on experimental studies I conducted in a "corruption lab" I was affiliated with at Harvard University. Most of the relevant research on corruption comes from studies I have done on platforms such as mTurk or at the Harvard Decision Lab, and from similar platforms used by various psychologists and management scholars working on cheating. Can I even use this lab data to suggest how corruption should be curbed in the state of Israel? I can list numerous reasons why there is a difference between what I found in a lab in Cambridge, MA, and how Israeli politicians actually behave. Naturally, being fully honest about the limitations of the data and giving all the disclaimers is the best solution. However, there is also some level of self-deception and moral licensing here. I have noticed that people who are not empirically savvy don't take your caveats and warnings seriously enough, and numbers sometimes have too large an effect on people.
In this two-part post, I want to discuss two types of interactions I have witnessed many ELS scholars experience: the interaction between empirical legal scholars and scholars from other relevant disciplines, and the interaction between empirical legal scholars and legal scholars. In the first part of the post, I will mostly focus on methodological and theoretical concerns, while in the second part I will focus on finding-policy disparities. In both parts, I will focus only on the tensions as I see them emerging, without offering solutions; those I leave to smarter people.
A first anecdote illustrating the first type of tension arose during a presentation by my longtime collaborator Doron Teichman of HUJI on anchoring legal standards (Teichman, Feldman & Schurr (R&R)) at a conference in Israel attended by both psychologists and legal scholars. Anchoring, in short, is usually defined as "a cognitive bias that describes the common human tendency to rely too heavily on the first piece of information offered when making a decision" (Shrotriya & Pandey, 2013). The classical anchoring studies in psychology (e.g., Tversky & Kahneman, 1974) usually relied on the effect of a random four-digit number, or a number generated by a wheel of fortune, on people's judgment. In contrast, the studies done by legal scholars, including us, usually involve damages requested by lawyers or numbers that emerged from some other legal source.
The mostly justified criticism of this argument is that the legal usage of anchoring is not considered a pure anchoring effect, because the original stimulus is not completely orthogonal to the target; that is, the stimulus is not clean and might carry other types of rational influence. Clearly, the number argued for by a lawyer, even in an adversarial system, carries some meaning about the size of the claim, unlike the wheel-of-fortune number in the original psychological experiment.
That being said, we should ask ourselves whether empirical legal studies needs to compete with psychology on the same terms. If we compare the wheel of fortune with the number arising from a legal suit, clearly the first carries much greater internal validity: any effect in the first case could not be interpreted as anything but a cognitive bias. Admittedly, this is not the case when we look at the influence on judgment of a legal claim. However, if we see empirical legal studies as an academic community that seeks to understand how cognitive biases might affect the legal system, then clearly the scenario of someone using a wheel of fortune to try to affect a judgment is farfetched.
An additional area where the difference between ELS and psychology can be seen relates to methodological norms I see in psych journals but not in ELS scholarship. Among them are norms related to 'open science' and the need to preregister research hypotheses and the planned number of experiments. In addition, I see a requirement for replication that I do not yet see to the same degree in ELS scholarship, where it is sometimes acceptable to publish a paper with one experiment (as in some economics papers). Moreover, while this is changing, the behavioral measures typically gathered in psychology journals are as of now more extensive than those I usually (but not always) see in legal publications.
On the other hand, there are some areas where ELS methodology might be superior. For example, regarding the type of participants, I have noticed that researchers receive criticisms at ELS conferences that they would not get at a psych conference. In addition, the expectation of multiple methods seems to be a more common request from legal reviewers of grant proposals, in a way you would not always expect in pure psych grants (although this is changing too). Along the same lines, the need to take into account various theories and alternative explanations, even from different disciplines, is not at the same level in psych research, where it is more common to work within one theoretical school of thought. In this regard, the mere fact that, in contrast to psychology or economics, ELS does not hold a clear agency model has both positive and negative effects on the freedom to choose methods for mapping a certain phenomenon (e.g., no firm expectation of an incentive-compatible design).
Ideally, as ELS becomes established in the years to come, we would see two positive developments. The first is a more cohesive ELS, in which, say, researchers who do ELS from a behavioral perspective would be more integrated with researchers who do ELS from an institutional perspective. The second is that it will become harder to accuse ELS of using less "sophisticated" methods relative to the disciplines from which it borrows its methods.
The presence of "too many zeros" is a common challenge in empirical legal research. For example, "most" cases do not pursue appeals, the outcome of many civil trials (e.g., a finding of no liability) generates "zero" damages, etc. Thus, the distributions of outcome variables of interest are not infrequently skewed and this data skew warrants attention.
"Tobit models are often applied to deal with the excess number of zeros, but these are more appropriate in cases of true censoring (e.g., when all negative values are recorded as zeros) and less appropriate when zeros are in fact often observed as the amount awarded. Heckman selection models are another methodology that is applied in this setting, yet they were developed for potential outcomes rather than actual ones. Two‐part models account for actual outcomes and avoid the collinearity problems that often attend selection models. A two‐part hierarchical model is developed here that accounts for both the skewed, zero‐inflated nature of damages data and the fact that punitive damage awards may be correlated within case type, jurisdiction, or time. Inference is conducted using a Markov chain Monte Carlo sampling scheme. Tobit models, selection models, and two‐part models are fit to two punitive damage awards data sets and the results are compared. We illustrate that the nonsignificance of coefficients in a selection model can be a consequence of collinearity, whereas that does not occur with two‐part models."
Last week I discussed the role of legal theory in ELS. Fishman's paper serves as a highly relevant example of this same tension (Fishman, 2013).
In the previous post, I suggested that a greater interconnection between legal theory and empirical legal studies could also help mitigate the criticism of the dramatic increase of ELS in certain legal communities. In today's post I will examine a question I probably should have addressed earlier: why are so many Israeli legal scholars doing ELS, and what can we learn from this about the likelihood of a similar increase in other countries?
I have not studied this question seriously, but it seems to me that, relative to its size, Israel sends more scholars to ELS conferences than any other non-U.S. country. (However, there is a long list of biases, availability being an obvious example, that could explain why my perception may be inaccurate.) A similar observation was made in the past by Oren Gazal-Ayal of Haifa in the context of law and economics (Gazal-Ayal 2007).
Like scholars from other countries, Israeli scholars need to publish in top U.S. journals in order to be promoted (this is of course not the case in all countries, and I actually think the exact opposite holds in the legal academia of larger European countries). The need to publish in U.S. journals to get promoted gives an obvious advantage to legal scholars who were educated in the U.S. Furthermore, the argument Gazal-Ayal raises with regard to law and economics is that Israelis who want to participate in the global market of ideas, with special emphasis on the U.S. academic market, focus on law and economics, which tends to be a more universal area relative to more doctrinal areas of research that are jurisdiction dependent. One might wonder whether this is the case with regard to empirical legal studies as well. In principle, many similarities can be identified: in both approaches the necessary knowledge of doctrine is minimal (but see my previous post); knowledge of math and/or statistics, respectively, can substitute for knowledge of U.S. law; and both partly rely on disciplinary fields that claim to be universal (e.g., economics or psychology). However, there are also some notable differences between these two communities of knowledge. First, in many strands of empirical legal studies, much of an argument's development involves collecting data about legally relevant institutions. In many of these institutions, the variation among countries is such that evidence from one country could never influence American legal policy without further "local" findings. Another strand of empirical legal studies is based on experimental methods. Admittedly, in most areas of experimental psychology, the country in which an experiment was conducted is not very important. In experimental legal analysis, however, questions of cultural context might play a much larger role than in many other applied sciences.
Two other colleagues of mine, Ariel Bendor and Yifat Holtzman-Gazit, have written a paper in Hebrew in which they suggest that the focus on ELS is driven by the pressure Israeli universities put on researchers to win grants. In many Israeli universities, getting grants is necessary for promotion. (In Israel there are 4-5 academic ranks, in contrast to 2-3 in the U.S., so promotion looms larger in the life of Israeli scholars.) Naturally, they argue, it is easier to get grants for empirical projects than for theoretical ones. While I personally would like to think that this is not the case, I have no empirical evidence against it.
In sum, if similar conditions exist in other countries (e.g., the premium on universal rather than local-law scholarship, and the pressure to win grants), my colleagues' reasoning might suggest that ELS is about to become more and more global.
As noted elsewhere (e.g., here), recent public fascination with (and debate about) the color of a dress hints at larger issues germane to law, including the efficacy and reliability of video evidence. A Slate magazine piece considers explanations for "visual ambiguity."
In my previous post, I discussed the potential criticism of the increase in the number of empirical legal scholars in a given community. To some extent, this criticism is related to the critique of the current structure of empirical legal studies advanced by Hanoch Dagan, a prominent legal theorist and former dean of TAU. Dagan recently wrote a paper analyzing this topic with Roy Krietner and Tami Krichelli-Katz (forthcoming in Law and Social Inquiry 2015). In the paper, the three demonstrate how empirical research on topics such as compliance and employment discrimination should be conducted.
The essence of Dagan, Krietner, and Krichelli-Katz's argument, as I understand it, concerns the disconnect between legal theory and empirical legal scholarship. The three claim that empirical legal studies scholars use law as a database for economic, sociological, or psychological analysis rather than truly interacting with the normative questions that could be answered and should be the focus. Without interaction with legal theory, the recommendations of empirical legal studies will influence only public policy rather than legal policy.
It seems to me that there are two main accounts of their observation: one focuses on people, while the other focuses on the nature of the field.
The 'people'-oriented argument relates to an effect we might call crowding out. People have limited energy and ability, and therefore the focus on being methodologically accurate crowds out the focus on theory. Other, more minor arguments also support this perspective: a methodological mistake is presumably unforgivable, while a lack of reliance on legal theory is not. And as ELS becomes more and more methodologically sophisticated, scholars with a very rigorous methodological background are increasingly likely to come from departments other than law.
The 'field'-oriented perspective relates to the nature of many methodological approaches and to how the relationship between empirical research and legal theory is understood. Many scholars from departments like psychology, and even law, view the behavioral analysis of law, which relies heavily on ELS, as an applied science. In that case, Dagan's criticism is apt, as the likelihood that legal theory would be used properly is minimal. If, however, empirical legal studies aims at contributing to a normative discourse that exists within law, and the empirical component is able to fit into the theory, then the picture is different.
In the next post I will examine the background of the phenomenon discussed here: why ELS is so successful in Israel, and whether it will spread to other countries at the same pace.