Herb Kritzer (Minnesota Law), a seminal figure in the Law & Society movement, has recently posted a paper that provides an exhaustive survey of empirical legal studies in the pre-1940 era. His paper, "Empirical Legal Studies Before 1940: A Bibliographic Essay", includes a full bibliography of the articles he located. The introduction of the paper is worth posting in full:
“Empirical Legal Studies” is a term that began to come into vogue around 2000. ELS built on and extended the law and society (socio-legal studies) approach and the law and economics approach, both of which have strong empirical elements. Of course empirical research on law and legal processes predate law and society and law and economics, with a number of well known studies that were conducted in the 1950s and early 1960s. These mid-century projects include the American jury project (Kalven and Zeisel 1971), the commercial arbitration study (Mentschikoff 1952; Mentschikoff 1961), the court delay study (Zeisel and Callahan 1963; Zeisel, Jr., and Buchholz 1959), the pretrial settlement conference study (Rosenberg 1964), studies of Supreme Court decision making (Kort 1957; Kort 1966; Pritchett 1948; Schubert 1959; Schubert 1963; Schubert 1965; Snyder 1958), studies of the legal profession (Carlin 1962; Carlin 1966; Johnstone and Hopson 1967; Smigel 1964; Zander 1968), studies of compensation for injuries suffered in auto accidents (Connard, Morgan, Pratt, Voltz, and Bombaugh 1964; Hunting and Neuwirth 1962; Linden 1965; Morris and Paul 1962), and studies of the lower courts (Dolbeare 1967; Green 1961; Jacob 1969). Many of these studies of the 1950s and 1960s have framed research agendas that continue to this day. While the mid-century studies are reasonably well-known, much more obscure is the body of research conducted before World War II. In the 1920s and 1930s, and in a few cases even earlier, one can find a wide range of empirically-oriented research on law. The specific topics of this early research include:
Appellate courts and appellate decision making
Automobile accident compensation and litigation
Judicial staffing and judicial selection
Juries (both petit and grand)
Legal needs and legal aid
A significant portion of the early work was linked one way or another to the legal realist movement, and one can find some description of the research in historical treatments of legal realism (Kalman 1986; Schlegel 1980). In significant part because legal realism was essentially an American movement, almost all of the empirical legal research of the period was done by Americans focusing on the United States. The purpose of the discussion that follows is to highlight the range of empirical research on law from the pre-World War II period. To locate that research, I examined the tables of content of all law reviews published prior to 1940, reviewed a large number of English government reports (command papers), and followed up on any citations I could find within those works to other sources. While I do not discuss every study I found, I have provided as complete a bibliography of that research as possible. My discussion of the research is organized around a series of topics. While most studies easily fall within a single topic, a few stretch across multiple topics, and appear more than once.
A recent article posted on SSRN, Reproduction of Hierarchy? A Social Network Analysis of the American Law Professoriate, asks the intriguing question of the extent to which law professors influence the development of the law. To begin to answer the question, the authors apply social network theory to the social structure of legal academia, drawing on information on each tenure-track professor employed at an ABA-accredited law school. The authors tracked where each professor received his or her American law degree (they excluded faculty without such degrees) and where the professor now teaches, and then used this data to generate a network analysis. This analysis produces visual representations of the resulting networks, like the diagram below, which illustrates the centrality of a small number of schools – and especially Harvard and Yale – in this academic network. The authors caution, however, that such visualizations are not sufficient to understand underlying structural relationships. They therefore apply several statistical measures: outdegree (the number of arcs emanating from each node, or more concretely here, the number of faculty members produced by each law school), hubs (measuring the relative prestige of the origin and endpoint of each arc, that is, the relative prestige of each institution – “Hubs are the schools with a high degree of influence on other influential schools...”), and closeness (measuring the tightness of connections between different institutions). Like the visualization, these measures demonstrate an extreme skew towards the influence of a small number of schools. Finally, the authors use the network analysis to model the rate at which ideas spread from one institution to another – the institutions’ intellectual infectiousness.
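For readers unfamiliar with these measures, a minimal Python sketch may help. The schools, arcs, and numbers below are invented for illustration; they are not the authors' data or code.

```python
from collections import deque

# Hypothetical placement network: an arc (A, B) means a graduate of
# school A teaches on the faculty of school B.  Illustrative data only.
arcs = [
    ("Harvard", "Yale"), ("Harvard", "Michigan"), ("Harvard", "Texas"),
    ("Yale", "Harvard"), ("Yale", "Michigan"), ("Yale", "Virginia"),
    ("Michigan", "Texas"), ("Texas", "Virginia"),
]
schools = sorted({s for arc in arcs for s in arc})

# Outdegree: how many faculty members each school has placed elsewhere.
outdegree = {s: sum(1 for src, _ in arcs if src == s) for s in schools}

# Hub scores via the HITS iteration: a school is a strong hub when its
# graduates sit at schools that are themselves strong "authorities".
hub = {s: 1.0 for s in schools}
auth = {s: 1.0 for s in schools}
for _ in range(50):
    auth = {s: sum(hub[src] for src, dst in arcs if dst == s) for s in schools}
    hub = {s: sum(auth[dst] for src, dst in arcs if src == s) for s in schools}
    norm = sum(hub.values()) or 1.0
    hub = {s: v / norm for s, v in hub.items()}
    norm = sum(auth.values()) or 1.0
    auth = {s: v / norm for s, v in auth.items()}

def closeness(school):
    """Reciprocal of the average directed shortest-path distance from
    `school` to every school it can reach (breadth-first search)."""
    dist = {school: 0}
    queue = deque([school])
    while queue:
        u = queue.popleft()
        for src, dst in arcs:
            if src == u and dst not in dist:
                dist[dst] = dist[u] + 1
                queue.append(dst)
    reached = [d for d in dist.values() if d > 0]
    return len(reached) / sum(reached) if reached else 0.0

for s in schools:
    print(f"{s:10s} out={outdegree[s]} hub={hub[s]:.2f} close={closeness(s):.2f}")
```

Even on this toy graph, the two schools whose graduates populate the other influential schools dominate all three measures, which is the pattern the authors report at scale.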
Although they acknowledge that this model involves a number of simplifying assumptions that themselves need to be examined, they argue that the model provides a first cut at demonstrating how quickly and effectively new legal or intellectual paradigms can spread from one school to another, where they are then incorporated into the education and socialization of future lawyers. This is one possible mechanism by which changes in the law become widely accepted. The piece has five authors, all from the University of Michigan: Daniel Katz, Josh Gubler, Jon Zelner, Eric Provins, and Eitan Ingall.
While failing to give CELS its full due as THE annual conference on empirical legal studies, it does mention Lee Epstein's keynote speech. And the article includes pointed quotes from Jeff Rachlinski and Ted Eisenberg on the court's understanding of research on punitive damages.
I have posted on SSRN my article, forthcoming in Hastings Law Journal, Coding Complexity: Bringing Law to the Empirical Analysis of the Supreme Court. This article examines the well-known and widely-used U.S. Supreme Court Database (created by Harold Spaeth) – and most recently mentioned here – and addresses the Database’s limitations, particularly for those interested in law and legal doctrine. The key point of the Article is that the Database does not contain complete or accurate information about law and legal doctrine as they appear in Supreme Court opinions. Given Harold Spaeth’s own purposes in creating the Database, these limitations may not be surprising -- although they do raise at least some challenges to his attitudinal model. Unfortunately, however, they are frequently misunderstood. Scholars all too frequently use the Database in ways that it simply cannot support, leading to the possibility of invalid or unreliable results. This post summarizes the Article’s main arguments. The primary challenges presented by the Database involve the coding for the “issue,” “issue area,” and “legal provision” variables. As the names of these variables suggest, they are frequently used by researchers interested in studying law and legal doctrine. Yet, the coding protocols for these variables (as set forth in the Codebook) are not conducive to such research. Some of the limitations of these variables include: (A) The “issue” variable is not, despite its name, designed to identify any legal issues in a case. Rather, it is designed to identify the “public policy context” of a case. A case like Schenck v. Pro-Choice Network of Western N.Y. is one example. In Schenck, a group of abortion protesters challenged an injunction limiting their activities as violating the First Amendment. The only legal issue in the case involves the First Amendment and the limits it places on judicial power.
But the Database codes the case as having an issue of “abortion” because that is the factual, or “public policy” context in which the case arises. (B) The coding contains a strong presumption of assigning each case only a single issue. So the Database does not add a First Amendment issue code to the coding of Schenck. (C) The issue codes are quite underinclusive and somewhat dated. For example, there are no codes for immunities, for sexual harassment, or for the dormant commerce clause. (D) Each of the approximately 260 issue codes is classified into one, and only one, of 13 “issue areas.” In some cases, the classification makes no sense. For example, in Markman v. Westview Instruments, Inc., the Court addressed the question of whether patent claims construction is a question for the judge or the jury; that is, whether there is a 7th Amendment jury right. The Database classifies Markman as a case about the right to a jury trial, but that code, which does not distinguish between civil and criminal jury rights, is located in the Criminal Procedure issue area. (E) The legal provision code does not identify cases or judge-made legal doctrines. It is limited to identifying statutes, constitutional provisions, and court rules. (F) The coding protocols provide that only legal provisions mentioned in a case’s syllabus should be identified. But the syllabus – a short summary of the case – is akin to headnotes. It is not officially part of the case, it is not written by the justices or their law clerks, and it cannot be cited by lawyers or judges. To some extent, misuses of the Database are likely due to differences in the ways that different disciplines (political science and law) use the same words. To some extent, misuses stem from scholars failing to evaluate their research design in light of the Database’s coding protocols, which are described in the Database’s Codebook. 
In my Article, I provide a series of examples of research projects that fail to adequately take account of the Database’s limitations and that therefore produce results that may be inaccurate. To further explore the limitations of the Database and to experiment with more legally nuanced types of coding, I undertook a Recoding Project of a random sample of 10% of the cases from the last Rehnquist natural court. The details of the coding project are, of course, explained in the Article. Among other things, I redefined “issue” to mean legal issue, I expanded and rearranged the lists of issues and issue areas, I put no limit on the number of issues that could be coded per case, I redefined legal provision to include seminal cases and legal doctrines, and I identified legal provisions by looking at the opinions themselves, not just the syllabi. Some of the key findings of the Recoding Project include: (1) I identified an average of 3.7 issues and 2.4 issue areas per case, rather than the single issue and issue area per case identified in the Database. (2) I identified an average of 2.2 times as many legal provisions per case as did the original Database. (3) A surprising number of legal provisions that I identified should have been identified in the Database because they were mentioned in the syllabi. (4) In both issue and legal provision coding, the “missing” codes – those that I identified but that the Database did not – disproportionately related to structural and jurisprudential issues, including procedure, the powers and operations of the federal and state governments, and the relationship between different branches of government. These and other findings have a variety of implications for researchers working with the Database. Chief among these is the importance of not drawing conclusions about the Supreme Court’s cases by looking at the numbers and types of issues, issue areas, and legal provisions coded.
Researchers all too often rely on such information to draw conclusions about case complexity or about the number of issue “dimensions” in the cases. In other words, researchers sometimes point to the Database to justify their assumptions that most Supreme Court cases involve only a single issue. But as the Article demonstrates, this single-issue coding is -- or at least may well be -- an artifact of a coding protocol that presumes that each case should be assigned only a single issue, so such conclusions are circular. A second important implication is that the Database’s issues and issue areas do not accurately identify all cases involving particular legal issues and that not all cases with a particular issue or issue area code in fact involve the legal issues that a researcher might presume from the names of those codes.
Dave Hoffman over at Concurring Opinions wasn't the only blogger at CELS in Ithaca. Although he and I apparently went to different panels, like him, I thought the conference was excellent and the quality of the papers and discussion extremely high. Congratulations and thanks to all of the conference organizers.
One particularly interesting paper was, coincidentally (or not), co-authored by Dave Hoffman, and was previously blogged about here. The paper, Whose Eyes Are You Going to Believe? Scott v. Harris and the Perils of Cognitive Illiberalism by Dan Kahan, David Hoffman, and Donald Braman (forthcoming in Harvard Law Review), takes advantage of a unique experiment made possible by modern technology. In Scott v. Harris, the Supreme Court addressed whether summary judgment was appropriate in a claim of excessive force where a police officer rammed his car into the car of the fleeing suspect, who was rendered quadriplegic. The Court held that the use of deadly force was reasonable under the circumstances, given the risk that the car chase posed to the public. The Court rested its conclusion on the contents of a videotape, shot from the police car itself, that was entered into evidence and that the Court posted on its website. Interestingly, however, despite the fact that the Court said that no reasonable juror could find the use of force excessive, one Supreme Court justice -- Justice Stevens -- concluded otherwise.
Taking advantage of the now publicly available videotape, the paper's authors showed the video to 1350 Americans. As they explain, "a majority agreed with the Court's resolution of the key issues, but within the sample there were sharp differences of opinion along cultural, ideological, and other lines. We attribute these divisions to the psychological disposition of individuals to resolve disputed facts in a manner supportive of their group identities." So individuals who tend to see the world hierarchically (demographically more likely to be white, male, and from the South or West), were more likely to agree with the Court majority than were individuals who take a more egalitarian perspective (demographically more likely to be nonwhite, female, and from the Northeast).
Normatively, these observations suggest that judges should, at a minimum, be cautious about the claims they make about what a "reasonable juror" could conclude. (Indeed, as the discussant, Neal Feigenson, pointed out, even if all members of the jury were completely average across all of the dimensions identified by the authors, there would still be a significant probability that at least one juror would believe that the police used excessive force.) Assuming that one's own views are the only reasonable views, which is essentially what the majority did, "invested [the Court's] decision with culturally partisan overtones that detracted from the decision's legitimacy." As the authors point out, when different sorts of people have predictably different perspectives, deliberation is particularly appropriate.
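Feigenson's point is a piece of binomial arithmetic: even a small per-juror probability of disagreement compounds quickly across twelve jurors. A quick sketch, where the values of p are hypothetical rather than figures from the study:

```python
# Probability that at least one of `jurors` independent jurors, each with
# probability p of seeing the force as excessive, would so conclude.
# The values of p below are hypothetical, not results from the study.
def prob_at_least_one(p, jurors=12):
    return 1 - (1 - p) ** jurors

for p in (0.05, 0.10, 0.25):
    print(f"p = {p:.2f} -> P(at least one juror) = {prob_at_least_one(p):.2f}")
```

Even if only one in ten "average" jurors would see the force as excessive, the chance that a twelve-person jury contains at least one such juror exceeds 70%.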
This project suggests an interesting take on whether the standard for judgment as a matter of law -- which is, the Court has emphasized, the same as the standard for summary judgment -- should be different. Perhaps once a jury has in fact heard all the evidence, it should be allowed to render a verdict, and the fact of that verdict should help to inform the judge's ruling on the JML motion. This comes up often in employment cases, where JML is often sought -- and apparently disproportionately granted -- by employer-defendants. (See here and here for more of my thoughts on courts' overzealousness in granting summary judgment and judgment as a matter of law in employment cases.)
[Update: Paul Caron (Cincinnati), Michael Madison (Pittsburgh Law), Jeff Lipshaw (Suffolk), and Jim Chen (Louisville) have picked up on the analysis in the below post. There seems to be some misunderstanding of my point about "long-term contracts." In retrospect, I should have said "long-term commitments" (i.e., extra-legal and perhaps not committed to writing) to avoid what I think is an unproductive analysis of run-of-the-mill employment and commercial contracts.
I am talking about this: Academic X says, "I will stay here X number of years and ignore outside offers if you provide me with the resources to execute the following institutional plan [e.g., labor-intensive but high-yield teaching, public service, useful scholarship that will be noticed and solve a real world problem, etc.]." Law School Y says, "I love this idea. If you are right, it will grow our institution. Because you have committed to building it here, School Y will fund it." Because both Academic X and Law School Y have aligned personal and institutional agendas, their cooperation and commitment grows the institutional pie; both are made better off. Moreover, it becomes magnetic for other scholars and funders who share the substantive vision.
So we are talking about communitarian norms here. This type of approach is easy in small groups, which is what law faculties are. Firm-specific capital in law firms is harder to grow and maintain because (a) firms have gotten larger, (b) covenants not to compete are prohibited, and (c) there are liquidity constraints imposed by the ban on non-lawyer ownership. On the other hand, law firms work harder at it because they increasingly operate in a competitive national marketplace--firm-specific capital can be a huge competitive advantage. Law schools, in contrast, are not subject to the same market pressures--the most elite have huge endowments and donors who want to give more to be associated with the elite brands. Thus, in the legal academy, the free agent ethos is damn near ubiquitous.
No need to be abstract about all this. I lay out a highly plausible counteractive approach in this comment.]
Several bloggers have noted Clayton Gillette's recent article, Law School Faculty as Free Agents, 17 J. Contemp. Leg. Issues 213 (2008). See, e.g., Paul Caron, Larry Ribstein, Al Brophy, and Paul Secunda. Gillette's essay provides the type of straight thinking needed to move the Moneyball-Moneylaw debate into a mode of institutional analysis that can produce actual results. I will briefly lay out Gillette's analysis and then extend it to a concept I call "school-specific" capital--an analog to firm-specific capital.
Law Professor Free Agency
In a nutshell, here is Gillette's argument. The lateral market for law professors is primarily based upon scholarship, which is an observable, coveted good. Teaching and service, to be sure, are relevant goods, but they are hard to measure. Further, faculty make hiring decisions; when they land a high-profile scholar, they share equally in the school's reputational gain (though these gains are largely limited to the opinions of other professors). Yet, if new colleagues shirk committee work or are disengaged and uninspiring teachers, the costs borne by individual faculty members are negligible or non-existent. Hence scholarship becomes the focus of lateral hiring. Clayton observes,
In 30 years of teaching, service as vice dean, and membership on appointments committees, I don’t believe I have ever heard a discussion of a candidate’s qualifications that included serious consideration of institutional service, except insofar as it related to scholarship. ...
[H]iring schools tend to invest little in discovering teaching quality. The hiring decision is typically made after one or two faculty members at the hiring school attend one or two of the visitor’s classes, and that is done through a process (e.g., informing the visitor when faculty members will attend, and allowing the visitor to choose that time) that diminishes the likelihood that those classes will be representative. ... The result is that, as opposed to the meticulous, highly tailored criticism to which a candidate’s scholarship will be subjected, a candidate’s teaching will be evaluated largely to determine whether it is “good enough.” (pp. 228-29)
Gillette's key insight is that the lateral market in legal academia, unlike baseball (a crucial point), does not force the decision-makers [faculty] to internalize the benefits and costs of free agent activity: Some costs potentially get externalized onto the students, alumni and law school administrators. When scholarship opens so many doors, Gillette suggests, it is easy to see how a more robust lateral market can skew institutional incentives and detract from overall educational quality.
To my mind, Gillette sets forth a very coherent and plausible analysis. [I suspect a lot of people will quibble with it, however, believing that their own lateral experience (or aspiration) reflects a more optimal outcome at the institutional level. Listeners interested in the merits of this debate should weigh the critic's potential bias.] It is an open question whether lateral mobility is really on the rise. At Indiana Law, we are building a law faculty universe database that covers 80 years of AALS schools. See "Is Lateral Movement on the Rise? A Precise Answer is on the Way," ELS Blog (Dec. 21, 2006). We see a lot of lateral movement in the 30s, 40s, 50s, 60s, and 70s. Eventually we will answer the nagging empirical question of whether lateral movement is truly on the rise.
But one thing I can say with confidence--information published on the Internet (Leiter Faculty News and Concurring Opinions) has increased the perception of heightened movement. And perception is all that is necessary to change behavior and institutional norms--possibly in the wrong direction.
Gillette actually understates his argument. Specifically, the proliferation of a free agency ethos not only undercuts educational quality, it also inhibits the cooperative, highly committed, selfless environments needed to create truly exceptional institutions. One of the major implications of greater professor mobility is the diminution of "school-specific" capital--i.e., desirable law school attributes, such as an innovative curriculum, public service reputation, and alumni good will, that remain largely intact when a professor leaves. So more free agency suggests fewer law schools that transform good human capital into great human capital. On this score, the "best" law schools can, in fact, be pretty mediocre. (I believe there is a way out of this box, which I will address below.)
Last Friday's NYT (8.8.08) included a lead story in the Business section (at C1) on a forthcoming JELS article. The subject of the NYT piece--a forthcoming article in 5:3 JELS (Sept. 2008)--reports results from a study of the financial consequences of case settlement decisions. To better assess the financial cost of going to trial, the study analyzes cases in which a settlement offer was considered, but rejected in favor of proceeding to either arbitration or trial. The findings reveal the influence of contingency fee arrangements and the availability of insurance coverage on plaintiff and defendant settlement decisions, respectively.
An article recently posted on SSRN provides some interesting data about how employers and employees fare when arbitrators’ decisions are reviewed in court. In Do Courts Create Moral Hazard? When Judges Nullify Employer Liability in Arbitration: An Empirical Analysis, Michael Leroy argues that the possibility of such review – especially when the arbitration clause provides for de novo review, as many do – creates a systematic advantage for employers. Leroy documents a growing number of bases on which courts (particularly state courts) vacate arbitration awards, providing more opportunities for successful challenges to arbitrators’ decisions. Perhaps most importantly, however, Leroy measures the rate of reversal of arbitrators’ decisions. Out of a dataset of 267 separate arbitration decisions, Leroy found that federal courts are highly deferential to arbitrators’ decisions, upholding decisions for both employers and employees at similar and extremely high rates. As a general matter, federal courts upheld awards for employees at a rate of 85% and for employers at about 92%. In state courts, however, the picture is more complex. There were larger differences between trial courts and appellate courts, for one thing, but more striking is the difference in upholding awards for employers as opposed to awards for employees, particularly at the appellate level. State trial courts and appellate courts both upheld awards in favor of employers at a rate of about 87%. But for awards for employees, trial courts upheld them 77.6% of the time, while appellate courts upheld only 56.4% of such awards. The significantly higher rates of vacatur of employee awards in state courts, Leroy argues, create a moral hazard for employers. Their incentive is to require employees to sign arbitration agreements that allow for expansive review in state court. If the employer wins in the arbitration, its chances of prevailing under court review remain quite high.
On the other hand, if the employer loses in the arbitration, the generous review offered by state courts essentially gives it a second bite at the apple. As a result, Leroy argues, employers may have less incentive to curtail legally risky behavior because they are less likely to have to pay for the consequences if sued. This article, while quite different in its focus, is reminiscent of the findings of two articles examining differential appellate court treatment of plaintiffs versus defendants. In Plaintiphobia in the Appellate Courts: Civil Rights Really Do Differ From Negotiable Instruments, a 2002 article in the University of Illinois Law Review, Theodore Eisenberg and Kevin Clermont found that in federal civil rights employment cases that terminated between 1988 and 1997, defendants who appealed trial losses prevailed on appeal 44% of the time. In other words, where a defendant appealed a verdict, generally entitled to enormous deference, there was an almost even chance that the appellate court would reverse. In contrast, an employment plaintiff who appealed from a pro-defendant verdict had only a 6% chance of prevailing. As a point of comparison, the overall reversal rate from all civil trials was 18%. A more recent follow-up (blogged about here), Plaintiphobia in State Courts? An Empirical Study of State Court Trials on Appeal by Theodore Eisenberg and Michael Heise, examined the outcomes of more than 8000 trials and about 550 appeals from 46 large counties. They found that in general, plaintiffs fare worse on appeal than defendants and that the appellate courts are more deferential to bench verdicts than jury verdicts. Consistent with the first Plaintiphobia article and with Leroy’s findings, the plaintiff/defendant disparity was very stark in the context of employment cases – with 61.5% of verdicts for plaintiffs reversed and 38.5% of the verdicts for defendants reversed.
(Both of these reversal rates are higher than the overall numbers across all case types – 41.5% of verdicts for plaintiffs reversed and 21.5% of verdicts for defendants.) All three articles discuss possible reasons for the observed disparities between plaintiffs and defendants. The Plaintiphobia articles do not find strong evidence to support selection effects, and conclude that their findings are consistent with attitudinal effects – specifically that appellate judges believe (possibly erroneously) that juries are biased towards plaintiffs. Leroy attributes the disparities at least in part to the expansion of bases for reversal of an arbitration award – a doctrinal development. Moreover, there is the possibility of a snowballing effect on doctrine – the more pro-employer cases that are decided, the more pro-employer the law becomes. At minimum, however, these articles collectively raise questions about whether the appellate playing field is level for employers and employees.
Update: It's worth noting that the two Plaintiphobia papers analyze appeals in all different kinds of cases, not limited to employment cases.
The bimodal distribution (graphic to the right) continues to generate interest in the blogosphere. See, e.g., Greg Mankiw, Right Coast, Broken Symmetry. The chart summarizes the starting salaries for lawyers who graduated from law school in 2006. One reason the bimodal structure is so jarring is that it demonstrates that measures of central tendency, such as average or median, are not necessarily reliable guides for law students' future earning power. In conventional labor markets, that disconnect is rare.
NALP recently dug into its archives to determine whether this stratification is a persistent feature of the entry level law market. See NALP Bulletin (Jan. 2008). It turns out that 15 years ago, the market followed a much more traditional distribution. The chart below summarizes the salaries for the class of 1991.
The 1991 graph is right skewed but bears some resemblance to a normal curve. Below is the graph for 1996:
The rightward skew is a bit more pronounced and the area under the $75K to $85K range is becoming more substantial. A more seismic shift is seen in 2000 (below) with the emergence of a second mode at the $125K price point.
At the height of the Internet boom, a remarkable 14% of all entry level lawyers took jobs at the $125K level. According to NALP, "never before had a single salary so dominated the landscape."
Under the 2006 bimodal distribution (see chart above), 44% of graduates received entry-level salaries in the $40K to $60K range; yet, the second mode moved further to the right ($135K to $145K) and grew to 17% of all graduates.
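The reason measures of central tendency mislead here can be shown with a stylized version of the 2006 distribution. The two modes and their shares (44% and 17%) follow NALP's reported figures; the intermediate counts are invented for illustration:

```python
# Stylized 2006 salary data: 44 graduates near the $50K mode, 17 near the
# $140K mode, and invented counts in between.  Illustrative only.
salaries = [50_000] * 44 + [75_000] * 20 + [105_000] * 19 + [140_000] * 17

mean = sum(salaries) / len(salaries)
median = sorted(salaries)[len(salaries) // 2]

print(f"mean   = ${mean:,.0f}")    # falls between the two modes
print(f"median = ${median:,.0f}")  # likewise in the sparse middle

# No graduate in this sample earns within $5K of the mean: the summary
# statistics describe a salary almost nobody actually receives.
near_mean = sum(1 for s in salaries if abs(s - mean) < 5_000)
```

Both summary figures land in the trough between the two humps, which is exactly why the average or median starting salary is an unreliable guide to any individual graduate's earning power.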
Over the next couple of weeks I plan to blog about some new research, with a particular focus on unusual techniques or interesting research questions. Several of the research projects I will discuss came to my attention through the Law and Society Conference in Montreal at the end of May.
One interesting presentation was a paper entitled Hustle and Flow: A Social Network Analysis of the American Federal Judiciary by Daniel M. Katz and Derek K. Stafford, both political science graduate students at the University of Michigan. Katz and Stafford are interested in the social structure of the judiciary and, more controversially, whether that structure has an effect on doctrine or case outcomes. Different from the prevailing models of judicial behavior (attitudinalism, legalism, or the strategic model), their hypothesis is that judges -- at least sometimes -- are influenced by "peer effects," not just by their own political views or by legal sources. (Such social pressures could be a partial explanation for the panel and circuit effects documented on appellate courts. See, for example, Kastellec, Jonathan P., "Panel Composition and Voting on the U.S. Courts of Appeals Over Time" (May 14, 2008), available at SSRN: http://ssrn.com/abstract=1012111, and Kim, Pauline T., "Deliberation and Strategy on the United States Courts of Appeals: An Empirical Exploration of Panel Effects" (March 31, 2008), available at SSRN: http://ssrn.com/abstract=1115357.)
In order to study these social or peer effects, they must find a way to describe the social networks of the judiciary. As an initial attempt to do so, Katz and Stafford employ new network analysis techniques to measure the paths that law clerks take between judges. In other words, they chart the likelihood that a law clerk for, say, Judge Kozinski will later clerk for Justice Kennedy, or that a law clerk for a particular district court judge will go on to clerk for a particular appellate court judge. Their study "visualizes law clerk traffic" during the last Rehnquist natural court and produces some interesting representations of the relationships between judges. (They identified about 900 clerks who moved from one judge to another during this period.)
The graphic depiction of these law clerk moves is one of the more interesting aspects of the study. In one depiction, Supreme Court justices are, not surprisingly, clustered in the middle, but -- more surprisingly -- district court judges are "suffused throughout the network," not relegated to the periphery. This is one representation, Katz and Stafford suggest, of the fact that judges with equivalent institutional authority in fact have different levels of influence. Below is one such figure from the paper -- the Kamada Kawai Energized Network. (Yellow nodes are Supreme Court justices; green are appellate court; and blue are district court judges.)
It will be interesting to hear more from these authors as they apply their methods to other aspects of the judiciary.
There is an interesting story in the Sunday New York Times about a recent study by George Korniotis (Fed Bd of Governors) and Alok Kumar (Texas Business), entitled "Long Georgia, Short Colorado: The Geography of Return Predictability" (on SSRN here).
In a nutshell, the article documents that individual investors are overinvested in businesses located close to home. Thus, when state or regional economic conditions deteriorate relative to the country as a whole, the sale of stocks--to maintain one's standard of living--has a predictable, disproportionate effect on the stock prices of companies headquartered nearby. There is also some interesting methodology in this paper, at least for those of us interested in testing the effects of geography on business or law. Here is the article's abstract:
This paper shows that returns of U.S. state portfolios are predictable. In the presence of local bias and incomplete risk sharing, consumption smoothing motives of local investors generate predictable patterns in the returns of local stocks. Specifically, local investors require higher average future returns to hold risky local stocks when local economic conditions worsen and they face stronger borrowing constraints. The state portfolio returns are predictable both in the short-run (one quarter ahead) and the long-run (up to 24 quarters ahead). The predictability is stronger among less visible firms and in regions where investors exhibit stronger local bias and hold more concentrated portfolios. Trading strategies that exploit the state-level predictability earn annual risk adjusted returns of around 6-8%. Overall, these results indicate that the stock return generating process contains a predictable local component.
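To make the abstract's core test concrete, here is a toy version of a one-quarter-ahead predictive regression in Python. The numbers are invented (the returns are constructed to move opposite to lagged conditions, mimicking the paper's predicted sign), and the specification is heavily simplified relative to the paper; the point is only the mechanics: regress next quarter's state-portfolio return on this quarter's local economic conditions and examine the slope.

```python
# Toy predictive regression: does a measure of local economic conditions
# in quarter t predict the state portfolio's return in quarter t+1?
# All series are invented for illustration; real data would be noisy.
conditions = [0.5, -1.2, 0.3, 1.1, -0.7, 0.9, -1.5, 0.2]
returns = [0.01, -0.01, 0.024, -0.006, -0.022, 0.014, -0.018, 0.030]

# Align the predictor at quarter t with the return at quarter t+1.
x = conditions[:-1]
y = returns[1:]

# Ordinary least squares slope and intercept, computed by hand.
n = len(x)
mx, my = sum(x) / n, sum(y) / n
beta = sum((a - mx) * (b - my) for a, b in zip(x, y)) / \
       sum((a - mx) ** 2 for a in x)
alpha = my - beta * mx

# A negative slope means worse local conditions today are followed by
# higher local-stock returns next quarter -- the paper's consumption-
# smoothing story about required returns.
print(f"alpha = {alpha:.4f}, beta = {beta:.4f}")
```

With real data one would of course also compute standard errors and run the regression at longer horizons (the paper goes out to 24 quarters), but the alignment of the predictor at t with the return at t+1 is the essential move.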
Last fall I posted charts that showed the same data ordered by the 2007 U.S. News rankings. I took a lot of heat in the comments from readers who wanted to defend these liberal transfer policies--many of them people who had benefited from the ability to trade up. This system of liberal transfers has other defenders as well. For example, professors at higher-ranked schools often justify these policies by arguing that they open doors for hardworking students who have proven themselves. My critique, however, was based on a macro-level social wealth perspective--i.e., do the benefits of liberal transfer policies outweigh the costs? Here are some facts:
More transfers. Between 1997 and 2004, non-academic 1L attrition, which is the ABA-LSAC Official Guide category that would include transfers, has increased at statistically significant levels. See Morriss & Henderson, Measuring Outcomes, Fig. 2 and accompanying text. I would posit that this coincides with a rankings payoff of higher entering credentials, not a rethinking of admissions policies driven by principles of equity and opportunity. We should not confuse colorable justifications, which lawyers are expert at, with underlying motivations of self-interest.
Weaker student engagement. According to LSSSE data, transfer students were less likely than other students to:
- Perceive their relationships with other students to be as positive as students who did not transfer.
- Work with other students outside of class to complete an assignment.
- Have serious conversations with students who are different from themselves.
- Discuss ideas from reading or assignments with others outside of class.
- Work on a paper or project that required integrating ideas.
- Participate in cocurricular activities.
Pp. 14-15. The report continues: "These findings underscore that many of the strongest student relationships are formed during the first year of law school before transfer students join the campus community." (p. 15). Note that these findings describe group results. Please spare me anecdotes in the comments about well-adjusted transfer students. We cannot generalize, and thus make policy, from individual data points. We need a representative sample, which LSSSE provides.
Harms to Curricular Innovation. If the best students from Tier 3 and 4 schools are siphoned off by Tiers 1 and 2, lower-ranked schools have one hell of a time demonstrating the value of their curricular innovations. And these innovations often require immense commitments of time and money. The substantive value may be there, but the school will not get an adequate return on its investment. So incentives to innovate are diluted.
So my point is very simple: Instead of shrinking the 1L class and taking more transfer students, a better system would admit more 1L students using whole person review (the policy that all of the legal academy allegedly embraced in the Grutter litigation) rather than a cynical numbers-driven approach designed to maximize entering credentials for U.S. News purposes. I realize that some people would be made worse off under this system (and these people tend to know who they are, so they are noisy). In contrast, the people who would benefit are largely nameless and invisible. But this approach eliminates the educational costs of excessive transfers and preserves incentives for healthy institutional competition based on curriculum and teaching. From a social wealth perspective, which is better?
At the end of the day, the increase in transfers represents a giant collective action problem. But it could be solved if U.S. News reworked its methodology. The putative regulator here is the ABA Section of Legal Education and Admissions to the Bar. Are they even paying attention? I often wonder. After the jump are the charts that show net transfers by U.S. News rank. It is very ugly.
I recently ran across this audio file of a talk given at Harvard Law School in 1995 by Charles Munger, entitled "Causes of Human Misjudgment." Munger is a Harvard Law alum and a founding partner of the L.A. law firm of Munger, Tolles & Olson. He later left the firm to run an investment fund. In the mid-1970s, he joined Berkshire Hathaway to serve as Vice-Chairman with Warren Buffett.
Make no mistake: Munger is explicitly talking about the intersection of economics and psychology, acknowledging that the nascent (at the time) field of behavioral economics was on the right path and citing the insights of Daniel Kahneman and Amos Tversky (seven years before Kahneman won the Nobel Prize).
I am particularly interested in applied behavioral economics. And here I don't mean writing papers on the topic; rather, I mean honing my own decisionmaking processes to eliminate bias and susceptibility to manipulation. For this purpose, I doubt I will ever find a better resource. A written version of this talk, including an interesting preface, is online here. It has quite a few things to say about academia and the pervasive problem of the "truffle hound." (You will have to read/listen to the links to figure out the truffle hound reference.)
Finally, Munger's remarks inspired Paul Brest (Hewlett Foundation, former dean at Stanford) and Linda Hamilton Krieger (UC Berkeley) to create a law school course on professional judgment and decisionmaking at Stanford Law. The course began in the early 1990s and continues to this day (materials are online at www.professionaljudgment.org). Several prominent behavioral law & econ scholars took this course while students at Stanford, including Russell Korobkin (UCLA), Jeff Rachlinski (Cornell), and Chris Guthrie (Vanderbilt). In case there was any doubt, cornerstones of the course appear to be empiricism, applied probability theory, and due attention to disconfirmatory evidence in the face of cognitive biases. Great stuff.
For those interested in the continued (and sometimes difficult) relationship between science and the law, I encourage you to check out my Vermont Law School colleague Craig Pease's regular "Science and the Law" column in The Environmental Forum. Check out the latest edition entitled The Absurdity of Individual Harm.
Another recent BJS report, Felony Defendants in Large Urban Counties, 2004, analyzes data collected from a representative sample of felony cases filed in the nation's 75 largest counties during May 2004. Murder cases were tracked for up to 2 years and all other cases for 1 year to provide an overview of the processing of felony defendants from case filing to disposition and sentencing. Data highlight the demographic characteristics of felony defendants and types of arrest charges. The report also includes in-depth information on the criminal record of felony defendants, including criminal justice status at the time of arrest and the number and type of prior arrests and convictions. It describes conditions of pretrial release (bail amounts, type of release bonds, and pretrial misconduct), adjudication outcomes (dismissal, diversion, guilty plea, trial conviction rates), and sentencing data for convicted felony defendants. Notable findings include:
- Two-thirds of felony defendants were charged with a drug or property offense.
- More than three-fourths of felony defendants had a prior arrest history, with 53% having at least five prior arrest charges.
- Fifty-seven percent of felony defendants received pretrial release prior to adjudication.