I've been spending a lot of time recently thinking about the "great divide" between legal academics and political scientists. Even today, with many avenues for communication between these fields -- like this very blog -- there is less cross-field engagement than one might expect. Certainly, there are many in both fields who read and talk across the divide, and I'd expect that many readers of this blog are among them. But there remains the puzzle of why this is the exception and not the rule.
One possible reason has to do with substantial differences in the norms of scholarship in the two fields. Articles in the social sciences tend to be much shorter than law review articles, and more narrowly focused. They are also more likely to involve quantitative empirical analysis and to emphasize methodology. Law review articles, on the other hand, are notoriously long, but a single piece can explore a variety of theories and themes and is often focused on the broader implications of legal opinions, events, or analysis. As a result of these differences -- and leaving aside criticisms of the underlying content of the scholarship -- I think that legal academics often find social science pieces either opaque or -- due to their narrow focus -- uninteresting. On the flip side, of course, social scientists often find law review articles tedious and imprecise.
The increasing interest in empirical legal scholarship in the legal academy may help temper some of these tendencies, but there is a long way to go. Scholars in both legal academia and the social sciences could go much further in trying to make their work understandable to people outside their own field. As but one example, I'll hold out the recent work of Thomas G. Hansford & James F. Spriggs II. Their book, The Politics of Precedent on the U.S. Supreme Court, explores the ways that justices use precedent. The book is ambitious and quite interesting. Its argument and conclusions, however, are not written in ways designed to get the attention of legal scholars who are not themselves engaged in empirical work, despite the fact that at least some of those conclusions may be interesting to qualitative scholars of the Supreme Court. In a thumbnail review in the Law Library Journal, for example, the reviewer dismissed the book as "better suited for a political science class than a legal audience..." The book itself contains a lot of sophisticated statistical analysis that is not translated for those who might not be familiar or comfortable with that methodology, and it assumes familiarity with political science norms and resources (the Spaeth Supreme Court databases, for example). All of this is unfortunate because it makes it less likely that the work will be read (much less understood) by most legal scholars. And to reiterate the point I emphasized above -- this is especially unfortunate to the extent that Hansford and Spriggs's findings would be interesting to the many law professors who are not themselves empirical legal scholars, but who study the Supreme Court from other perspectives.
So what to do? More co-authorship would be helpful. So would more self-consciousness about the ways in which the norms of one field may exclude or deter readers from another. I'd be particularly interested in seeing different presentations of the same work in different settings. Hansford and Spriggs, for example, might consider a law review article that presents their work to a legal academic audience. (Of course, they may already be doing this.) With apologies to Hansford and Spriggs, I hope that this blog post helps nudge social scientists and legal academics alike towards learning to speak a bit of each other's languages.
For anyone interested, a tentative CELS conference schedule is available at the CELS Conference website (this year hosted at NYU and found here). Individual paper discussant assignments will be made, and posted, promptly.
Over at Conglomerate, David Zaring (formerly a W&L law professor, now at Wharton) expands upon Greg Mankiw's observations on the sociology of economics, including musing on why economists are (or come off as) so great. Here is an excerpt:
[T]here’s no question that economists are well trained social scientists, with plenty of math and a confident professional ethos. Economists are also good in workshops – and I occasionally think that the old law professor credo of “have a theory about anything and speak in full paragraphs rich with prosody” is less popular in the interdisciplinary seminar rooms of our universities now that economists are coming too, and biting into the confounders and omitted variables. ...
Economists may crush all comers in seminars, Mankiw thinks, because "economics may attract people with a particular set of personality attributes, and perhaps these attributes are not the same set of attributes you might choose for your next dinner party."
Mankiw's original post also contains this nugget:
[T]he set of advocates who are economists is quite small (I don't know if this reflects treatment or selection). In general, economists are more likely to make up their minds about whether a particular policy works based on theory or data. They may have priors, but not the sort of "do-gooder" priors that advocates have. One of the reasons that economists are so aggressive with the non-economists is that we want to expose all the priors immediately.
I love this statement, at least as an aspiration. Of course, as noted in one recent critique of economists (see "Economics is a 'Triumph of Theory Over Fact'"), it is also possible to be so in love with your theory that you dismiss any need to test its most basic assumptions.
Today's WSJ Op-Ed page includes an essay (regrettably, a direct link is not possible without a subscription) that relates to a recent ELS Blog Forum featuring work by Richard Sander (UCLA) and responses to that work by Richard Lempert (Michigan).
This year, I was fortunate to be an organizer of the 6th Annual ISBA Solo & Small Firm Conference. In wrapping up the conference and preparing for next year, one of our first tasks was to review the speaker evaluations generated by a SurveyMonkey.com questionnaire--and note, I was one of the speakers.
Although there is a controversy within the academy over whether teacher evaluations can be trusted, my colleagues at the ISBA had no problem using these scores to make future programming decisions. Note that the organizers attended a large proportion of the sessions (and a few of us were also presenters); the evaluations seemed to confirm our own impressions of speaker quality. There were no surprises. (Disclosure: my own evaluations were good but not spectacular.)
I got the impression that the ISBA approached the situation in the same way as any business trying to improve its product: Each speaker got a copy of his or her scores plus excerpts from the narrative comments; many will be invited back, but a few will not. Frankly, after reflecting on this experience, I think some of the academic debates on the value of student evaluations (see, e.g., this bibliography) would be ridiculed by practicing lawyers who are used to delivering value or losing a client. Virtually all lawyers involved in the conference would agree that the consensus view of their colleagues is what matters. This is a very pragmatic approach that is hard to dismiss.
If the judgment of lawyers can be trusted, what about law students? In an earlier post, I defended the validity of law school teaching evaluations. A recent article by Deborah Jones Merritt, "Bias, the Brain, and Student Evaluations," has the right idea: Refine the teacher evaluation process and improve its validity. But don't make the leap that the quality of legal instruction cannot be measured.
This past May I posted about a $51 million gift to the Marquette University Law School toward construction of a new facility. Now, while the ELS Blog rarely posts off-topic news, this additional news promises to be another important moment for the Marquette University Law School.
Joseph J. Zilber, Milwaukee philanthropist, real estate developer, and Chairman of the Board of Zilber Ltd., a real estate holding company, announced a $30 million gift to the Marquette University Law School. The University press release is here and here. Mr. Zilber, who graduated from the Marquette University Law School in 1941, has directed that $5 million of the gift be used to support construction of the Law School and that $25 million be devoted to law student scholarships.
In something of a throwaway line in a recent post describing the birth of a new blog over at PrawfsBlawg, Ethan Leib remarks: "... what academic bloggers can do well: actual commentary on scholarship."
What do others think about Ethan's proposition? The ELS Blog editors continuously re-think the core premises underlying this blog and how it can (I hope) continue to add value for readers. Aside from perhaps a handful of rare (and clearly warranted) departures, this blog has steadfastly remained moored to all things germane to empirical legal scholarship, broadly defined. (And in the interest of full disclosure, this is a position I advocated at this blog's inception and continue to support.) However, other academic blogs--indeed, many other blogs, including some widely-read law blogs--conspicuously adopt a far wider stance and frequently delve into such areas as pop culture, personal narrative, political commentary, gossip, and photography. To be clear, there is no "right" answer to my general query, only perspectives.
Jeff Yates and Andrew Whitford (both of the University of Georgia) have started a new blog entitled Voir Dire, which should be of substantial interest to our readers as it has a law and social sciences focus. Jeff and Andrew do great work, so I am anxious to read the blog as it develops. I have already bookmarked it. Jeff described the mission of the blog in his first post:
Voir Dire - "to speak the truth." VDB covers topics such as social science approaches to law and legal institutions, legal doctrine and legal policy implementation, and professional issues for academics. On occasion we dabble in the areas of pop culture, politics, and social issues, but for the most part we are not interested in becoming a pundit blog. VDB is designed as an online forum for the exchange of information on our core topics and on research and teaching generally. Our aim is to advance discourse on these topics and highlight research and academic news that we find interesting.
UPDATE: I forgot to post a link in my initial post. The blog can be viewed here.
On the heels of our recently concluded Nance-Steinberg Article Selection Forum (Paul Caron aggregated all the posts here), I ran across this article by Robert Jarvis and Phyllis Coleman (Nova Southeastern Law), "Ranking Law Reviews by Author Prominence--Ten Years Later." It just appeared in the Law Library Journal.
The authors published a similar study ten years ago, which ranked general interest student-edited law journals based on the prominence of their contributors (i.e., "drawing power") during the 1991 to 1995 period. The new study is a replication based on 2001 to 2005 data (7,573 discrete authors).
The methodology includes an unusual--and, no doubt some will argue, arbitrary--scale of author prominence. For example, if the President of the United States publishes remarks in your law review, it is worth 1,000 points. Here are other examples:
850, U.S. Senator
750, State Governor
725, U.S. Circuit Court Judge
625, Law professor at USN Top 25
525, Partner NLJ 250 firm or GC at Fortune 500 company
475, Law professor at USN Top 50
275, Law professor at USN Tier 3
250, Mayor or equivalent
225, Law professor at USN Tier 4
125, Community college professor
75, JD Student
Boy, am I glad I did not have to draw those lines! No surprise that the journals at the top are the Yale Law Journal (score = 553, which is remarkable since it includes student notes), Harvard Law Review (551), Columbia Law Review (543), etc.
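This post doesn't spell out exactly how the point values roll up into a journal's score, so purely as an illustration, here is a minimal sketch (in Python) assuming the score is simply the average of the prominence points of everyone a journal published during the window. The point values come from the scale above; the function name, category keys, and sample data are all hypothetical.

```python
# Hypothetical sketch of a "drawing power" score, assuming it is the
# mean of the prominence points of a journal's authors. Point values
# are from the Jarvis-Coleman scale above; the example data is invented.

PROMINENCE_POINTS = {
    "us_president": 1000,
    "us_senator": 850,
    "state_governor": 750,
    "circuit_judge": 725,
    "prof_usn_top25": 625,
    "nlj250_partner_or_f500_gc": 525,
    "prof_usn_top50": 475,
    "prof_usn_tier3": 275,
    "mayor": 250,
    "prof_usn_tier4": 225,
    "community_college_prof": 125,
    "jd_student": 75,
}

def journal_score(author_categories: list[str]) -> float:
    """Average prominence points across all authors a journal published."""
    points = [PROMINENCE_POINTS[c] for c in author_categories]
    return sum(points) / len(points)

# Invented example: two Top 25 professors, a circuit judge, and a student note.
print(journal_score(["prof_usn_top25", "prof_usn_top25",
                     "circuit_judge", "jd_student"]))  # -> 512.5
```

Under this (assumed) averaging scheme, you can see why student notes drag a score down, which is what makes Yale's 553 so striking.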
What is surprising--at least to me--is the severe dropoff as one moves down the hierarchy. For example, Houston Law Review is ranked #50 with a score of 337; #100 Penn State Law Review = 239; #150 Gonzaga Law Review = 196; #171 (dead last) Western State Univ. Law Review = 141. I hope the deans of all the new law schools consider these data before agreeing to subsidize yet another law journal.
On behalf of the ELS Blog, I would like to thank Dylan Steinberg and Jason Nance for agreeing to let us critique their work on the Internet for all the academy (and world) to see. One of the reasons I pursued this forum is that the Nance-Steinberg study focused on a very provocative, easy-to-understand topic and used a less common methodology than standard multivariate regression. I hope Dylan's and Jason's innovative work and our subsequent discussion stretched the curiosity and ambitions of our readers.
One of the principal aims of our study was to begin a conversation about the law review selection process and provide some empirical data to make that conversation more meaningful. We are extremely pleased that, to some extent, that conversation has taken place over the last two days. We hope it will continue. Our study certainly raised many more questions than it answered.
I also would like to extend a sincere "thank you" to Bill Henderson for allowing us to participate in this online forum, and to those who have posted comments on our work and its implications. Those comments have been extremely helpful and have given us much to think about as we continue to rework our manuscript.
Jason and I haven't had a chance to coordinate on this, so rather than trying to speak for both of us, I'll offer my thoughts and he can chime in with his own if he is so inclined.
First, I'd like to thank everyone for their thoughtful comments and critiques of our paper. You've given us a lot to think about and the next draft of the paper will be substantially better because of your input.
Second, there's been a lot of talk over the last two days about ways in which our data might be refined and improved. There is clearly a great deal of useful work that could be done in this area, and I hope that our study will be just the beginning of a better-informed discussion of the student-edited law review and how it fits into the overall schema of legal scholarship. I, for one, would be particularly interested to see some empirical data, expanding on Christine's "armchair empiricism," that examines what law reviews actually publish. Somewhere between what editors say they consider and what they actually publish lies the truth about how these decisions get made.
Finally, I want to thank Bill Henderson for seeking us out and organizing this forum. It is, of course, always nice to have someone come out of the blue and express interest in your work. But the forum has also been useful for us and, I hope, for everyone else as well.
One of the most interesting findings of Nance and Steinberg's paper is the weight that student law review editors give to an author's credentials in deciding whether to publish an article. However, I suspect that the effect of the author's credentials is more complicated than suggested by their paper.
I've heard from some editors at good, although not top tier, law reviews that they generally will not extend publication offers to very elite authors because such authors are very unlikely to accept an offer of publication at their law reviews. This is reasonable behavior by the editors; the law review staff's time is better spent examining articles that they have a fair chance of actually publishing. This behavior also parallels that of authors who don't bother to submit some of their articles to top law reviews with atypical submission requirements, because complying with the requirements isn't worth the effort in light of the very low probability that the law review will accept the paper. Thus, at least for non-elite law reviews, the relationship between an author's credentials and the probability of acceptance may not even be monotonic, much less linear: higher credentials may improve the chance of acceptance up to a point, but past that point they may actually reduce it.
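If one had article-level data, a simple way to probe for that kind of inverted-U pattern would be a logistic regression with a squared credentials term; a negative coefficient on the squared term would be consistent with the story above. Here is a minimal sketch with simulated data and invented variable names -- an illustration of the technique, not anyone's actual model:

```python
# Hypothetical sketch: testing for a non-monotonic (inverted-U) effect
# of author credentials on acceptance. All data below is simulated.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 1000
credentials = rng.uniform(0, 10, n)  # invented credential index
# Simulate acceptance odds that peak at mid-range credentials.
log_odds = -3 + 1.2 * credentials - 0.12 * credentials ** 2
accepted = rng.binomial(1, 1 / (1 + np.exp(-log_odds)))
df = pd.DataFrame({"accepted": accepted, "credentials": credentials})

# A negative, significant coefficient on I(credentials ** 2) is
# consistent with acceptance rising and then falling in credentials.
model = smf.logit("accepted ~ credentials + I(credentials ** 2)", data=df).fit()
print(model.params)
```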
Another area of inquiry that would have been especially interesting for the readers of this blog is whether law reviews use different criteria in reviewing empirical articles than they do for other types of articles. I have heard (again only anecdotally) that student-edited law reviews are increasingly interested in publishing empirical research. However, law students are probably less able to judge the quality of empirical work -- especially that involving statistical analyses -- than the quality of other articles. Thus, assuming law review editors are aware of their own limitations, they may be even more likely to rely on the author's credentials in deciding whether to publish empirical articles.
First, thanks to Bill and everyone for letting an empirical neophyte trespass on this blog for a while. As a typical junior professor, I am a regular submitter to law reviews and have spent untold hours talking to colleagues about the submission process. However, lately these discussions have grown tiresome: as arbitrary and random as the process seems to be, unrelated anecdotes only make it seem more random instead of revealing some pattern of order. No one person's "N" is ever going to be big enough for that person to say with any authority, "The best way to get your article accepted at top journals is to do X." Instead, these conversations always seem to dwindle to the point where Professor A says that doing one thing is important, but then Professor B counters that Professor A must be wrong because Professor B never does that one thing and always places well. And so on. So, I am pleased as punch to be moving from war stories to data. If nothing else, the student-run law review system produced the two authors of this study and perhaps honed their writing skills to the point that they were able to advance knowledge in this way. (I would like to hear about their own placement story, though, now that they are on the other side!)
However, as everyone else seems eager to point out, survey responses aren't the sort of data we need to conclusively dispel or confirm these anecdotes. (As an alumnus of the Lee Epstein/Andrew Martin Conducting Empirical Legal Scholarship workshops, I remember one of them saying that the best way to introduce bias into your project is to do a survey!) I won't belabor the point, but I think we all know that law review editors are smart enough to know which answers they should pick. (However, a survey of former Articles Editors might produce less self-conscious responses.) I think it's interesting that the "Tier 1" editors assigned lower importance scores to certain categories regarding author prestige and article topic, but those factors were still the most important in rank order. And, as the authors note, when asked virtually the same question in a different form regarding an author's place in the world, editors answered differently depending on the wording. They don't care about how "notable" an author is, but they do care whether the author is "highly influential in her respective field."
So, we know now that articles editors, even though they understand that they shouldn't say it's super important, admit that characteristics of the author are pretty much as important as characteristics of the work. So, that seems to jibe with everyone's worst fears and conventional wisdom. And I think the objective data would support that. A few weeks ago, I updated some armchair empiricism on which authors get published in the Harvard Law Review (not a respondent to the survey). While my interest was in the gender breakdown of the authors published, I hasten to add that any casual legal scholar would recognize most of the names of those whose work was published in any given volume. I would suspect the same would be true of at least the top 5 journals.
So, what can I possibly add to this forum? Here are two small bits. First, I think the "hot topic" factor here is not treated in a precise way. Although the authors suggest that at least most journals do not look for hot topics, the factor was phrased as "The topic is one about which many articles are currently being written." Well, that sentence does not cry out for agreement, and one can think of many other ways in which the same idea could be worded. In fact, two factors that were ranked highly ("The article fills a gap in the literature" and "The topic would interest the general public") also describe hot topics. One can imagine that a legal topic currently in the news would spawn articles that both fill a (new) gap and interest the general public. Because of its timeliness, such a topic may become the subject of many law review articles, but it's hard for an articles editor to know that at the beginning of the trend.
Second, one interesting aspect of the law review selection process is the agency problem. Editors are choosing articles that they will work on, perhaps personally. While the authors assume that these individual editors choose articles that will gain their law reviews citations and attention, in reality the editors will have moved on before this attention happens. So, editors may be more interested in choosing topics that they themselves find interesting, and possibly in choosing authors whom they would like to get to know. I would be interested to know if any articles editors thought about the relationship between themselves and their authors. I definitely remember the authors that I worked with. And I could definitely tell when I was getting an offer from an articles editor who really liked my topic.