Here. Three datasets that caught my eye:
- 2956 [Update] Immigrants Admitted to the United States, 1998
Posted by Bill Henderson on 31 August 2007 at 07:49 AM | Permalink | Comments (0)
I've been spending a lot of time recently thinking about the "great divide" between legal academics and political scientists. Even today, with many avenues for communication between these fields -- like this very blog -- there is less communication than one might expect. Certainly, there are many in both fields who read and talk across the divide, and I'd expect that many readers of this blog are among those. But there remains the puzzle of why this is the exception and not the rule.
One possible reason has to do with substantial differences in the norms of scholarship in the different fields. Articles in the social sciences in general tend to be much shorter than law review articles, and more narrowly focused. They are also more likely to involve quantitative empirical analysis and to emphasize methodology. Law review articles, on the other hand, are notoriously long, but a single piece can explore a variety of theories and themes and is often focused on the broader implications of legal opinions, events, or analysis. As a result of these differences -- and leaving aside criticisms of the underlying content of the scholarship -- I think that legal academics often find social science pieces either opaque or -- due to their narrow focus -- uninteresting. On the flip side, of course, social scientists often find law review articles tedious and imprecise.
The increasing interest in empirical legal scholarship in the legal academy may help temper some of these tendencies, but there is a long way to go. Scholars in both legal academia and in the social sciences could go much further in trying to make their work understandable to people outside their own field. As but one example, I'll hold out the recent work of Thomas G. Hansford & James F. Spriggs II. Their book, The Politics of Precedent on the U.S. Supreme Court, explores the ways that justices use precedent. The book is ambitious and quite interesting. Its argument and conclusions, however, are not written in ways designed to get the attention of legal scholars who are not themselves engaged in empirical work, despite the fact that at least some of those conclusions may be interesting to qualitative scholars of the Supreme Court. In a thumbnail review in the Law Library Journal, for example, the reviewer dismissed the book as "better suited for a political science class than a legal audience..." The book itself contains a lot of sophisticated statistical analysis that is not translated for those who might not be familiar or comfortable with that methodology, and it assumes familiarity with political science norms and resources (the Spaeth Supreme Court databases, for example). All of this is unfortunate because it makes it less likely that the work will be read (much less understood) by most legal scholars. And to reiterate the point of my italics above -- this is especially unfortunate to the extent that Hansford and Spriggs's findings would be interesting to the many law professors who are not themselves empirical legal scholars, but who study the Supreme Court from other perspectives.
So what to do? More co-authorship would be helpful. More self-consciousness about the ways in which the norms of one field may exclude or deter readers from another. I'd be particularly interested in seeing different presentations of the same work in different settings. Hansford and Spriggs, for example, might consider a law review article that presents their work to a legal academic audience. (Of course, they may already be doing this.) With apologies to Hansford and Spriggs, I hope that this blog post helps nudge social scientists and legal academics alike towards learning to speak a bit of each other's languages.
Posted by Carolyn Shapiro on 28 August 2007 at 04:11 PM in Scholarship | Permalink | Comments (2)
For anyone interested, a tentative CELS conference schedule is available at the CELS Conference website (this year hosted at NYU and found here). Individual paper discussant assignments will be made, and posted, promptly.
Posted by Michael Heise on 28 August 2007 at 03:20 PM in Conferences | Permalink | Comments (0) | TrackBack (0)
Over at Conglomerate, David Zaring (formerly a W&L law professor, now at Wharton) expands upon Greg Mankiw's observations on the sociology of economics, including musing on why economists are (or come off as) so great. Here is an excerpt:
[T]here’s no question that economists are well trained social scientists, with plenty of math and a confident professional ethos. Economists are also good in workshops – and I occasionally think that the old law professor credo of “have a theory about anything and speak in full paragraphs rich with prosody” is less popular in the interdisciplinary seminar rooms of our universities now that economists are coming too, and biting into the confounders and omitted variables. ...
Economists may crush all comers in seminars, Mankiw thinks, because "economics may attract people with a particular set of personality attributes, and perhaps these attributes are not the same set of attributes you might choose for your next dinner party."
Mankiw's original post also contains this nugget:
[T]he set of advocates who are economists is quite small (I don't know if this reflects treatment or selection). In general, economists are more likely to make up their minds about whether a particular policy works based on theory or data. They may have priors, but not the sort of "do-gooder" priors that advocates have. One of the reasons that economists are so aggressive with the non-economists is that we want to expose all the priors immediately.
I love this statement, at least as an aspiration. Of course, as noted in one recent critique of economists (see "Economics is a 'Triumph of Theory Over Fact'"), it is also possible to be so in love with your theory that you dismiss any need to test its most basic assumptions.
Posted by Bill Henderson on 27 August 2007 at 06:42 AM | Permalink | Comments (2)
Today's WSJ Op-Ed page includes an essay (regrettably, a direct link is not possible w/out a subscription) that relates to a recent ELS Blog Forum featuring work by Richard Sander (UCLA) and responses to that work by Richard Lempert (Michigan).
Posted by Michael Heise on 24 August 2007 at 07:53 AM in Current Affairs, Guest Bloggers | Permalink | Comments (1) | TrackBack (0)
This year, I was fortunate to be an organizer of the 6th Annual ISBA Solo & Small Firm Conference. In wrapping up the conference and preparing for next year, one of our first tasks was to review the speaker evaluations generated by a SurveyMonkey.com questionnaire--and note, I was one of the speakers.
Although there is a controversy within the academy over whether teacher evaluations can be trusted, my colleagues at the ISBA had no problem using these scores to make future programming decisions. Note that the organizers attended a large proportion of the sessions (and a few of us were also presenters); the evaluations seemed to confirm our own impressions of speaker quality. There were no surprises. (Disclosure: my own evaluations were good but not spectacular.)
I got the impression that the ISBA approached the situation in the same way as any business trying to improve its product: Each speaker got a copy of his or her scores plus excerpts from the narrative comments; many will be invited back, but a few will not. Frankly, after reflecting on this experience, I think some of the academic debates on the value of student evaluations (see, e.g., this bibliography) would be ridiculed by practicing lawyers who are used to delivering value or losing a client. Virtually all lawyers involved in the conference would agree that the consensus view of their colleagues is what matters. This is a very pragmatic approach that is hard to dismiss.
If the judgment of lawyers can be trusted, what about law students? In an earlier post, I defended the validity of law school teaching evaluations. A recent article by Deborah Jones Merritt, "Bias, the Brain, and Student Evaluations," has the right idea: Refine the teacher evaluation process and improve its validity. But don't make the leap that the quality of legal instruction cannot be measured.
Posted by Bill Henderson on 21 August 2007 at 02:06 PM | Permalink | Comments (2)
This past May I posted about a $51 million gift to the Marquette University Law School toward construction of a new facility. Now, while the ELS Blog rarely posts off-topic news, this additional news promises to be another important moment for the Marquette University Law School.
Joseph J. Zilber, Milwaukee philanthropist, real estate developer and Chairman of the Board of Zilber Ltd., a real estate holding company, announced a $30 million gift to the Marquette University Law School. The University press release is here and here. Mr. Zilber, who graduated from the Marquette University Law School in 1941, has directed that $5 million of the gift be used to support construction of the Law School and that $25 million be associated with law student scholarships.
Posted by Michael Heise on 21 August 2007 at 11:53 AM in Announcements | Permalink | Comments (1)
In something of a throw-away line in a recent post describing the birth of a new blog over at PrawfsBlawg, Ethan Leib remarks: "... what academic bloggers can do well: actual commentary on scholarship".
What do others think about Ethan's proposition? The ELS Blog editors continuously re-think the core premises underneath this blog and how it can (I hope) continue to add value for readers. Aside from perhaps a handful of rare (and clearly warranted) departures, this blog has steadfastly remained moored in all things germane to empirical legal scholarship, broadly defined. (And in the interest of full disclosure, this is a position I advocated at this blog's inception and continue to support.) However, other academic blogs--indeed, many other blogs, including some widely-read law blogs--conspicuously adopt a far wider stance and frequently delve into such areas as pop culture, personal narrative, political commentary, gossip, photography, etc. To be clear, there is no "right" answer to my general query, only perspectives.
Posted by Michael Heise on 21 August 2007 at 08:02 AM in About ELS blog | Permalink | Comments (6) | TrackBack (0)
Jeff Yates and Andrew Whitford (both of the University of Georgia) have started a new blog entitled Voir Dire, which should be of substantial interest to our readers as it has a law and social sciences focus. Jeff and Andrew do great work, so I am anxious to read the blog as it develops. I have already bookmarked it. Jeff described the mission of the blog in his first post:
Voir Dire - “to speak the truth.” VDB covers topics such as social science approaches to law and legal institutions, legal doctrine and legal policy implementation, and profession issues for academics. On occasion we dabble in the areas of pop culture, politics, and social issues, but for the most part we are not interested in becoming a pundit blog. VDB is designed as an online forum for the exchange of information on our core topics and research and teaching generally. Our aim is to advance discourse on these topics and highlight research and academic news that we find interesting.
UPDATE: I forgot to post a link in my initial post. The blog can be viewed here.
Posted by David Stras on 20 August 2007 at 03:46 PM | Permalink | Comments (7)
On the heels of our recently concluded Nance-Steinberg Article Selection Forum (Paul Caron aggregated all the posts here), I ran across this article by Robert Jarvis and Phyllis Coleman (Nova Southeastern Law), "Ranking Law Reviews by Author Prominence--Ten Years Later." It just appeared in the Law Library Journal.
The authors published a similar study ten years ago, which ranked general interest student-edited law journals based on the prominence of their contributors (i.e., "drawing power") during the 1991 to 1995 time period. The new study is a replication based on 2001 to 2005 data (7573 discrete authors).
The methodology includes an unusual--and, no doubt some will argue, arbitrary--scale of author prominence. For example, if the President of the United States publishes remarks in your law review, it is worth 1,000 points. Here are other examples:
Boy, am I glad I did not have to draw those lines! No surprise that journals at the top are Yale Law Journal (score = 553, which is remarkable since it includes student notes), Harvard Law Review (551), Columbia Law Review (543), etc.
What is surprising--at least to me--is the severe dropoff as one moves down the hierarchy. For example, Houston Law Review is ranked #50 with a score of 337; #100 Penn State Law Review = 239; #150 Gonzaga Law Review = 196; #171 (dead last) Western State Univ. Law Review = 141. I hope deans of all the new law schools consider these data before agreeing to subsidize yet another law journal.
Posted by Bill Henderson on 17 August 2007 at 08:37 PM in Articles | Permalink | Comments (5)
On behalf of the ELS Blog, I would like to thank Dylan Steinberg and Jason Nance for agreeing to let us critique their work on the Internet for all the academy (and world) to see. One of the reasons I pursued this forum is because the Nance-Steinberg study focused on a very provocative, easy-to-understand topic and used a less common methodology than standard multivariate regression. I hope Dylan's and Jason's innovative work and our subsequent discussion stretched the curiosity and ambitions of our readers.
Posted by Bill Henderson on 15 August 2007 at 08:31 PM | Permalink | Comments (2)
One of the principal aims of our study was to begin a conversation about the law review selection process and provide some empirical data to make that conversation more meaningful. We are extremely pleased that to some extent that conversation has taken place over the last two days. We hope it will continue. Our study certainly raised many more questions than it answered.
I, also, would like to extend a sincere "thank you" to Bill Henderson for allowing us to participate in this online forum and to those who have posted comments on our work and its implications. Those comments have been extremely helpful and have given us much to think about as we continue to rework our manuscript.
Posted by Jason Nance on 15 August 2007 at 04:16 PM | Permalink | Comments (0)
Jason and I haven't had a chance to coordinate on this, so rather than trying to speak for both of us, I'll offer my thoughts and he can chime in with his own if he is so inclined.
First, I'd like to thank everyone for their thoughtful comments and critiques of our paper. You've given us a lot to think about and the next draft of the paper will be substantially better because of your input.
Second, there's been a lot of talk over the last two days about ways in which our data might be refined and improved. There is clearly a great deal of useful work that could be done in this area and I hope that our study will just be the beginning of a better-informed discussion of the student-edited law review and how it fits into the overall schema of legal scholarship. I, for one, would be particularly interested to see some empirical data, expanding on Christine's "armchair empiricism," that examines what law reviews actually publish. Somewhere between what editors say they consider and what they actually publish lies the truth about how these decisions get made.
Finally, I want to thank Bill Henderson for seeking us out and organizing this forum. It is, of course, always nice to have someone come from out of the blue and express interest in your work. But the forum has also been useful for us and, I hope, for everyone else as well.
Posted by Dylan Steinberg on 15 August 2007 at 03:28 PM in Blog Forum | Permalink | Comments (8)
One of the most interesting findings of Nance and Steinberg's paper is the weight that student law review editors give to an author's credentials in deciding whether to publish an article. However, I suspect that the effect of the author's credentials is more complicated than suggested by their paper.
I've heard from some editors at good, although not top tier, law reviews that they generally will not give publication offers to very elite authors because such authors are very unlikely to accept an offer of publication at their law reviews. This is reasonable behavior by the editors; the law review staff's time is better spent examining articles that they have a fair chance of getting to publish. This behavior also parallels that of authors who don't bother to submit some of their articles to top law reviews with atypical submission requirements because complying with the requirements isn't worth the effort in light of the very low probability that the law review will accept the paper. Thus, at least for non-elite law reviews, the relationship between an author's credentials and the probability of an article being accepted may not be linear. Higher author credentials may improve the chance of having the article accepted up to a point, but past that point, it may actually reduce the likelihood of acceptance.
Another area of inquiry that would have been especially interesting for the readers of this blog is whether law reviews use different criteria in reviewing empirical articles than they do for other types of articles. I have heard (again only anecdotally) that student-edited law reviews are increasingly interested in publishing empirical research. However, law students are probably less able to judge the quality of empirical work -- especially that involving statistical analyses -- than the quality of other articles. Thus, assuming law review editors are aware of their own limitations, they may be even more likely to rely on the author's credentials in deciding whether to publish empirical articles.
Posted by Ahmed Taha on 15 August 2007 at 01:01 PM | Permalink | Comments (1)
First, thanks to Bill and everyone for letting an empirical neophyte trespass on this blog for awhile. As a typical junior professor, I am a regular submitter to law reviews and have spent untold hours talking to colleagues about the submission process. However, lately these discussions have grown tiresome as I realize that as arbitrary and random as the process seems to be, unrelated anecdotes only make the process seem more random instead of creating some pattern of order. No one person's "N" is ever going to be big enough for that person to say with any authority "The best way to get your article accepted at top journals is to do X." However, these conversations seem always to dwindle to the point where Professor A says that doing one thing is important, but then Professor B will counter with the argument that Professor A must be wrong because Professor B never does that one thing and always places well. And so on. So, I am pleased as punch to be moving from war stories to data. If nothing else, the student-run law review system produced the two authors of this study and perhaps honed their writing skills to the point that they were able to advance knowledge in this way. (I would like to hear about their own placement story, though, now that they are on the other side!)
However, as everyone else seems eager to point out, survey responses aren't the sort of data we need to conclusively dispel or confirm these anecdotes. (As an alumnus of the Lee Epstein/Andrew Martin Conducting Empirical Legal Scholarship workshops, I remember one of them saying that the best way to introduce bias into your project is to do a survey!) I won't belabor the point, but I think we all know that law review editors are smart enough to know what answers they should pick. (However, a survey of former Articles Editors might produce less self-conscious responses.) I think it's interesting that the "Tier 1" editors assigned lower importance scores to certain categories regarding author prestige and article topic, but those factors were still the most important in rank order. And, as the authors note, when asked virtually the same question in different form regarding author's place in the world, editors answered differently depending on wording. They don't care about how "notable" an author is, but they care whether the author is "highly influential in her respective field."
So, we know now that articles editors, even though they understand that they shouldn't say it's super important, admit that characteristics of the author are pretty much as important as characteristics of the work. So, that seems to jibe with everyone's worst fears and conventional wisdom. And I think the objective data would support that. A few weeks ago, I updated some armchair empiricism on which authors get published in the Harvard Law Review (not a respondent to the survey). While my interest was in the gender breakdown of the authors published, I hasten to add that any casual legal scholar would recognize most of the names of those whose work was published in any given volume. I would suspect the same would be true of at least the top 5 journals.
So, what can I possibly add to this forum? Here are two small bits. First, I think the "hot topic" factor here is not treated in a precise way. Although the authors suggest that at least most journals do not look for hot topics, the factor was phrased as "The topic is one about which many articles are currently being written." Well, that sentence does not cry out for agreement, and one can think of many other ways in which the same could be worded. In fact, two factors that were ranked highly ("The article fills a gap in the literature" and "The topic would interest the general public") also describe hot topics. One can imagine that a legal topic that is currently in the news would spawn articles that both fill a (new) gap and that would interest the general public. Because of the timeliness, this topic may be the subject of many law review articles, but it's hard for an articles editor to know that at the beginning of the trend.
Second, one interesting aspect about the law review selection process is the agency problem. Editors are choosing articles that they will work on, perhaps personally. While the authors assume that these individual editors choose articles that will gain their law reviews citations and attention, in reality these editors will move on before this attention happens. So, editors may be more interested in choosing topics that they themselves are interested in and possibly choosing authors that they would like to get to know. I would be interested to know if any articles editors thought about the relationship between themselves and their authors. I definitely remember the authors that I worked with. And, I could definitely tell when I was getting an offer from an articles editor who really liked my topic.
Posted by Christine Hurt on 15 August 2007 at 08:40 AM in Blog Forum | Permalink | Comments (5)
Let me start by congratulating Nance and Steinberg for a study that is well done and has already sparked quite a bit of discussion and controversy. I share some of the concerns voiced earlier about various survey biases and some of the methodological choices, but I think overall it is a remarkably strong first effort, and a great "first cut" look at a process that likely needs to be studied from several angles before it can be fully understood. In particular, it is definitely worth thinking about the gap between what people (including law review editors) say they do and what they actually do.
Nevertheless, one of the interesting things about the study's findings is that many of the traditional law professor complaints about law review selection are borne out explicitly in the survey responses. The most obvious are the various responses that highlight where an author works or where an author has previously published, but I think the negative responses to ease of editing and the adequacy of the footnotes are also the sorts of criteria that make law review authors wince.
The generic criticisms I hear from law professors about student-edited law reviews, however, often strike me as somewhat ironic. Law schools are among the most hierarchical and status-obsessed educational institutions in the country.
Bill makes a good list of the strengths of student-edited law reviews, but I'll add two more, somewhat subterranean, ways that student-edited law reviews serve the interests of law faculty. First, there are so many student-edited law reviews that it is not an exaggeration to say that virtually anything a law professor writes that is in English and makes some vague sense can and will be published. This is an enormous comparative advantage for a law faculty member over other disciplines, since a law professor can remain "productive" regardless of whether their work is relevant or even particularly good.
Second, having students edit most of the work means that law professors do not have to. Being a reviewer for a peer-edited journal (let alone being an editor) takes a great deal of time, and is in many ways a relatively thankless task. The fact that student editors do the bulk of this work is a major benefit for law faculties.
Lastly, I think Bill is on the right track when he asks about the purpose of law faculty scholarship. Similarly, it is worth asking about the purposes of having student-edited law reviews at all. I assume the primary institutional purpose is educational: the students learn a lot by reading and editing faculty scholarship, as well as writing their own notes or comments. Keeping this purpose in mind helps explain why student editors might behave as they do, and should give us pause before criticizing too harshly. Given the educational mission of the law reviews and the challenges students face in selecting and editing faculty work, I think they do a good job overall.
Posted by Benjamin Barton on 15 August 2007 at 08:22 AM | Permalink | Comments (5)
One of the unwritten questions raised by the Nance-Steinberg study is whether the institution of the student-edited law review remains the appropriate outlet for the "best" legal scholarship, including interdisciplinary and empirical work.
Nance and Steinberg, for example, cite to Judge Posner's remark that the shift away from doctrinal scholarship has left Articles Editors floundering in a "scholarly enterprise vast reaches of which they could barely comprehend." According to this view, student-edited journals should focus on doctrinal work while the more sophisticated stuff should be reserved for faculty-run journals.
This argument is quite popular in some law professor circles. But I think it raises a more fundamental question: What is the purpose of legal scholarship? Perhaps this is a better way to phrase the question: What is the purpose of scholarship produced by law faculty?
A lot of articles published in prestigious faculty-edited law journals are impenetrable, using formal mathematical modeling that only a specialist could understand. There is no pretense that the journal is engaged in dialogue with a larger legal community. Under this paradigm, scholarly "success" is a remarkably insular conversation among elite academics. Why should a law professor be paid to write articles that graduates of his or her law school cannot understand? What is the justification for the disconnect between the classroom and the faculty-edited law journal?
Although the current student-edited system has drawbacks, it also has some huge advantages:
That said, there is ample room for outlets like the JELS, which has become an important vehicle for setting standards for (intelligible) empirical legal work, including the advancement of methods.
But regardless of publication outlet, we still need to answer the question, "what is the purpose of our scholarship (beyond our own professional advancement)"? If law students cannot evaluate the merits of our articles that bear on the law, perhaps the problem is primarily one of curriculum (and the institutional incentives that perpetuate an antiquated model) rather than publication outlet.
Posted by Bill Henderson on 15 August 2007 at 01:00 AM in Blog Forum | Permalink | Comments (13) | TrackBack (0)
My Ph.D. thesis director once advised me that "if it's worth doing, it's worth doing badly." His point was not to make the perfect the enemy of the good, particularly when conducting truly original research. So it's important to preface any critique of their work by acknowledging that, whatever flaws their study may have, Nance and Steinberg have done a great service to the legal academy by shedding some empirical light on the question of law review publication.
As is the case with any empirical paper, methodological criticisms of their work are among the easiest to offer. One could question their decision to use conventional factor analysis with ordinally-measured response variables (particularly when better techniques exist), or their extensive use of tables when figures typically do a much better job of conveying complex statistical results.
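The concern about conventional factor analysis on ordinal items may be worth a quick illustration. Here is a minimal simulation sketch (my own toy example, not drawn from the Nance-Steinberg data): coarsening two continuous indicators of the same latent factor into 5-point Likert categories attenuates their Pearson correlation, which is one reason techniques built on polychoric correlations are usually preferred for ordinal survey responses.

```python
import numpy as np

# Toy illustration: two continuous indicators of one latent factor.
rng = np.random.default_rng(0)
n = 20000
latent = rng.standard_normal(n)
x = latent + 0.5 * rng.standard_normal(n)
y = latent + 0.5 * rng.standard_normal(n)

def likert(v, cuts=(-1.5, -0.5, 0.5, 1.5)):
    """Coarsen a continuous score into 1..5 ordinal (Likert) categories."""
    return np.digitize(v, cuts) + 1

# Pearson correlation on the continuous scores vs. the coarsened ones.
r_cont = np.corrcoef(x, y)[0, 1]
r_ord = np.corrcoef(likert(x), likert(y))[0, 1]
print(f"continuous r = {r_cont:.2f}, Likert-coarsened r = {r_ord:.2f}")
```

The coarsened correlation comes out smaller than the continuous one, so a factor analysis run on raw Pearson correlations of Likert items will tend to understate the strength of the underlying factor structure.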
My biggest concern, however (and one prefaced by Michael's comment to Bill's first Forum post) is the effect of social desirability bias (hereinafter SDB) on the study's findings. SDB refers to survey respondents' tendency to answer surveys in ways they think are socially (or, here, professionally) desirable or expected of them; it is a well-known and commonly-observed phenomenon in survey research (a recent paper with a list of current references is Streb et al.). I'd contend that the presence and effect of such bias can explain both their intuitive findings as well as some of the more unexpected ones.
Articles editors (AEs) undoubtedly are interested in growing the prestige of their journal, and in minimizing their editorial workload. They are also, however, socialized into the law review culture; they understand that law reviews, as forums for scholarly work, should publish the "best" (most original, creative, important, well-reasoned, persuasive) scholarship they can. As AEs, their professional role is to select such work for publication, and to do so in a way that doesn't systematically disfavor authors or work on the basis of other (putatively irrelevant) criteria. SDB suggests that AEs' survey responses will likely reflect their desire to be seen as conforming to that role.
Consider Nance and Steinberg's rather odd finding that, while "Author Prestige" is among the most influential of their constructs, "Notability of the Author" ranks dead last in the rankings of publication criteria. The phrasing of the authors' 56 "influence" questions is such that none is dispositive; each can influence the publication process without making or breaking a given paper. In contrast, asking AEs to rank order the seven publication criteria forces a zero-sum choice: for one criterion to be ranked higher, another must be ranked lower. That, combined with the relatively small number of items to be ranked and the presence of SDB effects, makes it difficult for an AE to place "Notability" high in the rankings.
A similar dynamic might explain the relative weakness of "negative" author traits: while AEs can be forgiven for privileging work by high-prestige authors, it is considered much less acceptable to disadvantage low-prestige ones. Finally, in our post-Grutter world, it is likely that most AEs almost reflexively responded "no influence" to questions regarding the effects of author race and gender.
But while SDB is a potentially serious problem, it is by no means insurmountable. A standard way of assessing the presence of SDB is to compare survey responses with actual behavior; as Michael suggested in his earlier comment, the obvious means of doing this would be to analyze data on actual submissions and acceptances. Barring that, SDB can be reduced in surveys through anonymity; survey respondents who are assured that their answers will be anonymous are typically less affected by SDB than those who can be identified.
Posted by Christopher Zorn on 14 August 2007 at 02:51 PM in Blog Forum | Permalink | Comments (7) | TrackBack (0)
This study grows out of our own experiences as Articles Editors of the University of Pennsylvania Law Review. Those of us in the Articles Office spent many of our early meetings talking about what criteria the law review ought to be using to select articles. In the wake of those discussions, we were curious about the degree to which our peer journals would make use of the same criteria and weigh them the same way. This study started as a way to answer that question.
Our first startling discovery was our fellow editors’ eagerness to talk about these issues. We sent out an e-mail to every student-edited legal journal for which we could find an e-mail address and asked one or more of the editors responsible for selecting articles at that publication to fill out our survey instrument. Though it’s impossible to know exactly how many of those e-mail addresses were valid, we believe that our e-mail reached between 375 and 400 journals. The 191 responses from 163 journals were far more than we expected and gave us a data set that was well-suited to productive analysis. We chose to focus on factor analysis because we believed that the specific criteria that were amenable to survey questions were properly understood as components of broader selection criteria. Our results bore that out.
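The intuition behind the factor-analytic approach described above can be sketched in a few lines: survey items that tap the same underlying construct correlate strongly with one another, and the leading eigenvector of the correlation matrix shows which items load together. The correlation matrix below is hypothetical (not the study's data), and power iteration stands in for a full factor-extraction routine.

```python
# A toy illustration of why specific survey items can be understood as
# components of broader selection criteria. The first two items are meant
# to tap "author prestige," the last two "topic timeliness" (hypothetical).

def mat_vec(m, v):
    return [sum(m[i][j] * v[j] for j in range(len(v))) for i in range(len(m))]

def power_iteration(m, steps=200):
    """Leading eigenvector of a symmetric matrix via power iteration."""
    v = [1.0] * len(m)
    for _ in range(steps):
        w = mat_vec(m, v)
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v

corr = [
    [1.0, 0.8, 0.0, 0.0],   # item 1: author's institution
    [0.8, 1.0, 0.0, 0.0],   # item 2: author's prior publications
    [0.0, 0.0, 1.0, 0.7],   # item 3: topic currently in the news
    [0.0, 0.0, 0.7, 1.0],   # item 4: topic widely written about
]

loadings = power_iteration(corr)
# The first factor loads only on the two "prestige" items.
print([round(x, 2) for x in loadings])  # prints [0.71, 0.71, 0.0, 0.0]
```

Real factor analysis extracts several such factors and rotates them, but the clustering idea is the same: correlated items collapse into a smaller number of underlying constructs.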
While there is obvious interest within the academic community in our simply reporting these results, we think our data also highlight some interesting aspects of the law review process that have been little discussed. The most notable is the degree to which journals act as independent agents rather than as neutral arbiters of quality scholarship. (We have consciously avoided taking on the question of whether students are capable of filling a role as neutral arbiters, but commentators have frequently argued that they are not.) The primary measure of journal prestige (and the one we used in our analysis) is frequency of citation. If the primary goal of editors is to increase the notability and prestige of their own journal – and our results can be read as indicating that it is – the best way to do that is to publish articles that will be read and cited frequently. While that goal may correlate to some degree with an abstract notion of academic excellence or importance, it also draws on a number of other factors such as author notability or prestige and the frequency with which related topics are addressed in legal academic writing. Thus, it is possible to explain editors’ tendency to gravitate towards articles by well-known authors at prestigious institutions (and our survey confirms that a strong tendency to do this is present across the board) or to articles in certain subject areas (most notably constitutional law) not as a product of their inability to recognize academic excellence but as the result of a rational desire to increase the prestige of their own publications. As Bill mentioned in his introduction, this is a job that student editors can likely do pretty well.
When undertaking this study, we were struck by the relative dearth of empirical work in this area. Our hope is that, by providing some robust data about what law review editors actually do, we can move the debate about what they should do away from the anecdotal context of particular authors’ horror stories to a broader context based on a deeper understanding of how the process actually works today.
Posted by Dylan Steinberg on 14 August 2007 at 10:01 AM in Blog Forum | Permalink | Comments (6) | TrackBack (0)
Periodically law professors convene forums to trash law reviews. The most virulent words are usually heaped upon the student editors who run these journals. For example, in a symposium published by the
Within the article selection process, more specific criticism includes the student editors’ (perceived) fixation with copious footnotes, excessive literature reviews, trendy topics, or an author’s institutional affiliation or prior publications (as opposed to the one in front of them). These last items are of particular concern for law professors because they stack the deck against an objective evaluation of their work. And placement has significant collateral effects on pay, promotion, and the lateral market.
Fortunately, the Nance-Steinberg study opens the black box of article selection. In my estimation, it reveals a reasonably fair and objective process in which original, persuasive, and polished articles have at least a fighting chance of getting a good placement. Sure, the author’s letterhead matters. But these results suggest that this bias is actually weaker at more elite journals—and that finding itself may be a bit of a mythbuster. Moreover, the study actually has some suggestions that could, at the margins, help with an article’s placement.
To get all of our readers onto the same page, after the jump I will briefly summarize the sample, methodology, and key findings of the Nance-Steinberg study.
Continue reading "Forum Post #1: Opening the Black Box of Law Review Selection" »
Posted by Bill Henderson on 14 August 2007 at 09:18 AM in Blog Forum | Permalink | Comments (12) | TrackBack (0)
Over the next couple of weeks, the law journal submission season will enter full swing. Because there are only a limited number of publication slots at elite law journals—and there is a perception, if not a reality, that a strong placement affects pay, promotion, and lateral offers—this positional competition creates severe angst among law professors that often ends in disappointment.
To help elucidate the inner workings of this process, the ELS Blog will be hosting a forum today and tomorrow (August 14-15) on a recent empirical study by Jason Nance and Dylan Steinberg, “The Law Review Article Selection Process: Results from a National Study.” The authors are 2006 graduates of the University of Pennsylvania Law School, where they served as Articles Editors on the University of Pennsylvania Law Review.
This forum will also include commentary on the Nance-Steinberg study by Benjamin Barton (Tennessee Law), Christine Hurt (Illinois Law and The Conglomerate), Ahmed Taha (Wake Forest Law), and our own Chris Zorn. I will begin the forum with a short summary of the Nance-Steinberg study.
Posted by Bill Henderson on 14 August 2007 at 08:31 AM in Blog Forum | Permalink | Comments (6)
New ICPSR data releases, here. One of the datasets that caught my eye was American Terrorism Study, 1980-2002.
Posted by Bill Henderson on 13 August 2007 at 12:20 PM in Data | Permalink | Comments (0)
Boyd, Epstein and Martin have written Untangling the Causal Effects of Sex on Judging. The Abstract:
We enter the debate over the role of sex in judging by addressing the two predominant empirical questions it raises: whether male and female judges decide cases distinctly (individual effects) and whether the presence of a female judge on a panel causes her male colleagues to behave differently (panel effects). We do not, however, rely exclusively on the predominant statistical models - variants of standard regression analysis - to address them. Because these tools alone are ill-suited to the task at hand, we deploy a more appropriate methodology - non-parametric matching - which follows from a formal framework for causal inference.
Applying matching methods to sex discrimination suits resolved in the federal circuits between 1995 and 2002 yields two clear results. First, we observe substantial individual effects: The likelihood of a judge deciding in favor of the party alleging discrimination decreases by about 10 percentage points when the judge is a male. Second, we find that men are significantly more likely to rule in favor of the rights litigant when a woman serves on the panel. Both effects are so persistent and consistent that they may come as a surprise even to those scholars who have long posited the existence of gendered judging.
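For readers unfamiliar with matching, the core idea can be sketched in a few lines. The data below are invented purely for illustration (they are not Boyd, Epstein and Martin's), and one-nearest-neighbor matching on a single covariate stands in for their full non-parametric procedure.

```python
# A minimal sketch of nearest-neighbor matching: pair each "treated" case
# (male judge) with the most similar "control" case (female judge) on an
# observed covariate, then average the outcome differences within pairs.
# All units below are hypothetical.

def matched_effect(treated, controls):
    """Average treatment effect via 1-nearest-neighbor covariate matching.

    Each unit is (covariate, outcome); outcome = 1 if the judge ruled
    for the party alleging discrimination.
    """
    diffs = []
    for x_t, y_t in treated:
        # find the control whose covariate is closest to this treated unit
        x_c, y_c = min(controls, key=lambda c: abs(c[0] - x_t))
        diffs.append(y_t - y_c)
    return sum(diffs) / len(diffs)

# covariate: e.g. a case-characteristics score; outcome: vote for plaintiff
male_judges   = [(0.2, 0), (0.5, 0), (0.8, 1)]
female_judges = [(0.1, 1), (0.6, 1), (0.9, 1)]

print(round(matched_effect(male_judges, female_judges), 2))  # prints -0.67
```

The negative estimate in this toy example runs in the same direction as the abstract's individual-effects finding: comparing closely matched cases, male judges rule for the discrimination claimant less often.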
Posted by Michael Heise on 11 August 2007 at 06:50 PM in Scholarship | Permalink | Comments (0)
Over at Conglomerate, Gordon Smith briefly discusses and links to a blog entry by Marc Andreessen that takes up the query: "Is entrepreneurship more like poetry, pure mathematics, and theoretical physics -- which exhibit a peak age in one's late 20s or early 30s -- or novel writing, history, philosophy, medicine, and general scholarship -- which exhibit a peak age in one's late 40s or early 50s?" The Andreessen post includes snippets from a 1988 paper by Dean Simonton (UC Davis, Psych). Evidently, Prof. Simonton has conducted "extensive research on age and creativity across many other fields, including science, literature, music, chess, film, politics, and military combat."
Replace the term "entrepreneurship" with "legal scholarship" and you'll see why such research might interest legal scholars (of all ages).
Posted by Michael Heise on 09 August 2007 at 08:22 AM in Current Affairs | Permalink | Comments (2) | TrackBack (0)
Johnson and Fang have written Will of the Minority: Rule of Four on the United States Supreme Court. The Abstract:
The Rule of 4 on the U.S. Supreme Court is one of the few positive powers held by a minority coalition in our federal government (other minority powers are largely negative, such as the filibuster). In this paper we provide a formal model that explores the conditions under which we would expect the Rule of 4 to be invoked by a minority of justices on the Court. We also model the conditions under which such a vote under this rule will be successful. These models lead to explicit hypotheses about each part of the Court's agenda setting process. Using data from 1953 to 1985, we then empirically test our hypotheses. The results indicate that when the pivotal certiorari justice has preferences close to the status quo and when this pivot is ideologically close to the Court median she is more likely to vote to grant certiorari. Finally, our results indicate that such a vote can be and is successful when the median justice is ideologically close to the status quo.
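The agenda-setting rule at the heart of the model is easy to sketch. The decision rule below (a justice votes to grant when the expected merits outcome lies closer to her ideal point than the status quo does) is a simplified stand-in for the authors' formal model, and the ideal points are hypothetical.

```python
# A toy illustration of the Rule of Four: certiorari is granted when at
# least four of nine justices vote to grant, so a minority coalition can
# force a case onto the docket over the median justice's objection.

GRANT_THRESHOLD = 4  # the Rule of Four

def votes_to_grant(ideal_points, status_quo, expected_outcome):
    """Each justice grants iff the expected merits outcome is closer
    to her ideal point than the status quo is (a simplified rule)."""
    return sum(abs(p - expected_outcome) < abs(p - status_quo)
               for p in ideal_points)

def cert_granted(ideal_points, status_quo, expected_outcome):
    return votes_to_grant(ideal_points, status_quo,
                          expected_outcome) >= GRANT_THRESHOLD

# Nine hypothetical ideal points on a left-right scale.
court = [-0.9, -0.7, -0.5, -0.2, 0.0, 0.2, 0.5, 0.7, 0.9]

# A left-leaning minority of exactly four grants cert even though the
# median justice (0.0) prefers the status quo.
print(cert_granted(court, status_quo=0.1, expected_outcome=-0.4))  # True
```

The authors' model, of course, adds strategic behavior and an empirical test; this sketch only captures why the rule gives a four-justice minority a positive power.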
Posted by Michael Heise on 08 August 2007 at 02:06 PM in Scholarship | Permalink | Comments (0)
In my zeal to talk about Professor Farnsworth's paper, I did not remember that it had already been posted by Jason on the blog. Rather than remove the post entirely, I would like to put in a plug for downloading the paper, see here, and to refer readers to Jason's prior post on the issue, see here. Thanks for your patience, and sorry to all those readers who had a feeling of déjà vu.
Posted by David Stras on 08 August 2007 at 09:34 AM | Permalink | Comments (0)
I just returned from my annual sojourn teaching at the ICPSR's Summer Program in Quantitative Methods. Most of the courses there -- which range in technical difficulty from Introduction to Computing and basic math to Bayesian methods, LISREL, and advanced game theory -- are four-week courses, populated mostly by graduate students from Ph.D. programs in sociology, political science, and other similar fields.
One thing I've always wondered, though, was how the program might attract more individuals who are later in their careers. A tenure-track professor in any discipline can hardly afford to give up a month of their life to learn a statistical technique they may only use for a single project. Yet, it's often exactly such individuals who could most benefit from training of this sort.
The ICPSR has begun moving in this direction in recent years, offering greater numbers of three- and five-day workshops and holding courses outside of Ann Arbor (which is, however pleasant in the summertime, a bit off the beaten path). But they (we) could obviously do more. So I ask: What would you look for in a summer (or winter/spring break) course in methods? What characteristics (pedagogical, logistical, whatever) would make you more likely to enroll in such a course?
Posted by Christopher Zorn on 07 August 2007 at 02:48 PM in Methodology | Permalink | Comments (6) | TrackBack (0)
While not geared towards legal scholars or legal scholarship per se, Andrew Gelman (Columbia) edits a blog I've found helpful on more than one occasion. Some of the "nuts-and-bolts" coding, software, and graphical display posts, as well as the multi-disciplinary approach, are quite useful.
Posted by Michael Heise on 07 August 2007 at 01:13 PM | Permalink | Comments (0) | TrackBack (0)
TO: Constitutional Law Professors
Posted by Michael Heise on 06 August 2007 at 10:17 AM in Announcements | Permalink | Comments (0)
Today, I have the privilege of being on a panel, entitled "The 'Ins' and 'Outs' of Empirical Research," at the Southeastern Association of Law Schools annual meeting. The panel was put together by Benjamin Barton (Tennessee), and I am splitting time with Stefanie Lindquist (Vanderbilt) and Ahmed Taha (Wake Forest).
As I was preparing for my talk, I realized that I was the only presenter who did not have a PhD -- and I am going last on the program. I plan on stealing the show with my topic, "Empiricism for the Non-PhD Law Professor" -- something that is likely to resonate with 90 percent of the audience. I am building on some of the themes set forth in my April 2006 post, ELS on the Cheap (i.e., without Graduate School).
The SEALS sessions tend to be large (even discounting for my participation on the program), so I am posting two handouts here in the event we run short:
Posted by Bill Henderson on 03 August 2007 at 12:05 AM in Conferences | Permalink | Comments (0)
Ward Farnsworth (Boston University Law) has posted The Use and Limits of Martin-Quinn Scores to Assess Supreme Court Justices, with Special Attention to the Problem of Ideological Drift. The Abstract:
This paper explains and examines the use of Martin-Quinn scores to assess the behavior of Supreme Court Justices. It is a reply to a recent paper by Lee Epstein, Jeffrey Segal, Andrew Martin, and Kevin Quinn which claims that the policy preferences of most Justices change during their careers; the authors of that paper suggest that this should cause Presidents to reconsider the use of nominations to try to change the direction of the Court. The authors base their findings on changes in the Justices' Martin-Quinn scores, but the meaning of those scores has not yet been fully explained in plain English. The present article attempts such an explanation. It discusses the features of judicial behavior that the Martin-Quinn method accounts for and does not account for, the limitations of the method, and some questions about the method that remain to be answered.
The limits of Martin-Quinn scores raise some doubts about the authors' conclusions. Martin-Quinn scores are generated by simply observing patterns of coalition voting among the justices without paying any attention to what the cases are about. The authors assume that all voting is ideological, so any change in the patterns of the coalitions the Justices form is taken to show changes in the Justices' ideologies. There are various reasons to question this chain of reasoning. The most important is that the authors' model treats all cases as equally important and revealing. So if a Justice starts to vote a little to the left of where he formerly did (relative to his colleagues) in any area of law, this may cause a change in how the Martin-Quinn model views his entire ideology - even if his voting has been consistent in most areas of great public interest. So when the authors find statistical changes in the behavior of Justices, those changes may not (and in some cases apparently do not) amount to shifts that would have mattered to the Presidents who appointed those Justices in the first place.
Further, the authors write as though predictability were monolithic: either the behavior of Justices can be predicted or it can't be. They don't take into account the possibility that risks of ideological drift are greater among some nominees than others, and that Presidents and others can foresee this. Nominees who have done extensive service in the political branches of a party (such as Rehnquist, Scalia, Thomas, Roberts, and Alito) are more reliable bets than nominees without such political experience (such as Stevens, Souter, and Kennedy). Nominees of the latter kind often are chosen by Presidents precisely to avoid tough confirmation fights; the risk that they will drift ideologically is perceived by everyone from the start, and is the reason why such nominees are not opposed as vigorously. Most cases where Justices have changed in ways that would have disappointed their nominators appear to involve nominees who were understood to be in the relatively risky group from the start. So while the authors' findings are interesting, they don't yet seem to call for much revision in the thinking of those who choose Supreme Court nominees or argue about them.
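Farnsworth's pooling criticism can be illustrated with toy numbers: a method that weighs all areas of law equally will register overall "drift" even when a justice moves in a single low-salience area. The area labels and scores below are hypothetical, and a plain average stands in for the actual Martin-Quinn estimation.

```python
# A toy illustration of the equal-weighting problem: a shift in one
# low-salience area moves the pooled score even though the justice's
# voting in high-salience areas is unchanged. All numbers hypothetical.

areas = {"criminal": 0.5, "speech": 0.5, "tax": 0.5}

early = sum(areas.values()) / len(areas)   # pooled score before: 0.5

areas["tax"] = -0.4                        # shift in one area only

late = sum(areas.values()) / len(areas)    # pooled score after

print(round(early - late, 2))  # prints 0.3 -- apparent "overall" drift
```

A pooled estimate thus cannot by itself distinguish a broad ideological shift from a narrow one, which is why Farnsworth argues the drift findings may overstate what would have mattered to appointing Presidents.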
Posted by Michael Heise on 02 August 2007 at 01:37 PM in Scholarship | Permalink | Comments (0)
I hope that my co-bloggers and the readers of this blog will excuse me for this non-law related post: it is my first and I hope it will be my last. As many of you no doubt know by now, the bridge over Highway 35W that, among other things, separates the East and West Banks of the University of Minnesota collapsed last night. It is a bridge that, like my colleagues, I have crossed on a regular basis literally hundreds of times. So far, no one I know was personally touched by this tragedy, but I fear that may change today with more than twenty people unaccounted for at this time. I hope and pray that my colleagues and friends as well as their families are safe, and my heart and prayers go out to the families that were directly impacted by this profound tragedy. I would also like to thank colleagues, friends, and family for the outpouring of concern for our safety.
Posted by David Stras on 02 August 2007 at 07:33 AM | Permalink | Comments (0)
Just announced yesterday, here. Some of the datasets that look useful to an ELS researcher include:
Posted by Bill Henderson on 01 August 2007 at 08:21 AM in Data | Permalink | Comments (0)