Periodically law professors convene forums
to trash law reviews. The most virulent words are usually heaped upon the
student editors who run these journals. For example, in a symposium published by the
Within the article selection process, more specific criticisms include the student editors’ (perceived) fixation on copious footnotes, excessive literature reviews, trendy topics, or an author’s institutional affiliation and prior publications (as opposed to the article actually in front of them). These last items particularly concern law professors because they stack the deck against an objective evaluation of the work. And placement has significant collateral effects on pay, promotion, and the lateral market.
Fortunately, the Nance-Steinberg study opens the black box of article selection. In my estimation, it reveals a reasonably fair and objective process in which original, persuasive, and polished articles have at least a fighting chance of getting a good placement. Sure, the author’s letterhead matters. But these results suggest that this bias is less pronounced at more elite journals, a finding that is itself a bit of a mythbuster. Moreover, the study offers some suggestions that could, at the margins, help with an article’s placement.
To get all of our readers onto the same page, after the jump I will briefly summarize the sample, methodology, and key findings of the Nance-Steinberg study.
Sample & Methodology
Drawing upon the experience of several Penn faculty members and former Penn Law Review Articles Editors, Nance and Steinberg developed a questionnaire that asked respondents to rate the positive or negative influence of 56 discrete factors on a 7-point (-3 to +3) Likert-type scale. [The survey instrument can be viewed here.] A second set of questions asked respondents to ordinally rank the importance of seven attributes in the article selection process:
- Potential to influence scholarship
- Persuasiveness of arguments
- Originality of arguments
- Timeliness of topic
- Potential to change substantive law
- Notability of author
- Readability of article
The authors used a commercial online survey host and sent invitations to editors at approximately 400 student-edited law journals. The instructions asked that the survey be completed only by editors with the authority to extend offers of publication. After two follow-up invitations, this process produced 191 responses from editors at 161 journals.
The authors do not discuss the survey response rate or whether the respondents appear to be evenly distributed through the law school hierarchy. They do, however, list the participating schools by nine “Tier” categories based on Washington & Lee impact factors [the responding journals, by Tier, can be viewed here]. My own visual inspection revealed no obvious bias. There are quite a few elite and non-elite flagship law reviews and a wide array of specialty journals.
To check the internal consistency of the questionnaire items and make them more tractable, the authors used factor analysis. (For those unfamiliar with this technique, its most familiar application is the construction of I.Q. tests.) This is an inductive process that, ideally, extracts and separates “factors” from the data (typically, the fewer the better) in a way that supports the researchers’ theory. In this case, the reduction process yielded 18 factors that were theoretically coherent. The few judgment calls the authors had to make (discussed in footnote 79) are, to my mind, not very controversial. Only a handful of questionnaire items needed to be excluded.
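For readers who want to see the mechanics, here is a minimal sketch of the idea in Python. The data are simulated, and the factor structure, loadings, and noise level are my own invented assumptions for illustration, not anything drawn from the study: a handful of survey items that are secretly driven by two latent factors get separated back out.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

# Hypothetical setup: 200 simulated editors answer 4 questionnaire items
# that are actually driven by 2 latent factors. All numbers here are
# illustrative inventions, not the Nance-Steinberg data.
rng = np.random.default_rng(0)
n_respondents = 200
latent = rng.normal(size=(n_respondents, 2))      # 2 hidden factors
loadings = np.array([[1.0, 0.0],                  # item 1 loads on factor 1
                     [0.9, 0.0],                  # item 2 loads on factor 1
                     [0.0, 1.0],                  # item 3 loads on factor 2
                     [0.0, 0.8]])                 # item 4 loads on factor 2
items = latent @ loadings.T + 0.3 * rng.normal(size=(n_respondents, 4))

# Extract 2 factors; fa.components_ holds the estimated loadings,
# i.e., how strongly each item hangs together with each factor.
fa = FactorAnalysis(n_components=2).fit(items)
estimated_loadings = fa.components_
print(estimated_loadings.round(2))
```

The payoff is the loadings matrix: items that move together end up loading on the same factor, which is exactly how 56 questionnaire items can collapse into 18 coherent constructs.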
Every good empirical study should have one table that conveys its most significant findings. In the Nance-Steinberg study, that is arguably Table 3, which ordinally ranks the 18 factors from positive to negative influence in selecting an article for publication (3 = strong positive, 2 = positive, 1 = weak positive, 0 = no influence, ... -3 = strong negative influence). As the authors note, the Cronbach α statistic evaluates the internal consistency of each factor vis-à-vis the variables used to produce it. Fourteen of the 18 factors score .60 or higher, a commonly used threshold for evaluating reliability.
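For the curious, Cronbach's α is easy to compute directly from item responses. A minimal sketch (the function name and the toy ratings are my own, purely illustrative):

```python
import numpy as np

def cronbach_alpha(item_scores):
    """Cronbach's alpha for a (respondents x items) score matrix.

    alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))
    """
    x = np.asarray(item_scores, dtype=float)
    k = x.shape[1]
    item_variances = x.var(axis=0, ddof=1).sum()
    total_variance = x.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

# Illustrative: three editors rate two items on the -3..+3 scale.
# Items that move in perfect lockstep yield alpha = 1.
print(cronbach_alpha([[1, 2], [2, 3], [3, 4]]))  # -> 1.0
```

Intuitively, α approaches 1 when the items that make up a factor rise and fall together across respondents, and falls toward (or below) 0 when they do not, which is why .60 serves as a rough floor for treating a factor as reliable.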
Each of the 18 factors is functionally a rubric that is derived from two to five questionnaire items, which are summarized in Tables 4 through 21. Assuming that law review editors are agents who are seeking to maximize the influence of their volume (and ultimately their law review franchise), this ordinal ranking makes a lot of sense. The most important criterion (based on five probative questions) is selecting articles that people will want to read.
The next two factors, Author Prestige and Peer Support, arguably bear on the journal's long-term institutional interests. The Author Prestige construct is derived from an author’s prior publications (volume and placement) and institutional affiliation, while Peer Support includes reviews and unsolicited communications from faculty members (i.e., peers) in support of an article. But when these factors are broken down by relative journal prestige, their influence at more elite journals tends to be less pronounced.
In my opinion, the authors were not sufficiently careful in generating their tables. When doing their tier analysis, histograms would have been much better than sprawling tables. And frankly, I don’t know why they use ANOVA on nine Washington & Lee impact tiers when it is far from clear that the continuum of prestige or impact supports that level of granularity.
Another way of stating this: what are the professional implications of publishing in W&L Tier 9 (456-540 in impact) versus Tier 4 (131-195)? I suspect every law professor would say “not much.” What about Tier 1 (1-25), which strongly resembles the top of U.S. News rankings, versus Tier 2 (26-65)? This second difference is arguably career making. A visual inspection of Table 23 (excerpted below), which breaks down the respondents' ordering of seven publication factors based on relative influence, suggests large and significant differences between W&L Tier 1 versus Tier 2.
Comparing Tier 1 to Tier 2, the data suggest:
- Tier 1 and 2 editors place similar emphasis on “potential to influence scholarship” and “readability of article”;
- Tier 1 editors place greater emphasis on (a) persuasiveness of arguments, (b) originality of arguments, and (c) likelihood to change the law;
- Tier 1 editors are less likely than their Tier 2 counterparts to be influenced by (d) timeliness of topic (which presumably includes trendiness), and (e) notability of the author.
In my opinion, the authors would have been better served by a simple independent-samples t-test comparing elite and non-elite law journals. Such a comparison would likely show strong and unambiguous differences between the two groups.
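Such a comparison is straightforward to run. A sketch using SciPy's Welch t-test, where the numbers are invented stand-ins for per-editor importance ranks of, say, "notability of author" (they are not drawn from the Nance-Steinberg data):

```python
from scipy import stats

# Hypothetical per-editor importance ranks for "notability of author"
# (lower rank = more important). These values are invented for
# illustration, NOT drawn from the study.
tier1_ranks = [5.0, 6.0, 5.5, 6.5, 5.0, 6.0]   # elite editors: low weight
tier2_ranks = [2.0, 3.0, 2.5, 2.0, 3.5, 2.5]   # non-elite: higher weight

# Welch's t-test: two groups, no equal-variance assumption.
t_stat, p_value = stats.ttest_ind(tier1_ranks, tier2_ranks, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```

A single two-group test like this answers the question the tier analysis obscures: do elite and non-elite editors, taken as two populations, weigh a factor differently? Nine-way ANOVA buries that contrast in granularity the prestige continuum probably cannot support.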
One way to read these results is that editors at Tier 1 journals are at the top of the food chain (i.e., at the end of the expedite express) and thus have the luxury of thinking for themselves. Moreover, as the most accomplished students at the most selective schools, I presume that they also have the ability to evaluate original and persuasive arguments and the potential that a submission could change the law (if not now, then in two years, when they join a faculty themselves?). I doubt that Nance and Steinberg would quibble with that conclusion.