This study grows out of our own experiences as Articles Editors of the University of Pennsylvania Law Review. Our Articles Office spent many of its early meetings discussing what criteria the law review ought to use in selecting articles. In the wake of those discussions, we were curious about the degree to which our peer journals used the same criteria and weighed them the same way. This study began as a way to answer that question.
Our first startling discovery was our fellow editors’ eagerness to talk about these issues. We sent an e-mail to every student-edited legal journal for which we could find an e-mail address and asked one or more of the editors responsible for article selection at each publication to complete our survey instrument. Though it is impossible to know exactly how many of those e-mail addresses were valid, we believe our e-mail reached between 375 and 400 journals. The 191 responses from 163 journals far exceeded our expectations and gave us a data set well suited to productive analysis. We chose to focus on factor analysis because we believed that the specific criteria amenable to survey questions were properly understood as components of broader selection criteria. Our results bore that out.
While there is obvious interest within the academic community in our simply reporting these results, we think our data also highlight some little-discussed aspects of the law review process. The most notable is the degree to which journals act as independent agents rather than as neutral arbiters of quality scholarship. (We have consciously avoided taking on the question of whether students are capable of filling the role of neutral arbiters, though commentators have frequently argued that they are not.) The primary measure of journal prestige, and the one we used in our analysis, is frequency of citation. If the primary goal of editors is to increase the notability and prestige of their own journal, and our results can be read as indicating that it is, the best way to do so is to publish articles that will be read and cited frequently. While that goal may correlate to some degree with an abstract notion of academic excellence or importance, it also draws on a number of other factors, such as the author’s notoriety or prestige and the frequency with which related topics are addressed in legal academic writing.

Thus, editors’ tendency to gravitate toward articles by well-known authors at prestigious institutions (and our survey confirms that this tendency is strong across the board), or toward articles in certain subject areas (most notably constitutional law), can be explained not as a product of an inability to recognize academic excellence but as the result of a rational desire to increase the prestige of their own publications. As Bill mentioned in his introduction, this is a job that student editors can likely do pretty well.
In undertaking this study, we were struck by the relative dearth of empirical work in this area. Our hope is that, by providing some robust data about what law review editors actually do, we can move the debate about what they should do away from the anecdotal context of particular authors’ horror stories and toward a broader context grounded in a deeper understanding of how the process actually works today.