This study grows out of our own experiences as Articles Editors of the University of Pennsylvania Law Review. We spent many of our early Articles Office meetings talking about what criteria the law review ought to use to select articles. In the wake of those discussions, we became curious about the degree to which our peer journals used the same criteria and weighed them the same way. This study started as a way to answer that question.
Our first startling discovery was our fellow editors’ eagerness to talk about these issues. We sent out an e-mail to every student-edited legal journal for which we could find an e-mail address and asked one or more of the editors responsible for selecting articles at that publication to fill out our survey instrument. Though it’s impossible to know exactly how many of those e-mail addresses were valid, we believe that our e-mail reached between 375 and 400 journals. The 191 responses from 163 journals were far more than we expected and gave us a data set that was well-suited to productive analysis. We chose to focus on factor analysis because we believed that the specific criteria that were amenable to survey questions were properly understood as components of broader selection criteria. Our results bore that out.
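For readers curious what we mean by that, here is a minimal sketch of the idea (illustrative only, and not our actual analysis code; the survey items, factor loadings, and the use of Python's scikit-learn below are invented for the example):

import numpy as np
from sklearn.decomposition import FactorAnalysis

# Hypothetical data: 191 respondents rating six selection criteria,
# generated so that two latent factors drive the observed answers.
rng = np.random.default_rng(0)
latent = rng.normal(size=(191, 2))
loadings = np.array([
    [0.9, 0.1],   # author reputation
    [0.8, 0.2],   # institutional prestige
    [0.7, 0.3],   # likely citation count
    [0.1, 0.9],   # scholarly contribution
    [0.2, 0.8],   # quality of analysis
    [0.3, 0.7],   # clarity of writing
])
items = latent @ loadings.T + rng.normal(scale=0.3, size=(191, 6))

# Factor analysis recovers the broader criteria: items that load heavily on the
# same factor are read as facets of a single underlying selection criterion.
fa = FactorAnalysis(n_components=2, random_state=0).fit(items)
print(np.round(fa.components_, 2))

The point of the exercise is simply that the individual questions we could ask on a survey are best understood as noisy indicators of a smaller number of underlying criteria, which is what the factor analysis in the paper tries to recover.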
While there is obvious interest within the academic community in our simply reporting these results, we think our data also highlight some interesting aspects of the law review process that have been little discussed. The most notable is the degree to which journals act as independent agents rather than as neutral arbiters of quality scholarship. (We have consciously avoided taking on the question of whether students are capable of filling a role as neutral arbiters, but commentators have frequently argued that they are not.) The primary measure of journal prestige (and the one we used in our analysis) is frequency of citation. If the primary goal of editors is to increase the notability and prestige of their own journal – and our results can be read as indicating that it is – the best way to do so is to publish articles that will be read and cited frequently. While that goal may correlate to some degree with an abstract notion of academic excellence or importance, it also draws on a number of other factors, such as author notoriety or prestige and the frequency with which related topics are addressed in legal academic writing. Thus, it is possible to explain editors' tendency to gravitate toward articles by well-known authors at prestigious institutions (and our survey confirms that a strong tendency to do this is present across the board), or toward articles in certain subject areas (most notably constitutional law), not as a product of their inability to recognize academic excellence but as the result of a rational desire to increase the prestige of their own publications. As Bill mentioned in his introduction, this is a job that student editors can likely do pretty well.
When undertaking this study, we were struck by the relative dearth of empirical work in this area. Our hope is that, by providing some robust data about what law review editors actually do, we can move the debate about what they should do away from the anecdotal context of particular authors’ horror stories to a broader context based on a deeper understanding of how the process actually works today.
I also believe that there is a self-reporting bias, and, having a background in social science research, I can tell you that this problem permeates several disciplines. Short of orchestrating some type of experimental design, it is something that social scientists have learned to live with. We do the best with the data that are available to us. We hope that survey participants are honest in their responses; and we hope that by keeping the survey anonymous, it will encourage honesty. I think it does for the most part. And, as long as that self-reporting bias is sufficiently "random," it shouldn't affect the results too much.
I suppose we could address that fact in our study, but I think most readers are already generally familiar with that bias and take it into account as they evaluate our findings.
Posted by: Jason Nance | 14 August 2007 at 02:04 PM
While I don't think Articles Editors are participating in a charade, I think the potential for self-reporting bias is real (and probably needs to be addressed head-on in the paper, which it currently is not). There are two sources of a gap between reported results and "true" results that I think could have crept into our results.
The first I'll call an embarrassment bias: editors may be embarrassed to admit that they discard articles based on superficial criteria. Even though the survey was anonymous, I think there's some risk that editors will have underreported their reliance on certain criteria out of a sense that they "shouldn't" rely on them as much as they do.
The second potential source of bias is what I'll call an aspirational bias. If you ask an editor (as we did) how important the degree of scholarly contribution of an article is, very few editors are going to say it's not important. But it's an open question whether student editors are really in any position to make a judgment on that question. I know that, when I was making article decisions, I paid a lot of attention to whether an article was making a scholarly contribution. But, truth be told, my perception of scholarly contribution and that of the actual scholars dealing with similar issues (i.e., the "true" perception of scholarly contribution) are almost certain to be different.
I don't think either of those factors makes our data less valid or interesting (in fact, if we're interested in how well articles editors do their job, we first have to understand what they think that job is and what factors they understand to be important), but I do think they both need to be considered when we look at what the results mean.
Posted by: Dylan Steinberg | 14 August 2007 at 01:12 PM
Dylan and Jason,
In the comments to the introduction, Michael Heise suggested that your study looks at what editors "say" influences the selection process rather than the factors that actually make a difference. I know that Chris Zorn will further explore this important point.
But I am willing to go out on a limb and be skeptical that the alleged difference is large. I understand it is possible, but I think it is important to discuss the likelihood of such a disparity and our analytical basis for drawing this conclusion.
The selection process for Articles Editors ensures they possess a lot of naive intelligence. In addition, virtually every law review board has a meeting on the selection process that you described above; I have zero basis to believe that these folks are engaging in a charade. Articles editors and EICs are generally smart, ambitious, and confident people who don't necessarily want to be someone else's patsy.
Absent a good story of self-delusion or strategic behavior (what is really at stake here?), I think it is worth considering the literal reading of the data.
Posted by: Bill Henderson | 14 August 2007 at 12:26 PM
Dylan: Setting aside any technical issues with your (and your coauthor's) study, I want to commend both of you for even *thinking* about the issue seriously, let alone for undertaking the study. As law review editors, it's not like you had a lot of free time to burn.
Posted by: Michael Heise | 14 August 2007 at 10:33 AM