My Ph.D. thesis director once advised me that "if it's worth doing, it's worth doing badly." His point was that we should not make the perfect the enemy of the good, particularly when conducting truly original research. So it's important to preface any critique by acknowledging that, whatever flaws their study may have, Nance and Steinberg have done a great service to the legal academy by shedding some empirical light on the question of law review publication.

As is the case with any empirical paper, methodological criticisms of their work are among the easiest to offer. One could question their decision to use conventional factor analysis with ordinally measured response variables (particularly when better-suited techniques, such as factor analysis of polychoric correlations, exist), or their extensive use of tables when figures (such as the one above) typically do a much better job of conveying complex statistical results.
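To make the ordinal-data point concrete, here is a minimal simulation, in Python with NumPy only, of what goes wrong: Pearson correlations, the raw material of conventional factor analysis, are attenuated when continuous latent traits are coarsened into Likert categories. The correlation, thresholds, and sample size below are invented for illustration, not drawn from Nance and Steinberg's data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two continuous latent traits correlated at rho = 0.6 (invented value)
rho, n = 0.6, 50_000
latent = rng.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]], size=n)

# Coarsen each trait into a 5-point Likert item, as a survey instrument would
cuts = [-1.5, -0.5, 0.5, 1.5]           # thresholds between categories 1..5
likert = np.digitize(latent, cuts) + 1  # responses in {1, ..., 5}

# The Pearson correlation of the ordinal responses understates the latent
# rho, and conventional factor analysis decomposes exactly this matrix.
# Polychoric correlations model the thresholds and recover rho instead.
r_obs = np.corrcoef(likert[:, 0], likert[:, 1])[0, 1]
print(f"latent rho: {rho:.2f}   observed Pearson r: {r_obs:.2f}")  # ~0.55
```

The attenuation here is modest because a 5-point scale is fairly fine-grained; with coarser items or skewed thresholds the understatement grows, and the estimated factor loadings shrink with it.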
My biggest concern, however (and one anticipated by Michael's comment to Bill's first Forum post), is the effect of social desirability bias (hereinafter SDB) on the study's findings. SDB refers to survey respondents' tendency to answer questions in ways they believe are socially (or, here, professionally) desirable or expected of them; it is a well-known and commonly observed phenomenon in survey research (for a recent paper with a list of current references, see Streb et al.). I'd contend that the presence of such bias can explain both the study's intuitive findings and some of its more unexpected ones.
Articles editors (AEs) undoubtedly are interested in growing the prestige of their journal and in minimizing their editorial workload. They are also, however, socialized into the law review culture; they understand that law reviews, as forums for scholarly work, should publish the "best" (most original, creative, important, well-reasoned, persuasive) scholarship they can. As AEs, their professional role is to select such work for publication, and to do so in a way that doesn't systematically disfavor authors or work on the basis of other (putatively irrelevant) criteria. SDB suggests that AEs' survey responses will likely reflect their desire to be seen as conforming to that role.
Consider Nance and Steinberg's rather odd finding that, while "Author Prestige" is among the most influential of their constructs, "Notability of the Author" ranks dead last among the publication criteria. The phrasing of the authors' 56 "influence" questions is such that none is dispositive; each can influence the publication process without making or breaking a given paper. In contrast, asking AEs to rank-order the seven publication criteria forces a zero-sum choice: for one criterion to be ranked higher, another must be ranked lower. That, combined with the relatively small number of items to be ranked and the presence of SDB effects, makes it difficult for an AE to place "Notability" high in the rankings.
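A toy simulation in Python illustrates the mechanics. Everything here is invented for illustration: the criteria labels, the "true" influence values, and especially the assumption that SDB discounts an item more heavily in an explicit zero-sum ranking than in independent influence items.

```python
import numpy as np

rng = np.random.default_rng(1)
criteria = ["Topic", "Analysis", "Novelty", "Timeliness",
            "Writing", "Length", "Notability"]

# Invented "true" influence of each criterion on AE decisions (0-1 scale);
# Notability genuinely matters, echoing the strong "Author Prestige" construct
true_inf = np.array([0.80, 0.90, 0.70, 0.60, 0.65, 0.40, 0.75])
n_resp = 500

# Independent influence items: nothing is zero-sum, so a mild SDB shading
# of Notability still leaves it looking quite influential
mild = np.array([0, 0, 0, 0, 0, 0, 0.05])
ratings = true_inf - mild + rng.normal(0, 0.10, size=(n_resp, 7))

# Forced ranking of the same seven criteria: SDB bites harder (a larger
# discount, by assumption), because ranking Notability high means
# explicitly demoting a merit criterion
strong = np.array([0, 0, 0, 0, 0, 0, 0.50])
scores = true_inf - strong + rng.normal(0, 0.05, size=(n_resp, 7))
ranks = (-scores).argsort(axis=1).argsort(axis=1) + 1   # 1 = ranked first

for name, r, k in zip(criteria, ratings.mean(axis=0), ranks.mean(axis=0)):
    print(f"{name:>10}: mean influence {r:.2f}   mean rank {k:.1f}")
```

Under these assumptions, Notability's mean influence rating stays near the top of the pack while its mean rank sits at the bottom: exactly the pattern of a highly influential construct paired with a dead-last ranking.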
A similar dynamic might explain the relative weakness of "negative" author traits: while AEs can be forgiven for privileging work by high-prestige authors, it is considered much less acceptable to disadvantage low-prestige ones. Finally, in our post-Grutter world, it is likely that most AEs almost reflexively responded "no influence" to questions regarding the effects of author race and gender.
But while SDB is a potentially serious problem, it is by no means insurmountable. A standard way of assessing the presence of SDB is to compare survey responses with actual behavior; as Michael suggested in his earlier comment, the obvious means of doing so here would be to analyze data on actual submissions and acceptances (a sketch of such an analysis appears below). Barring that, SDB can be mitigated through anonymity: respondents who are assured that their answers cannot be linked back to them are typically less affected than those who can be identified.
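As a sketch of what that behavioral check might look like, the following Python fragment fits a logistic regression of acceptance on two prestige proxies; the data frame, column names, and simulated acceptance process are all hypothetical stand-ins for real submissions data. If AEs' stated indifference to author notability were accurate, the prestige coefficients should come out near zero.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical submissions log, one row per manuscript; column names are
# placeholders for whatever fields a real journal's records would contain
rng = np.random.default_rng(2)
n = 2_000
df = pd.DataFrame({
    "author_rank": rng.integers(1, 200, size=n),  # rank of author's school
    "prior_cites": rng.poisson(30, size=n),       # author's prior citations
})

# Simulated acceptances that *do* depend on prestige, purely for illustration
logit = -2.0 - 0.01 * df["author_rank"] + 0.02 * df["prior_cites"]
df["accepted"] = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

# A large gap between the stated ("no influence") and estimated effect of
# the prestige proxies would be the behavioral signature of SDB
X = sm.add_constant(df[["author_rank", "prior_cites"]])
fit = sm.Logit(df["accepted"], X).fit(disp=0)
print(fit.params)
```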