
21 August 2006


Comments

Richard Lempert

I have been away, so apologies for such a delayed comment (if anybody is still reading). I found, as I always do, all of Chris's comments thoughtful, and like him I am pulling for data to remain plural. On the peer review issue, my term as editor of LSR gave me a rather different perspective on the matter. Here is what I took away from the experience:

(1) Peer reviewers almost always disagreed on their recommendations, though they often agreed in their analysis. People are simply calibrated differently when it comes to transforming observations into judgments. One result was that one could not, even if one wanted to, delegate the decision to accept or reject a manuscript to the peer reviewers.

(2) I read all papers, and the publication decisions were all mine regardless of peer reviewer recommendations. However, when reviews were uniformly negative, I usually felt I did not have to read as closely as I otherwise might have, and only in very rare cases did I find that a Revise and Resubmit rather than a rejection was appropriate.

(3) I always felt I had to answer to the peer reviewers in my decisions. This did not mean I had to follow their judgments, but I had to be able, at least in my own mind, to explain to a negative reviewer why his/her strong objection did not make the work unpublishable if I decided to request an R&R or to publish, and to explain to a positive reviewer why, despite his/her view, I thought a manuscript was unpublishable. Thus peer review imposed a valuable discipline on my editorial judgment.

(4) I came to the view that peer review had its greatest value not in guiding the acceptance/rejection decision but in improving the quality of what was published. Peer reviewers' comments were invaluable in directing, and allowing me to direct, the attention of authors of potentially publishable pieces toward revisions that made their manuscripts far better contributions. I can think of several instances where peer reviewers almost took authors by the hand to reveal the form beneath the rough-hewn marble that had been submitted. Indeed, almost everything I published was, in a certain sense, a collaboration among author, peer reviewer, editor, and production editor. The author was, of course, almost always the most important contributor by far, but the polish the other participants added turned many a merely good piece into a very good one.
On the other issue, social scientists publishing in law reviews, I am a bit of an idealist. I think universities should pay little attention to where pieces are published in determining tenure and promotions. They can read the pieces themselves, and they can ask distinguished scholars, more specialized in the area than a journal editor and more numerous than a journal's peer reviewers, for their judgments. An article in a law review that these readers regard as excellent should count for more than an article in ASA, APSR, JPSP, etc. that does not seem as fine.

Of course, as I saw when I chaired the UM Sociology Department, there is a high correlation between an article's appearance in a leading professional journal (ASA & AJS in sociology) and its general quality. This is to be expected and as it should be, for a field's most distinguished and selective journals tend to publish only high-quality work. By the same token, articles in law reviews not only are passed on by students who usually have no deep understanding of the article's topic and methods, but they also have not been polished by peer review. But this dichotomy will not always hold. There is some excellent social science published in law reviews and some poor work published even in a field's top journals. Moreover, law reviews publish articles too short for books but too long for a field's top journals.

What we have is a problem of heuristics, or operationalization. The link between place of publication and quality is easy for people to make, while reading and thinking about a piece requires effort, so people, and even departments and university tenure review committees, substitute the place a piece is published for a determination of its quality, though publication place and quality are not the same thing. I think we should fight this and continue to convey the message that we must read the work we are evaluating, make serious judgments, and try to put the place of publication out of mind. (There are two subsidiary issues I won't go into. One concerns prestige: regardless of quality, more prestige attaches to publication in some journals than in others, and a university has reasons to value faculty prestige. The other concerns objectivity: some universities may be less trustful of outside judgments by those who know someone's tenure is on the line, but with enough tenure reviewers I don't think this concern is substantial.)

Rick

Sean Wilson

I wonder whether one could compare reviews of a manuscript with opinions given by judges in cases. It seems to me a similar kind of cognitive activity. The reviewer has a sense of what constitutes a proper demonstration of a claim (the methods, or "the law") and also has a viewpoint about the manuscript's conclusion (a favorite public policy). The former involves the formation of an opinion about orthodoxy (quantitative, qualitative, historical, etc.), which is analogous to a judge's protocol options: restraint, formalism, originalism brand A or B, etc. The latter involves the formation of a belief about the propriety of preserving or disrupting a disciplinary faith (liberal or conservative).

It would be interesting, for example, to do a study of rejected papers that were critical of the attitudinal model versus supportive of it, or of papers critical of all policy-maximizing approaches versus supportive ones, to see whether attitudinalism is itself a factor in reviewer opinions.

Tracy Lightcap

This is a good topic. There are some real problems with present review processes. I'll use my own case as an example. I recently submitted a paper to a prestigious journal that shall remain nameless (they know who they are). One review was thoughtful and gave me an opening, albeit a small one. The other review was a page and a sentence. The short review: I have some minor problems with a measure or two, but I don't think the paper is interesting enough. Did the reviewer say why? No. Did the reviewer offer any - I mean any - comments that might lead to improvements? No. Did the editor, who actually liked the paper a good deal, have any choice but to reject the manuscript? No. On the basis of a sample with N = 1.

This kind of procedure has real problems. So does the Nature idea. Sooooo ... let's get modern, shall we? What if the reviews were posted on a wiki? Any review submitted could be edited by a subsequent reviewer, or another review could be posted. The kind of review I got - which was, I might say, worse than useless - would never be submitted in the first place.

Yeah, that'll happen.

Sean Wilson

It certainly would discourage elites from preserving their hegemony and would help combat the notion that paradigm-shifting work must come from the top down before a regimented discourse can be open to scholarly criticism. It would seem, therefore, to advance knowledge more than inhibit it.

William Henderson

Chris,

I like the idea of open peer review, especially once a reviewer has cleared the tenure hurdle. The transparency operates in two directions: 1) reviewers are more accountable for errors that remain after a cursory read, which promotes more careful reviews; and 2) comments that are ill-tempered or self-interested are less likely to be made in the first instance.

I realize there is an opposing argument that reviewers will hold their tongues and/or pander because of lateral mobility, symposia invitations, etc. But I am not convinced these pitfalls outweigh the benefits of an open system. Why not do your best work, engage in honest, direct dialogue with colleagues, and let the chips fall where they may? bh.

Sean Wilson

Nice post, Chris!

