
May 03, 2006

Comments

Kathryn Stillwell Burton

I hate to raise a reason that points fingers, but perhaps the lack of interest in providing negative results is, in some cases, influenced not only by hoped-for results, but by the results sought by those providing grant money for the studies. "We need further study" is the most repeated phrase in science today, especially in the environmental sciences at the university level, funded by federal agencies.

Sara Benesh

I think this is FANTASTIC. I've always said I wanted to create such a journal. Kudos to this group for making my dream a reality!

David Lehrer

Thank you very much for your interest and comments on the Journal. The comments point to a number of important things about negative results, particularly that they are defined largely by their relationship to the research project that generated them. They are ‘negative’ or ‘spurious’ only in context. We wonder if taboos or ‘bias’ make researchers think twice before trying to publish results that contradict established truths in the field, or that are inconclusive...but which might nonetheless hold some interest for other researchers. We (the JSpurC editors) have an article in submission to European Political Science on some of the issues mentioned in this thread, which will also be posted soon as a working paper on our website www.jspurc.org. A brief excerpt below addresses some of the issues raised here about how to define negative results and how to distinguish useful negative results from less-useful ones.

How to Spot a Negative Result

Negative results are generally unused or discarded findings that may open new perspectives on the stylised facts of various social science subfields or paths to new research programs. It is difficult to distinguish useful negative results from failed low-quality research, and to assess the correctness of negative results. Applying typical quality criteria of scientific research is not straightforward: To be valuable, negative results must meet certain scientific standards, but at the same time, their defining feature is that they run counter to established criteria of good and publishable research.

The following typology differentiates negative results based on the relationship they bear to the research process that generated them. Inconclusive results are self-contradictory: they in part confirm and in part reject theoretical expectations. Non-results say nothing: they neither confirm nor reject assumptions. Confutative results appear to contradict previous findings and established theories. Ersatz results are empirical findings that bear no clear relationship to any theory.

We welcome your comments, and invite your response to our survey:

http://chnm.gmu.edu/tools/surveys/1771/

Ken Cousins

I'm not sure I exactly understand their goal, but it seems to me that portraying results as "no results" (i.e., inconclusive) versus "results" is usually a simple matter of framing the question in one way or another.

Of course, it could be that "no results" really means poor data, poor methods, and/or poor conceptualization.

frank cross

Well, I always think more published info is better than less.
But I'm a little dubious of these "file drawer" claims. Statistical methodologies are rigged to produce false negatives. An 85% probability of a relationship is statistically insignificant and perhaps non-publishable. But that's hardly a negation of that relationship.
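The point above can be illustrated with a quick simulation (a minimal sketch, not from the original post: the sample size, effect size, and the normal-approximation test are all assumptions chosen for illustration). Every simulated study below examines a perfectly real effect, yet a large share of them still come back "statistically insignificant" under the conventional p < 0.05 threshold:

```python
import math
import random
import statistics

def welch_t_p(a, b):
    """Two-sided Welch test p-value via a normal approximation (adequate for n >= 30)."""
    ma, mb = statistics.mean(a), statistics.mean(b)
    va, vb = statistics.variance(a), statistics.variance(b)  # sample variances
    se = math.sqrt(va / len(a) + vb / len(b))
    z = (ma - mb) / se
    # two-sided p from the standard normal CDF
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

random.seed(42)
n, effect, studies = 30, 0.4, 1000  # a true 0.4-SD effect exists in every study
insignificant = 0
for _ in range(studies):
    treat = [random.gauss(effect, 1) for _ in range(n)]
    ctrl = [random.gauss(0, 1) for _ in range(n)]
    if welch_t_p(treat, ctrl) >= 0.05:
        insignificant += 1  # a real relationship, filed away as a "negative result"

print(f"{insignificant / studies:.0%} of studies of a real effect were 'insignificant'")
```

With these (assumed) parameters the test misses the real effect in roughly two-thirds of studies; none of those misses negates the relationship, which is exactly the file-drawer worry.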

The comments to this entry are closed.




Creative Commons License

