What do you do with a null-results dataset? A colleague wanted to study trends in tortious interference cases in the U.S. over the last 20 years. He had some firm priors but was going to conduct the study with an open mind (hence the title of this post). We designed a data collection protocol, pulled a sample of 100 cases (an adequate number given his priors), and set an RA to work. Nothing. We redesigned the protocol, pulled another sample, set another RA to work, and again nothing. After eight months, there are no meaningful temporal or cross-sectional variations in the data. These are all civil jury trials, so there may be a story about the continuity of juries. But he's not that interested in collecting more data, or even in writing up what he's learned so far. No one wants to read about what he didn't find.
There is a publication bias against negative results. Michael Heise blogged about it here a couple of years ago. Attempts have been made to unbias the bias, but success is limited. Most of us are probably not in the mood to finish a 50+ page paper with null results, so the idea of nurturing an ELS journal or SSRN site for that purpose is not high on my list. The data, on the other hand, are finished: collected, cleaned, documented. There is nothing wrong with them, yet they are orphaned on a hard drive. Is there a data orphanage?
I know of no "data orphanages," but it's an issue that many disciplines have wrestled with. Negative pharmaceutical trial data are probably the most high-profile example.
When I was a graduate student in science, my field was plagued by a theory/method that no one could prove worked. Its developer insisted it worked wonderfully, but no one could repeat his successes. I shudder to think how many student theses were broken on that rock.
Finally, I saw a series of published articles, taking up about 20 pages in a major research journal. All by the same student and his advisor, one after the other: "X's Method Fails to Accomplish Y in Organism A," "X's Method Fails to Accomplish Y in Organism B," "X's Method Fails to Accomplish Y in Organism C," ...
What a hero, but how sad for him. A whole thesis of negative data.
I say "publish." Publish it as a research note, with the full data and analyses on the web.
Posted by: Patent_Medicine | 15 December 2008 at 02:11 PM
How prevalent is this bias against "negative" results? It certainly doesn't exist in the education literature, where you'll find all sorts of scholars and journals who will leap at the chance to publish a study finding that vouchers or charter schools have "no effect" on test scores, etc.
Posted by: Stuart Buck | 07 December 2008 at 11:13 PM
If he can't publish it anywhere else, why doesn't he put it on the internet? Anyone who could incorporate it into their own research or theory could discover his results through a search, and properly credit him. Better than sitting undiscoverable on a hard drive.
Posted by: Hopefully Anonymous | 04 December 2008 at 04:20 AM
A common problem in many disciplines. Link requires subscription:
http://search.ebscohost.com/login.aspx?direct=true&db=aph&AN=8802312&site=ehost-live
And for football fans, in tracking down the Science News article linked above that I remembered reading, I came across this re: football's OT bias. Important stuff.
http://www.sciencenews.org/view/generic/id/5582/title/Math_Trek__Footballs_Overtime_Bias
Posted by: Julie Jones | 24 November 2008 at 08:09 AM
I agree with the previous comment that a study like that should be published, but perhaps not as a fifty-page article. Why not a 5-page research note?
The other problem with null results is that they hint at possible methodological problems. Perhaps a meaningful relationship does exist between the variables of interest, but some problem -- perhaps in the data -- is masking it.
Posted by: Sean Overland | 22 November 2008 at 11:23 AM
If the research question is well-motivated and a reasonable power calculation has been done (to ensure the study was large enough to detect an effect if one existed), there is no good reason for journals not to publish negative results. The key question is whether the question is of sufficient interest to warrant publication, not whether the results are positive or negative. If the tortious interference scholars had published priors about the existence of a trend, and tortious interference is an important topic, and the article is otherwise sound, it should be published. For whatever interest it may hold, JELS does not reject articles simply because of insignificant results.
Posted by: Theodore Eisenberg | 21 November 2008 at 09:21 PM
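[Ed.: The power calculation the last comment mentions can be sketched in a few lines of standard-library Python. This is a rough illustration only, assuming the researcher's trend can be cast as a Pearson correlation and using the Fisher z approximation; the function names and the hypothesized effect sizes below are invented for the example, not taken from the study itself.]

```python
import math

def norm_cdf(x):
    """Standard normal CDF, computed from the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def correlation_power(r, n, z_crit=1.96):
    """Approximate power of a two-sided test (alpha = 0.05 when
    z_crit = 1.96) to detect a Pearson correlation of size r with
    n observations, via the Fisher z transformation."""
    fisher_z = math.atanh(r)        # hypothesized effect on the z scale
    se = 1.0 / math.sqrt(n - 3)     # standard error of Fisher z
    return norm_cdf(abs(fisher_z) / se - z_crit)

# With 100 cases, a moderate trend (r = 0.3) is detectable with power
# of roughly 0.86, but a weak one (r = 0.1) with only about 0.17 --
# so a null result on a weak hypothesized effect says little either way.
print(round(correlation_power(0.3, 100), 2))  # ~0.86
print(round(correlation_power(0.1, 100), 2))  # ~0.17
```

The point of the sketch is the one Eisenberg makes: whether 100 cases was "an adequate number" depends entirely on how large an effect the priors led the researcher to expect.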