and then found no evidence that such powers exist? Any scientist trying to publish such a ‘So what?’ finding would struggle to get a journal to take it seriously, at the best of times. Even with the clear target of Bem’s paper on precognition, which was widely covered in serious newspapers across Europe and the USA, the academic journal with a proven recent interest in the question of precognition simply refused to publish a paper with a negative result. Yet replicating these findings was key – Bem himself said so in his paper – so keeping track of the negative replications is vital too.
People working in real labs will tell you that sometimes an experiment can fail to produce a positive result many times before the outcome you’re hoping for appears. What does that mean? Sometimes the failures will be the result of legitimate technical problems; but sometimes they will be vitally important statistical context, perhaps even calling the main finding of the research into question. Many research findings, remember, are not absolute black-and-white outcomes, but fragile statistical correlations. Under our current system, most of this contextual information about failure is just brushed under the carpet, and this has huge ramifications for the cost of replicating research, in ways that are not immediately obvious. For example, researchers failing to replicate an initial finding may not know whether they’ve failed because the original result was an overstated fluke, or because they’ve made some kind of mistake in their own methods. In fact, the cost of proving that a finding was wrong is vastly greater than the cost of making it in the first place: because of the way the statistics of detecting weak effects work, you need to run the experiment many more times, on many more people, to demonstrate convincingly that an effect is absent; and you also need to be absolutely certain that you’ve excluded all technical problems, to avoid getting egg on your face if your replication turns out to have been inadequate. These barriers to refutation may partly explain why it’s so easy to get away with publishing findings that ultimately turn out to be wrong. 30
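To make that power asymmetry concrete, here is a minimal illustrative sketch – not from the original text – using the Python statsmodels package; the small effect size (d = 0.2), the conventional 5 per cent significance threshold and the 90 per cent target power are assumptions chosen purely for illustration.

```python
# Illustrative sketch: why refuting a weak claimed effect takes far more data
# than stumbling on a fluke 'discovery'. The effect size (d = 0.2), alpha (0.05)
# and target power (0.9) are assumptions for illustration only.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Sample size per group needed to detect a small effect (d = 0.2) with 90% power,
# i.e. to make a failed replication convincing rather than merely inconclusive.
n_to_refute = analysis.solve_power(effect_size=0.2, alpha=0.05, power=0.9)

# Power of a typical small original study (20 participants per group)
# to detect that same small effect.
power_small_study = analysis.solve_power(effect_size=0.2, nobs1=20, alpha=0.05)

print(f"Per-group sample size for a convincing replication: {n_to_refute:.0f}")  # roughly 500+
print(f"Power of a 20-per-group original study: {power_small_study:.2f}")        # well under 0.5
```

By contrast, with the conventional 5 per cent threshold, roughly one in twenty small studies of an effect that doesn’t exist will look ‘significant’ by chance alone, which is part of why overstated flukes find their way into print so easily.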
Publication bias is not just a problem in the more abstract corners of psychology research. In 2012 a group of researchers reported in the journal Nature how they tried to replicate fifty-three early laboratory studies of promising targets for cancer treatments: forty-seven of the fifty-three could not be replicated. 31 This study has serious implications for the development of new drugs in medicine, because such unreplicable findings are not simply an abstract academic issue: researchers build theories on the back of them, trust that they’re valid, and investigate the same idea using other methods. If they are simply being led down the garden path, chasing up fluke errors, then huge amounts of research money and effort are being wasted, and the discovery of new medical treatments is being seriously retarded.
The authors of the study were clear on both the cause of this problem and its solution. Fluke findings, they explained, are more likely to be submitted to journals – and more likely to be published – than boring, negative ones. We should give academics more incentives to publish negative results; but we should also give them more opportunities to do so.
This means changing the behaviour of academic journals, and here we are faced with a problem. Although they are usually academics themselves, journal editors have their own interests and agendas, and have more in common with everyday journalists and newspaper editors than some of them might wish to admit, as the episode of the precognition experiment above illustrates very clearly. Whether journals like this are a sensible model for communicating research at all is a hotly debated subject in academia, but this is the current situation. Journals are the gatekeepers: they make decisions on what’s relevant and interesting for their