The Situationist

Double-Checking Our Science

Posted by The Situationist Staff on April 19, 2012

From the Chronicle of Higher Education:

If you’re a psychologist, the news has to make you a little nervous—particularly if you’re a psychologist who published an article in 2008 in any of these three journals: Psychological Science, the Journal of Personality and Social Psychology, or the Journal of Experimental Psychology: Learning, Memory, and Cognition.

Because, if you did, someone is going to check your work. A group of researchers has already begun what they’ve dubbed the Reproducibility Project, which aims to replicate every study from those three journals for that one year. The project is part of the Open Science Framework, a group interested in scientific values, and its stated mission is to “estimate the reproducibility of a sample of studies from the scientific literature.” This is a more polite way of saying “We want to see how much of what gets published turns out to be bunk.”

For decades, literally, there has been talk about whether what makes it into the pages of psychology journals—or the journals of other disciplines, for that matter—is actually, you know, true. Researchers anxious for novel, significant, career-making findings have an incentive to publish their successes while neglecting to mention their failures. It’s what the psychologist Robert Rosenthal named “the file drawer effect.” So if an experiment is run ten times but pans out only once, you trumpet the exception rather than the rule. Or perhaps a researcher is unconsciously biasing a study somehow. Or maybe he or she is flat-out faking results, which is not unheard of. Diederik Stapel, we’re looking at you.
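To make the file-drawer arithmetic concrete, here is a minimal, purely illustrative simulation (ours, not the Chronicle’s): suppose each “run” is an ordinary two-group t-test with no true effect, and a researcher counts a study as a success if any of ten runs clears p < .05. The sample size, test, and threshold are assumptions chosen for illustration, not details from the article.

```python
import numpy as np
from scipy import stats

# Illustrative only: simulate the "file drawer effect" under a true
# null hypothesis. Each researcher runs the same two-group experiment
# ten times and reports the study as a success if ANY run reaches
# p < .05. All parameters below are assumed for this sketch.
rng = np.random.default_rng(0)

n_simulations = 10_000   # simulated researchers
runs_per_study = 10      # attempts per researcher
n_per_group = 30         # participants per condition

false_positives = 0
for _ in range(n_simulations):
    p_values = []
    for _ in range(runs_per_study):
        a = rng.standard_normal(n_per_group)  # control: no real effect
        b = rng.standard_normal(n_per_group)  # treatment: no real effect
        p_values.append(stats.ttest_ind(a, b).pvalue)
    if min(p_values) < 0.05:  # report only the best-looking run
        false_positives += 1

print(f"Apparent success rate under the null: {false_positives / n_simulations:.1%}")
```

A single honest run comes up “significant” about 5 percent of the time; reporting only the best of ten pushes that to roughly 1 − 0.95¹⁰, or about 40 percent, which is why trumpeting the exception rather than the rule misleads.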

So why not check? Well, for a lot of reasons. It’s time-consuming and doesn’t do much for your career to replicate other researchers’ findings. Journal editors aren’t exactly jazzed about publishing replications. And potentially undermining someone else’s research is not a good way to make friends.

[Situationist Contributor] Brian Nosek knows all that, and he’s doing it anyway. Nosek, a professor of psychology at the University of Virginia, is one of the coordinators of the project. He’s careful not to make it sound as if he’s attacking his own field. “The project does not aim to single out anybody,” he says. He notes that being unable to replicate a finding is not the same as discovering that the finding is false. It’s not always possible to match research methods precisely, and researchers performing replications can make mistakes, too.

But still. If it turns out that a sizable percentage (a quarter? half?) of the results published in these three top psychology journals can’t be replicated, it’s not going to reflect well on the field or on the researchers whose papers didn’t pass the test. In the long run, coming to grips with the scope of the problem is almost certainly beneficial for everyone. In the short run, it might get ugly.

Nosek told Science that a senior colleague warned him not to take this on “because psychology is under threat and this could make us look bad.” In a Google discussion group, one of the researchers involved in the project wrote that it was important to stay “on message” and portray the effort to the news media as “protecting our science, not tearing it down.”

The researchers point out, fairly, that it’s not just social psychology that has to deal with this issue. Recently, a scientist named C. Glenn Begley attempted to replicate 53 cancer studies he deemed landmark publications. He could only replicate six. Six! Last December I interviewed Christopher Chabris about his paper titled “Most Reported Genetic Associations with General Intelligence Are Probably False Positives.” Most!

A related new endeavor called Psych File Drawer allows psychologists to upload their attempts to replicate studies. So far, nine studies have been uploaded, and only three of them were successes.

Both Psych File Drawer and the Reproducibility Project were started in part because it’s hard to get a replication published even when a study cries out for one. For instance, Daryl J. Bem’s 2011 study that seemed to prove that extrasensory perception is real — that subjects could, in a limited sense, predict the future — got no shortage of attention and appeared to turn everything we know about the world upside down.

Yet when Stuart Ritchie, a doctoral student in psychology at the University of Edinburgh, and two colleagues failed to replicate his findings, they had a heck of a time getting the results into print (they finally did, just recently, after months of trying). It may not be a coincidence that the journal that published Bem’s findings, the Journal of Personality and Social Psychology, is one of the three selected for scrutiny.

Nosek acknowledges that Bem’s study and Stapel’s fraud were among the motivators for the project. “Right now we have an opportunity to do something about it rather than writing another article about what we can do about it,” he says. He hopes that the replications for all three journals will be completed by the fall and the results published online next spring.

Like most researchers, Nosek is interested in advancing his own research agenda rather than simply running someone else’s experiments. That said, he thinks it’s better for researchers to know whether they’re discovering “true stuff” or just fooling themselves, their colleagues, and the general public. “Ultimately it’s a waste of everyone’s time if I can’t replicate the effects,” he says. “Otherwise, what are we working on?”

More.

Image from Flickr.
