You may know that there is a reproducibility crisis in psychology. Or is there? Wired and Slate both have pieces up reviewing the current debates on whether there is, or isn’t, a crisis. Perhaps the media is biased, but the behavior and explanations of those who assert there isn’t a crisis seem informative to me. From the Wired piece:
Emotions are running high. Two groups of very smart people are looking at the exact same data and coming to wildly different conclusions. Science hates that. This is how beleaguered Gilbert feels: When I asked if he thought his defensiveness might have colored his interpretation of this data, he hung up on me.
And now, from Slate:
In his lab, Baumeister told me, the letter e task would have been handled differently. First, he’d train his subjects to pick out all the words containing e, until that became an ingrained habit. Only then would he add the second rule, about ignoring words with e’s and nearby vowels. That version of the task requires much more self-control, he says.
Second, he’d have his subjects do the task with pen and paper, instead of on a computer. It might take more self-control, he suggested, to withhold a gross movement of the arm than to stifle a tap of the finger on a keyboard.
If the replication showed us anything, Baumeister says, it’s that the field has gotten hung up on computer-based investigations. “In the olden days there was a craft to running an experiment. You worked with people, and got them into the right psychological state and then measured the consequences. There’s a wish now to have everything be automated so it can be done quickly and easily online.” These days, he continues, there’s less and less actual behavior in the science of behavior. “It’s just sitting at a computer and doing readings.”
Of course, even those who accept and promote the idea of a replication crisis often feel their own unreplicated work is an exception to the rule. Basically, what this tells us is that psychologists are subject to the very cognitive biases they themselves study. Some researchers, though, seem to be facing the problems head-on. From Reckoning with the past:
To be fair, this is not social psychology’s problem alone. Many other allied areas in psychology might be similarly fraught and I look forward to these other areas scrutinizing their own work—areas like developmental, clinical, industrial/organizational, consumer behavior, organizational behavior, and so on, need an RPP project or Many Labs of their own. Other areas of science face similar problems too.
During my dark moments, I feel like social psychology needs a redo, a fresh start. Where to begin, though? What am I mostly certain about and where can my skepticism end? I feel like there are legitimate things we have learned, but how do we separate wheat from chaff? Do we need to go back and meticulously replicate everything in the past? Or do we use those bias tests Joe Hilgard is so sick and tired of to point us in the right direction? What should I stop teaching to my undergraduates? I don’t have answers to any of these questions.
This blogpost is not going to end on a sunny note. Our problems are real and they run deep. Okay, I do have some hope: I legitimately think our problems are solvable. I think the calls for more statistical power, greater transparency surrounding null results, and more confirmatory studies can save us. What is not helping is the lack of acknowledgement about the severity of our problems. What is not helping is a reluctance to dig into our past and ask what needs revisiting.
From what I have heard, psychological experiments are relatively cheap, so replication is feasible, even if it’s not glamorous. In contrast, replication in biomedicine is far more expensive. There might therefore be bigger problems lurking out there….