To see the evil and the good without hiding
You must help me if you can
Doctor, my eyes
Tell me what is wrong
Was I unwise to leave them open for so long
Jackson Browne
I’m having a hard time reading scientific journal articles lately. No, not because I’m getting old, or because my sight is failing, though both are true. No, I’m having trouble reading journals like JPSP and Psychological Science because I don’t believe, can’t believe the research results that I find there.
Mind you, nothing has changed in the journals. You find tightly tuned articles that portray a series of statistically significant findings testing subtle ideas using sample sizes that are barely capable of detecting whether men weigh more than women (Simons, Nelson, & Simonsohn, 2013). Or, in our new and improved publication motif, you find single, underpowered studies, with huge effects that are presented without replication (e.g., short reports). What’s more, if you bother to delve into our history and examine any given “phenomenon” that we are famous for in social and personality psychology, you will find a history littered with similar stories: publication after publication with troublingly small sample sizes and commensurate, unbelievably large effect sizes. As we now know, in order to have a statistically significant finding when you employ the typical sample sizes found in our research (n = 50), the effect size must not only be large, but also overestimated. Couple that with the fact that the average power to detect even the unbelievably large effect sizes that we do report is 50%, and you arrive at the inevitable conclusion that our current and past research simply does not hold up to scrutiny. Thus, much of the history of our field is unbelievable. Or, to be a bit less hyperbolic, some unknown proportion of our findings can’t be trusted. That is to say, we have no history, or at least no history we can trust.
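The statistical point above can be checked with a quick simulation (my own illustrative sketch, not from the post): with the typical n = 50 per group and a true effect of d = 0.4, a two-sided t-test reaches significance only about half the time, and the studies that do cross the significance threshold report effect sizes that are, on average, inflated above the true value.

```python
# Illustrative simulation of low power plus effect-size inflation.
# Assumed parameters (not from the post): true d = 0.4, n = 50 per group.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
true_d, n, alpha, runs = 0.4, 50, 0.05, 10_000

sig_effects = []  # observed standardized effects among significant studies
for _ in range(runs):
    a = rng.normal(0.0, 1.0, n)        # control group
    b = rng.normal(true_d, 1.0, n)     # treatment group, true d = 0.4
    t, p = stats.ttest_ind(b, a)
    if p < alpha:
        # pooled-SD standardized effect size (Cohen's d) for this study
        pooled = np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)
        sig_effects.append((b.mean() - a.mean()) / pooled)

power = len(sig_effects) / runs
print(f"power to detect d = {true_d}: {power:.2f}")        # roughly 0.5
print(f"mean d among significant studies: {np.mean(sig_effects):.2f}")
```

The second number lands noticeably above 0.4: conditioning on p < .05 filters out the runs where sampling noise shrank the observed effect, which is exactly the overestimation mechanism the paragraph describes.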