Author Archives: Sanjay Srivastava

Norms for the Big Five Inventory and other personality measures

Every once in a while I get emails asking me about norms for the Big Five Inventory. I got one the other day, and I figured that if more than one person has asked about it, it’s probably worth a blog post.

There’s a way of thinking about norms — which I suspect is the most common way of thinking about norms — that treats them as some sort of absolute interpretive framework. The idea is that you could tell somebody, hey, if you got this score on the Agreeableness scale, it means you have this amount of agreeableness.

But I generally think that’s not the right way of thinking about it. Lew Goldberg put it this way:

One should be very wary of using canned “norms” because it isn’t obvious that one could ever find a population of which one’s present sample is a representative subset. Most “norms” are misleading, and therefore they should not be used.

That is because “norms” are always calculated in reference to some particular sample, drawn from some particular population (which BTW is pretty much never “the population of all human beings”). Norms are most emphatically NOT an absolute interpretation — they are unavoidably comparative.
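To make the comparative nature concrete, here is a minimal sketch of what a "norm" actually does computationally: it locates a raw score within one particular reference distribution. The reference means and SDs below are hypothetical, and the percentile conversion assumes scores in the reference population are roughly normal.

```python
from statistics import NormalDist

def normed_score(raw, ref_mean, ref_sd):
    """Locate a raw scale score within a particular reference sample.

    Returns (z, percentile). The percentile assumes the reference
    population is roughly normal -- an assumption, not a given.
    """
    z = (raw - ref_mean) / ref_sd
    return z, 100 * NormalDist().cdf(z)

# Hypothetical reference values: the same Agreeableness score of 3.8
# reads as "above average" or "below average" depending on the sample.
print(normed_score(3.8, ref_mean=3.6, ref_sd=0.6))  # ~(0.33, 63rd percentile)
print(normed_score(3.8, ref_mean=4.1, ref_sd=0.5))  # ~(-0.60, 27th percentile)
```

The function is identical in both calls; only the reference sample changes, and with it the interpretation.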

So the problem arises because the usual way people talk about norms tends to bury that fact.

What counts as a successful or failed replication?

Let’s say that some theory states that people in psychological state A1 will engage in behavior B more than people in psychological state A2. Suppose that, a priori, the theory allows us to make this directional prediction, but not a prediction about the size of the effect.

A researcher designs an experiment — call this Study 1 — in which she manipulates A1 versus A2 and then measures B. Consistent with the theory, the result of Study 1 shows more of behavior B in condition A1 than A2. The effect size is d=0.8 (a large effect). A null hypothesis significance test shows that the effect is significantly different from zero, p<.05.
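For concreteness, here is a minimal sketch of how Study 1's two numbers are typically computed from raw data. The dataset is simulated, and the sample size of 25 per condition is a hypothetical stand-in; nothing in the scenario specifies it.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Hypothetical Study 1 data: behavior B measured under each condition.
b_a1 = rng.normal(loc=0.8, scale=1.0, size=25)  # condition A1
b_a2 = rng.normal(loc=0.0, scale=1.0, size=25)  # condition A2

# Null hypothesis significance test: is the difference reliably nonzero?
t, p = stats.ttest_ind(b_a1, b_a2)

# Cohen's d: mean difference scaled by the pooled standard deviation.
pooled_sd = np.sqrt((b_a1.var(ddof=1) + b_a2.var(ddof=1)) / 2)
d = (b_a1.mean() - b_a2.mean()) / pooled_sd

print(f"d = {d:.2f}, p = {p:.4f}")
```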

Now Researcher #2 comes along and conducts Study 2. The procedures of Study 2 copy Study 1 as closely as possible — the same manipulation of A, the same measure of B, etc. The result of Study 2 shows more of behavior B in condition A1 than in A2 — same direction as Study 1. In Study 2, the effect size is d=0.3 (a smallish effect).
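One way to make "same direction, smaller effect" concrete is to put a confidence interval around each d. Here is a minimal sketch using the standard large-sample approximation for the standard error of d; the per-condition sample sizes are hypothetical, since neither study's n is given above.

```python
import math

def d_ci(d, n1, n2, z=1.96):
    """Approximate 95% CI for Cohen's d (large-sample standard error)."""
    se = math.sqrt((n1 + n2) / (n1 * n2) + d ** 2 / (2 * (n1 + n2)))
    return d - z * se, d + z * se

# Hypothetical n = 25 per condition in both studies.
print(d_ci(0.8, 25, 25))  # Study 1: roughly ( 0.22, 1.38)
print(d_ci(0.3, 25, 25))  # Study 2: roughly (-0.26, 0.86)
```

At sample sizes like these the two intervals overlap heavily, which is one reason "did it replicate?" has no obvious answer from direction and p-values alone.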

Paul Meehl on replication and significance testing

Still very relevant today.

A scientific study amounts essentially to a “recipe,” telling how to prepare the same kind of cake the recipe writer did. If other competent cooks can’t bake the same kind of cake following the recipe, then there is something wrong with the recipe as described by the first cook. If they can, then, the recipe is all right, and has probative value for the theory. It is hard to avoid the thrust of the claim: If I describe my study so that you can replicate my results, and enough of you do so, it doesn’t matter whether any of us did a significance test; whereas if I describe my study in such a way that the rest of you cannot duplicate my results, others will not believe me, or use my findings to corroborate or refute a theory, even if I did reach statistical significance. So if my work is replicable, the significance test is unnecessary; if my work is not replicable, the significance test is useless. I have never heard a satisfactory reply to that powerful argument.

Meehl, P. E. (1990). Appraising and amending theories: The strategy of Lakatosian defense and two principles that warrant using it. Psychological Inquiry, 1, 108-141, 173-180.


A Pottery Barn rule for scientific journals

Proposed: Once a journal has published a study, it becomes responsible for publishing direct replications of that study. Publication is subject to editorial review of technical merit but is not dependent on outcome. Replications shall be published as brief reports in an online supplement, linked from the electronic version of the original.

*****

I wrote about this idea a year ago when JPSP refused to publish a paper that failed to replicate one of Daryl Bem’s notorious ESP studies. I discovered, immediately after writing up the blog post, that other people were thinking along similar lines. Since then I have heard versions of the idea come up here and there. And strands of it came up again in David Funder’s post on replication (“[replication] studies should, ideally, be published in the same journal that promulgated the original, misleading conclusion”) and the comments to it. When a lot of people are coming up with similar solutions to a problem, that’s probably a sign of something.

Like a lot of people, I believe that the key to improving our science is incentives. You can finger-wag about the importance of replication all you want, but if there is nowhere to publish and no benefit to trying, you are not going to change behavior. To a large extent, the incentives for individual researchers are controlled through institutions — established journal publishers, professional societies, granting agencies, etc. So if you want to change researchers’ behavior, target those institutions.

Are social psychologists biased against conservatives, or do they just think they are?

A new paper coming out next month by Yoel Inbar and Joris Lammers proposes that some social psychologists discriminate against conservatives in hiring and other professional decisions. Inside Higher Ed has the scoop:

Numerous surveys have found that professors, especially those in some disciplines, are to the left of the general public. But those same — and other — surveys have rarely found evidence that left-leaning academics discriminate on the basis of politics…

A new study, however, challenges that assumption — at least in the field of social psychology… Just over 37 percent of [social psychologists] surveyed said that, given equally qualified candidates for a job, they would support the hiring of a liberal candidate over a conservative candidate. Smaller percentages agreed that a “conservative perspective” would negatively influence their odds of supporting a paper for inclusion in a journal or a proposal for a grant.

Here’s an interesting thing though… social psychology as a field of research is heavily involved in studying implicit biases. And there is a long tradition in social psych of studies showing that people do not have access to the psychological processes that produce these biases and cannot even recognize that they have them.

Here’s an example of the kind of question used as evidence of bias:

For the next set of questions, we are interested in what you think YOU WOULD DO in specific situations.

1. If you were reviewing a research grant application that seemed to you to take a politically conservative perspective, do you think this would negatively influence your decision on the grant application?