Tag Archives: research methods

The Psychology of the "Psychology Isn’t a Science" Argument – Michael Kraus (Psych Your Mind)

Tom knows a pseudoscience when he sees one! (wikipedia.org)
Every so often the internet is set ablaze with opinion pieces on a familiar question: Are "soft" sciences, like psychology, actually science? Most of the time the argument against psychology as a science comes from people in the so-called harder sciences (you know, people who don't know ish about psychology). Of course, every once in a while we throw ourselves under the bus by declaring that for our softer sciences to be taken seriously, we must be more like the real sciences. You're still reading this, so most likely you're interested in my opinion on this topic. With a quick nod to others who have covered this topic here, here, here, and here, let's review some of the arguments for and against psychology as a science in what follows.
Read More->

Quality v. Quantity in Publication – Michael Kraus (Psych Your Mind)

Einstein says Quality not Quantity (source)
I was on twitter the other day (mwkraus, why aren't you following me?) and my twitter feed displayed a great quote from Albert Einstein with some important career advice for aspiring scientists: He said something like "a career in which one is forced to produce scientific writings in great amounts creates a danger of intellectual superficiality." This quote got me wondering about the career trajectories of aspiring social psychologists, and the tension between wanting to publish as much as possible, and wanting to publish only the very best research. I consider this tension in today's blog.
Read More->

This is NOT advice for first year faculty – Michael Kraus (Psych Your Mind)

Hello again, PYM readers. It is now June and I just finished my first full academic year as a faculty member at the University of Illinois, Urbana-Champaign (Chambana). Having just passed through the rabbit hole, I have returned mostly unscathed to blog a bit about my experience. As this is just my first year, I don't have any advice that will help others who are transitioning to professor-hood; rather, this post reflects some of the things that I think people (like me) deal with during their transition to a new faculty job. Onward!

Read More->

Have Your Cake and Eat It Too! Practical Reform in Social Psychology – Michael Kraus (Psych Your Mind)

The cake we can (1) have, and (2) eat!
If you have been following recent headlines in the social sciences then you are aware that the field of social psychology has been in some rough water over the past three years. In this time period, we've had our flagship journal publish a series of studies providing evidence that ESP exists (and then refuse to publish non-replications of these studies). We've suffered through at least three instances of scientific fraud perpetrated by high profile researchers who engaged in egregious scientific misconduct. We've had an entire popular area of research come under attack because researchers have failed to replicate its effects. And several respected members of the science community have had some harsh words to say about the discipline and its methods.

Listing all of these events in succession makes me feel a bit ashamed to call myself a social psychologist. Clearly our field has been lacking both oversight and leadership if all of this could happen in such a brief period. Now, I'm not one to tuck my tail between my legs. Instead, I've decided to look ahead. I think there are relatively simple changes that social psychologists (even ones without tenure) can make in their research that can shore up our science going forward.
Read More->

Does (effect) Size Matter? – David Funder (funderstorms)

Personality psychologists wallow in effect size; the ubiquitous correlation coefficient, Pearson’s r, is central to nearly every research finding they report.  As a consequence, discussions of relationships between personality variables and outcomes are routinely framed by assessments of their strength.  For example, a landmark paper reviewed predictors of divorce, mortality, and occupational achievement, and concluded that personality traits have associations with these life outcomes that are as strong as or stronger than traditional predictors such as socio-economic status or cognitive ability (Roberts et al., 2007).  This is just one example of how personality psychologists routinely calculate, care about, and even sometimes worry about the size of the relationships between their theoretical variables and their predicted outcomes.

Social psychologists, not so much.  The typical report in experimental social psychology focuses on the p-level: the probability of obtaining a difference between experimental groups at least as large as the one observed, if the null hypothesis of no difference were true.   If this probability is .05 or less, then: Success!  While effect sizes (usually Cohen's d or, less often, Pearson's r) are reported more often than they used to be – probably because the APA Publication Manual explicitly requires it (a requirement not always enforced) – the discussion of the theoretical or even practical importance of the effect typically centers on whether it exists at all.  The size simply doesn't matter.
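The gap between "it exists" and "it matters" is easy to see numerically. The following is a minimal pure-Python sketch (not from the original post; it uses a normal approximation to the two-sample t test, which is reasonable for large groups): with enough participants, a trivially small effect of Cohen's d ≈ 0.1 still clears the p < .05 bar, while the same effect in a small sample does not.

```python
import math

def cohens_d(m1, m2, sd1, sd2, n1, n2):
    """Standardized mean difference between two independent groups."""
    # Pooled standard deviation
    sp = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    return (m1 - m2) / sp

def two_tailed_p(m1, m2, sd1, sd2, n1, n2):
    """Two-tailed p for the group difference (normal approximation)."""
    se = math.sqrt(sd1**2 / n1 + sd2**2 / n2)  # standard error of the difference
    z = (m1 - m2) / se
    return math.erfc(abs(z) / math.sqrt(2))    # P(|Z| >= |z|) under N(0,1)

# Same tiny effect (d = 0.1), two different sample sizes:
d = cohens_d(10.1, 10.0, 1.0, 1.0, 2000, 2000)
print(round(d, 2))                                      # 0.1 — a small effect
print(two_tailed_p(10.1, 10.0, 1.0, 1.0, 2000, 2000) < .05)  # True: "significant"
print(two_tailed_p(10.1, 10.0, 1.0, 1.0, 20, 20) < .05)      # False: same effect, small n
```

The point of the sketch: the p-level confounds effect size with sample size, so "p < .05" by itself says nothing about whether the effect is large enough to care about.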

Is this description an unfair caricature of social psychological research practice?  That’s what I thought until recently. Continue reading

Science Utopia (Continued): Methods Integrity Workshop – Michael Kraus (Psych Your Mind)

"Winter is coming." --Ned Stark/Greg Francis
On Friday afternoon I attended a seminar on methods integrity in research (here). The speakers were Hal Pashler of UC San Diego and Greg Francis of Purdue University. In the seminar, the speakers raised a number of interesting points that I think add to last week's post on PYM about questionable research practices (here). I'll summarize the main points that I took from the seminar:

Read More->

Science Utopia: Some Thoughts About Ethics and Publication Bias – Michael Kraus (Psych Your Mind)

Science Utopia, next exit
Psychology's integrity in the public eye has been rocked by recent high profile discoveries of data fabrication (here, here, and here) and several independent realizations that psychologists (this is not unique to our field) tend to engage in data analytic practices that make it easier to find positive results (here, here, and here). While it can be argued that these are not really new realizations (here), the net effect has turned psychologists toward the important question: How do we reform our science?

It's a hard question to answer in one empirical article, or one blog post, and so that's not the focus here. Instead, what I'd like to do is simply point out what I think are the most promising changes that we, as a science, can adopt right now to move toward a solution that will help prevent future data fabrication or the use of biased hypothesis tests. These are not my ideas, mind you; rather, they are ideas brought up in the many discussions of research reform (online and in person) that I have had, formally and informally, with my colleagues. Where possible, I link to the relevant sources for additional information.

Read More->