When people ask me what I do, I talk about being a professor and a social psychologist, but first and foremost I am a scientist, though probably not the type with a white lab coat and microscopes you had in mind. My colleagues and I use scientific perspectives and methods to investigate topics like interpersonal relationships, dreams, and morality. Our fields depend on the integrity of the scientific process (generating hypotheses, testing them with sound methods and measures, and running analyses). This article is about a crucial element of relationship science that, until recently, our journal editors have somewhat neglected: replication.
Replication refers to the process of repeating or reproducing others’ scientific work, using the original researchers’ methods and procedures. Let’s say some scientists make a discovery and publish it (for example, the finding that people value traits like “warmth” and “reliability” more highly than “curiosity” and “energy” in a potential spouse). Other scientists should be able to follow the same protocol (like the same survey questions) and get the same results. In other words, they should be able to duplicate the original findings. This process (known as “direct replication”) is like cloning an experimental procedure with different participants. Pretty straightforward, right? And it’s very useful too. Scientists can check each other’s work and make sure that the original results weren’t a fluke (since we rely on probability and statistics to analyze data, we never “prove” anything with 100% certainty). Or they may show that the previous results only occur under very specific conditions.
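To see why a single “significant” result can be a fluke, and why a direct replication filters most flukes out, here is a toy simulation (a hypothetical sketch in Python, not drawn from any study discussed here). It runs thousands of two-group experiments in which the true effect is exactly zero, counts how often chance alone produces a “significant” difference, and then counts how often such a chance finding would also survive an independent direct replication:

```python
import random

random.seed(42)

def fake_experiment(n=50):
    """Simulate a two-group study where the true effect is zero.

    Returns True if the group difference looks 'significant',
    i.e., the mean difference exceeds ~1.96 standard errors
    (roughly a 5% false-positive rate by chance alone).
    """
    a = [random.gauss(0, 1) for _ in range(n)]
    b = [random.gauss(0, 1) for _ in range(n)]
    diff = sum(a) / n - sum(b) / n
    se = (2 / n) ** 0.5  # standard error of the difference of two means
    return abs(diff) > 1.96 * se

trials = 10_000

# How often does a single null experiment come out "significant"?
flukes = sum(fake_experiment() for _ in range(trials))

# How often would a fluke ALSO "replicate" in an independent second study?
replicated = sum(fake_experiment() and fake_experiment() for _ in range(trials))

print(f"False positives in one study:       {flukes / trials:.1%}")
print(f"Fluke plus fluke replication:       {replicated / trials:.1%}")
```

The first rate lands near 5% and the second near 0.25% (since two independent flukes must line up), which is the statistical intuition behind requiring direct replications before trusting a finding.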
Well, as valuable as direct replications are, scientists don’t always prioritize them. For one thing, many of us would prefer to work on new ideas that could advance the field beyond what we already know, rather than retracing someone else’s steps. It’s like how song covers are less common in the world of music; artists and bands mostly perform original songs. Scientists are rewarded greatly for innovative findings (I’ve never heard of someone winning a Nobel Prize for reproducing what others have already discovered). Furthermore, the public (including all you wonderful readers at Science of Relationships) is hungry for new findings as well. New discoveries make for exciting headlines, and perhaps for this reason, academic journal editors prefer not to publish direct replications.
However, developments in some other fields (e.g., biology and medicine) reveal that a staggering number of findings could not be directly replicated when scientists tried to reproduce them. Biomedical researchers found that only 6 out of 53 landmark studies in pre-clinical cancer research could be replicated (meaning that 47 of the 53 studies could not be reproduced). This is a big problem, and it isn’t easily corrected. It doesn’t mean that we should discount advances in cancer research, but it does mean that we should devote more resources to, and place a higher priority on, replication studies.
There are myriad reasons why specific results may not replicate. It could be that:
- The samples of participants differed in some notable way (e.g., geography or culture),
- Human error occurred (e.g., mistakes in data collection or entry by the research team),
- The scientists chose to analyze their data differently,
- The procedures or measures were unreliable, or
- Most egregiously (and most rarely), scientific misconduct or fraud was at play.
Remember, just because scientists fail to replicate a specific finding doesn’t mean that the original finding is false—it means we need to do further research to figure out why it wasn’t replicated. This requires more work on everyone’s part.
More recently, psychologists have been trying to replicate some classic “priming” studies (see more here and here on this topic), with mixed results; some replication attempts have succeeded, while others have failed. Priming occurs when scientists activate a concept in people’s minds; that activation spreads to related concepts and influences other thoughts, feelings, and behavior. For example, if I say the word “love,” the word “marriage” becomes activated in your mind because those two concepts are related (in comparison to how much “love” activates an unrelated word like “lamp”). Priming effects are often among the most thrilling and eye-catching pieces of research, yet they may also be among the most fragile: they are notoriously difficult to replicate given the nuances and intricacies of the human mind. This means that researchers may need to put in extra effort when testing potential replications of priming effects.
As a researcher who has utilized priming methods, this concerns me, and as a scientist, I’ve been trained to be skeptical and critical even of my own work. I recently wrote about research I did with my colleague Markus Maier, using “attachment priming.” We included two studies in our paper to show consistent results (the second study was a direct replication of the first). In theory, other scientists should be able to use our procedures and get the same results independently. If someone else came along and failed to replicate my experiments, I would encourage the publication of those results, even though they would present a challenge to my original research, because that is how good science works and that is what makes our field strong. In the scientific community, we all collectively contribute toward a common goal of advancing knowledge. As my graduate advisor Everett Waters once said to me, “the world is always more interesting the way it really is than the way you want it to be.”
Fortunately, the tide is slowly (but noticeably) turning. A top journal in our field, Psychological Science, renowned for its brief and eye-catching studies, will be publishing a new article that reports results from two experiments that failed to replicate past findings. This is newsworthy because (as I mentioned before) journals typically don’t publish replications, regardless of the outcome. In this case, the original study1 showed that when anxiously attached individuals were primed to feel a relationship threat (e.g., thinking about a breakup), they were more likely to seek physical warmth as an “embodied” substitute for emotional warmth (e.g., preferring hot foods like soup as opposed to room-temperature foods like chips). But other researchers (Etienne LeBel and Lorne Campbell) were not able to replicate that finding.2 What does this mean? Well, at a minimum, it means the original theory might be valid, but those specific effects need to be reconsidered. Perhaps food/temperature preferences don’t correspond directly to attachment anxiety the way we thought. Importantly, the original study’s author, Matthew Vess, should be commended for his cooperation and open, cordial communication with the other researchers. This is a very encouraging development for relationship science and psychology as a whole, and is hopefully the beginning of a broader trend towards more direct replications.
I’m also thankful for role models and pioneers in the field, like Brian Nosek and Jeff Spies of UVA, who are spearheading the “Reproducibility Project” (and more broadly, the newly founded Center for Open Science). These projects are designed in part to gauge the percentage of published findings in psychology that are reproducible, and in part to help researchers recognize where they can improve their scientific practices. Hopefully, this project will illuminate the issue and spur journal editors to publish more replication studies. I am a participating scientist on this project; my colleague Sean Mackinnon and my team of RAs (led by Elizabeth Chagnon) are working with me in an attempt to replicate results from a 2008 study on romantic attraction (I’ll let you know how those results turn out in a few months). The Reproducibility Project now has dozens of participating psychologists working on replication attempts worldwide. I hope that our collective efforts will help usher in a new and improved era of “scientific utopia.”
1Vess, M. (2012). Warm thoughts: Attachment anxiety and sensitivity to temperature cues. Psychological Science, 23, 472-474.
2LeBel, E. P., & Campbell, L. (in press). Heightened sensitivity to temperature cues in highly anxiously attached individuals: Real or elusive phenomenon? Psychological Science.
Dr. Dylan Selterman – Science of Relationships articles | Website/CV
Dr. Selterman’s research focuses on secure vs. insecure personality in relationships. He studies how people dream about their partners (and alternatives), and how dreams influence behavior. In addition, Dr. Selterman studies secure base support in couples, jealousy, morality, and autobiographical memory.