In our quest to be taken seriously, college kids are big fans of talking about what “a study said” or what “research showed.” We’re like anyone in that we are drawn to surprising or groundbreaking results in academic research, and many of us also hope to contribute to findings in academia in our futures as well.
But every now and again, we are reminded of the ways research findings can be misrepresented or biased (this month, it was through a New York Times report on biases that come with favoring publication of positive results only). Problems with replication studies and incentives for positive results are things that researchers in academia confront all the time, and undergraduates, who are in many cases researchers-in-training, can start learning what to do about them.
Concerns have been raised plenty of times in the past about how rarely major findings are subjected to replication studies, and about studies that fail to reproduce the results of previous work. Biotech companies like Amgen that have undertaken major replication efforts confirm the original results only a fraction of the time, and widely publicized psychology studies (like the one claiming that children who could resist eating a marshmallow in exchange for a bigger reward later scored higher on the SAT when they were older) have shown much weaker effects when reproduced.
The expectation for researchers to churn out positive results and significant findings is a major part of the problem—“outcome reporting bias” leads positive results to be published at a higher rate than negative ones (you don’t often see the results of a study that found nothing, even if that finding can be meaningful), and “citation bias” leads these positive results to be more widely cited and shared. Researchers needing to keep their careers afloat thus face a steep incentive to find these positive results. This can lead dangerously to “p-hacking,” in which data are selected or manipulated to ensure that results are statistically significant.
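To make concrete how easily p-hacking "works," here is a minimal simulation (the scenario and all function names are my own illustration, not drawn from any real study): a hypothetical researcher measures twenty unrelated outcomes where there is no true effect at all, and reports whichever comparison happens to clear p < 0.05.

```python
import math
import random

def two_sample_p_value(a, b):
    """Two-sided p-value for a difference in means (normal approximation)."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    z = (ma - mb) / math.sqrt(va / na + vb / nb)
    return 1 - math.erf(abs(z) / math.sqrt(2))

def p_hacked_study(rng, n_outcomes=20, n_per_group=30):
    """Test n_outcomes unrelated null comparisons; report only the best p-value."""
    best = 1.0
    for _ in range(n_outcomes):
        control = [rng.gauss(0, 1) for _ in range(n_per_group)]
        treated = [rng.gauss(0, 1) for _ in range(n_per_group)]  # no real effect
        best = min(best, two_sample_p_value(control, treated))
    return best

rng = random.Random(42)
trials = 1000
false_positives = sum(p_hacked_study(rng) < 0.05 for _ in range(trials))
print(f"'Significant' findings despite zero real effect: {false_positives / trials:.0%}")
# With 20 tries at a 5% threshold, roughly 1 - 0.95**20, about 64%, of
# these null studies produce a "publishable" result.
```

Each individual test behaves exactly as it should; the bias comes entirely from selecting the best result after the fact, which is why the practice is so hard to detect in a published paper.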
Our appetite for flashy study findings, and our eagerness to share them, plays into this: the average person would rather read about a surprising new result than about a study that went nowhere (which, again, can actually be just as interesting). If the general public showed more interest in negative results and in studies that find minimal correlation in their data, it could help shift academic pressure away from positive results.
The Wall Street Journal has also covered the idea of undergraduates helping with replication studies, a promising, if limited, approach to the problem. Specifically in the field of psychology, undergraduates could try out those replication studies that are financially and practically feasible in research methods classes. This could help confirm or cast doubt on findings in the field and would also make replication a necessary part of methods education.
Most published studies will not be realistic for undergraduates to try to reproduce, but that does not mean there's nothing to be done at this level of education. Using studies with negative results as class examples and teaching proper statistical analysis (why you shouldn't p-hack 101) will help prepare the next generation of researchers.
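One standard lesson such a class could teach is correcting for multiple comparisons. The sketch below (again, a hypothetical scenario with made-up names) repeats the twenty-outcome setup from the p-hacking discussion above and shows how a Bonferroni adjustment, dividing the significance threshold by the number of tests, brings the false-positive rate back down near the advertised 5 percent.

```python
import math
import random

def two_sample_p_value(a, b):
    """Two-sided p-value for a difference in means (normal approximation)."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    z = (ma - mb) / math.sqrt(va / na + vb / nb)
    return 1 - math.erf(abs(z) / math.sqrt(2))

rng = random.Random(7)
n_outcomes, alpha, trials = 20, 0.05, 1000
naive = corrected = 0
for _ in range(trials):
    # 20 null comparisons per "study": both groups drawn from the same distribution.
    p_values = [
        two_sample_p_value(
            [rng.gauss(0, 1) for _ in range(30)],
            [rng.gauss(0, 1) for _ in range(30)],
        )
        for _ in range(n_outcomes)
    ]
    if min(p_values) < alpha:
        naive += 1                     # cherry-pick the best p-value: biased
    if min(p_values) < alpha / n_outcomes:
        corrected += 1                 # Bonferroni: threshold shared across tests

print(f"naive false-positive rate:     {naive / trials:.0%}")
print(f"corrected false-positive rate: {corrected / trials:.0%}")
```

The correction is deliberately blunt (it sacrifices some power to detect real effects), but it makes the core lesson vivid: the more questions you ask of the same data, the stricter your evidence standard has to be.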
Our fundamental understanding of many topics comes from the findings of academics, and we get into dangerous territory when those findings can’t be reproduced or when positive results are given much more attention. We each play a role in perpetuating bias when we over-sensationalize major findings, but if universities are thorough in their research methods teaching and if the general population is more open to negative results, we can start to overcome it.