The information in the following article was obtained from an in-person interview conducted by the author with Dr. Brian Nosek of the University of Virginia, co-founder and executive director of the Center for Open Science.
In the past 30 years, researchers have published more than half of all scientific publications in existence. These papers are overwhelmingly biased toward novel, positive results. To the casual reader or average citizen, this could be interpreted as a massive victory for humanity’s improved understanding of the natural world. To many researchers, including Dr. Brian Nosek at the University of Virginia, the massive influx of scientific papers in recent history tells a more troubling story.
The vast majority of researchers agree that scientific research in every field should be transparent and trustworthy. Yet the scientific method is not always aligned with widespread scientific practice. How the scientific community thinks science should operate isn’t necessarily how scientists and researchers are incentivized to behave in order to survive and thrive in their work. According to Dr. Nosek, two core assumptions have always defined “good science”: 1) claims are based on sufficient evidence, meaning the data actually support the conclusion; and 2) the process can be reproduced with similar results, sustaining the validity of the claim. These assumptions are what make science, science. But they are not what researchers always practice.
The problem is that the scientific community at large has created a dangerous culture that prioritizes unique, groundbreaking results over negative results produced by competent methods. Publications are the currency of researchers, allowing them to apply for and obtain grants to publish even more studies. New and novel results can springboard a researcher’s career (e.g., earning tenure), and journal editors are looking for the next sensation or “big thing.” But since not all experiments turn out as hypothesized, researchers rush to publish frequently in hopes of finally catching their big break. Some even resort to “p-hacking”: post-hoc data mining to uncover patterns that can be presented as statistically significant without first devising a hypothesis for the proposed correlation. Others are so desperate that they seek out fake and discredited journals and pay a fee to have their research published, a sign that the situation is in danger of spiraling out of control.
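To make the mechanics of p-hacking concrete, here is a minimal simulation sketch (not drawn from the interview; the sample sizes and variable counts are illustrative assumptions). It generates purely random data, mines it for correlations across many unrelated variables, and counts how often at least one comparison looks “statistically significant” by chance alone.

```python
# Illustrative sketch of why post-hoc data mining produces spurious findings.
# All numbers here are assumptions chosen for demonstration, not real study data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_subjects, n_variables, n_simulations = 50, 20, 1000

false_hits = 0
for _ in range(n_simulations):
    outcome = rng.normal(size=n_subjects)                    # random "outcome" with no real effects
    predictors = rng.normal(size=(n_subjects, n_variables))  # 20 unrelated measurements
    # Post-hoc mining: test every predictor against the outcome, keep the best p-value.
    p_values = [stats.pearsonr(predictors[:, j], outcome)[1] for j in range(n_variables)]
    if min(p_values) < 0.05:
        false_hits += 1

print(f"Datasets with at least one 'significant' finding: {false_hits / n_simulations:.0%}")
# With 20 unrelated variables, roughly 1 - 0.95**20 ≈ 64% of these random datasets
# yield a publishable-looking correlation even though no real effect exists.
```

The point of the sketch is simply that significance thresholds only mean what they claim when the hypothesis is fixed before the data are explored, which is exactly what pre-registration enforces.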
Consequently, there is little personal incentive for researchers to share their data or methods. There is no incentive to be transparent, even though transparency is a necessity for the collective advancement of knowledge. There is no incentive to spend time publishing negative results or replication studies, even if that information would save other researchers time, money, and resources (the resulting pile of unpublished null results is known as the “file drawer effect”). Instead, the self-interested goal of research is to find something new and exciting. Even a researcher who wanted to do what is morally right and collectively beneficial would only hurt his or her career by publishing null results or spending time on transparency. Journals face the same incentive pressures. Top scientific journals operate and survive under a single metric: impact factor. Editors and publishers want to maximize their journal’s prestige, relevance, and subscriptions. Taking the moral high ground would mean sacrificing valuable journal space on results that are not eye-catching.
The scientific community is responsible for creating this culture, and we are responsible for shifting the norms we have inadvertently built for ourselves. To engineer a culture change without inordinate risk, each party has to take incremental steps toward increasing the legitimacy of science. In a coordinated effort, some journals have begun to offer “badges” of recognition to studies whose procedures and hypotheses were registered before the experiment was conducted, dissuading researchers from p-hacking. These studies are entered into a publicly accessible database whether or not they are ultimately published, giving other researchers access to null and replication studies that may not have made the cut for print. Other journals have changed their publishing criteria to exclude the nature of the results (positive or negative), instead assessing submissions on the quality of their methods and procedures. Going further, journals such as eLife and Nature Human Behaviour have begun to offer registered reports: commitments to publish a study regardless of its outcome, provided the methodology is sound.
These changes are encouraging and bring the focus back to methodology and the quality of the research being performed. While there is concern that these more “lenient” journals create a lower tier of publications, there is real hope that more prestigious journals like Science or Nature will follow suit and discover that they lose nothing by doing so. Additionally, organizations such as the Center for Open Science, based in Charlottesville, Virginia, conduct research on the causes and extent of these issues, devising interventions and incentives for journals, researchers, and the public to get involved in making research more transparent. Co-founded by Dr. Nosek and funded by the NIH, NSF, DARPA, and many other backers, the Center for Open Science also works to train and educate the next generation of researchers to become “incremental revolutionaries,” offering educational grants to young researchers who follow its protocols for public access and registered studies. This effort to restore credibility to research is essential.
For those familiar with scientific research, results that contradict each other are nothing to fear; there will always be false-positive findings. To the general public, however, contradiction can easily translate into a loss of faith in science, a lack of trust that leaves the door open for entities such as the tobacco industry, “anti-vaxxers,” and climate change deniers to point at a few outlier studies and shout “The science isn’t in yet!” to justify detrimental behaviors. The problem is exacerbated by the dominance of social media as a news source, where most readers neglect to fact-check or even read the whole article. As humans, we are drawn to the hooks of headlines that exaggerate the research, yet we accept them as fact. If the scientific community cannot resolve the reliability of its publications, it will only stoke the flames of skeptics and erode the integrity of science as a whole. Research is conducted in a genuine effort to find what is true, and contradictory results are simply part of that process. The scientific community’s task is to communicate to the general public why it has to be that way.
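A short back-of-the-envelope calculation can show why some false positives, and therefore some contradictions, are unavoidable even when every study is run honestly. The rates below are illustrative assumptions, not figures from the interview or from the Center for Open Science.

```python
# Hypothetical rates chosen only to illustrate the arithmetic of false positives.
true_effect_rate = 0.10   # assume 10% of tested hypotheses are actually true
power = 0.80              # assumed chance that a real effect is detected
alpha = 0.05              # chance that a null effect still "reaches significance"

true_positives = true_effect_rate * power           # 0.08 of all studies
false_positives = (1 - true_effect_rate) * alpha    # 0.045 of all studies

share_false = false_positives / (true_positives + false_positives)
print(f"Share of positive (publishable) results that are false: {share_false:.0%}")
# ≈ 36% under these assumptions: if journals publish mostly positive results,
# a sizable fraction of the literature will later fail to replicate,
# which is expected behavior, not evidence that science is broken.
```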
A key part of science is trust. This is not a cause for distress or crisis; rather, it is what is great about science. Science does not progress by proving things to be true. Science progresses by reducing error and uncertainty, with the fundamental understanding that our knowledge of the world is incomplete and perhaps even incorrect. Confronting uncertainty and reducing it to increase our confidence in existing paradigms is the best that science can do. No single paper holds the complete “truth,” but from scientist to scientist and from scientist to the public, a demand for transparency, public access, and a willingness to change is needed to restore faith in the scientific process.
References:
- Nosek, Brian. "Publication Ethics." Personal interview. 15 Feb. 2017.