Written by Dagny Yenko
Illustration by Sophia Dumlao
Published 2020 December 21
While scrolling through my Twitter feed one day, I came across a circulating thread by user @JMRLudan about research expectations in science high schools that echoed the sentiments of many students, myself included. Throughout my time at Philippine Science High School, classes dedicated to STEM research were always compulsory. For three years, I went through the full-blown experience multiple times: proposals, manuscripts, never-ending laboratory trips, data analysis, and the many levels of revisions and panel defenses. Trust me when I say that it is a horrible ordeal for a teenager to go through. Alarmingly, this is not an isolated case, judging by the many who resonated with the tweet. However, it is not the research itself that bred my resentment.
First, we must consider why the education system pushes for research outputs. The “publish or perish” culture, the pressure born from equating success with the publication of academic work, is incredibly prevalent. How can it not be, when most universities judge an individual’s merit by the number of their publications? On top of that, university rankings are heavily based on institutions’ research outputs, which earn them prestige and reputation.[3, 4] In fact, amidst desperate calls from students and faculty to end the semester, the University of the Philippines proudly announced last November 25 that it had placed 69th in Quacquarelli Symonds’ latest Asia University Rankings.[5] Nice, but beyond the prestige and reputation, I have to ask: why does it matter? Because of this pressure to publish, systems are inclined to value quantity over quality.
It is easy to say that quantity and quality are not mutually exclusive, but we must also consider what a study needs in order to be published. Since high school, we have been pressed to produce “positive results”. The outcome must be desirable, which often means statistically significant results that are more likely to be cited for being “more interesting and readable”. Otherwise, your paper might never get published. In my high school, batchmates were urged to redo their experiments countless times just to attain those coveted positive results. This inclination is publication bias. It breeds data manipulation and dishonesty, as authors alter their data to present positive results. After all, they want to be published and recognized by universities.
That is not all, however. There also exists a bias towards novel research over replication studies. My sister, a non-STEM high school student, was recently required to propose an innovative research project. Every proposal that “had already been done” was rejected. Apart from how ridiculous this demand is in a quarantine setup, it is vastly ignorant of the importance of replication. Redoing a study verifies its integrity: that the results are true, free from errors, and not merely coincidental. Unfortunately, replications are not quite citable, as journals favor the innovative, and the institutions follow.
Hence, the replication crisis. In 2015, the Open Science Collaboration failed to replicate a good chunk of a hundred studies from three psychology journals, sparking questions about the validity of peer-reviewed research.[7] As laboratories across various fields followed suit, countless other studies failed the replication test as well. Many supposedly established facts were then called into question. If only a few studies exist on a particular topic, can one really say it is true?
The non-publication of negative results and replication studies has serious consequences for the scientific pursuit of truth. Authors may squander time and resources testing a hypothesis that already yielded negative results in an unpublished study, only to arrive, presumably, at the same negative conclusions. Under the current research culture, none of these will be reported. Under the pressure of positive bias, possibly erroneous conclusions may be drawn, and without replication, they may never be checked. How, then, are we to arrive at the truth, the very purpose of conducting research, if publications and institutions continue to cultivate publish-or-perish and favor positive and innovative studies?
So we have traced the system, or at least a big portion of it, that hinders the scientific community from truly moving forward. What now? As a researcher, it might be insightful to look into the slow science movement, which holds that “science needs time” and encourages scientists to think, to read, to fail, and to develop at their own pace.[9, 10, 11] Slowing down means ensuring the veracity of data, as opposed to chasing fast, hot topics like magic pills, diet fads, and parallel universes that may not even be accurate or reliable. We must not give in to the pressure of publication bias. There are journals and organizations making an effort to dispel these biases, such as Biochemia Medica and the Open Science Framework (OSF).[6, 8] We must urge our institutions to quit the culture of competition. We must propagate the idea that university rankings and judging an academic’s merit by publication count contribute little in the grand scheme of things. Think about it: what is research for, and who does it serve? Certainly not the needlessly fierce competition among universities and journal publications.
1. Ludan JMR. research in pisay is actually broken. I don’t know why they expect literal teenagers to come up with “novel” research ideas during a time period when they’re still learning the basics of their field of interest. ->thread [Internet]. 2020 Nov 11 [cited 2020 Nov 26]. Available from: https://twitter.com/JMRLudan/status/1326394717798543365
2. Rawat S, Meena S. Publish or perish: Where are we heading? J Res Med Sci. 2014 Feb;19(2):87-89.
3. Altbach PG, Hazelkorn E. Why most universities should quit the rankings game. University World News [Internet]. 2017 Jan 8 [cited 2020 Nov 26]. Available from: https://www.universityworldnews.com/post.php?story=20170105122700949
4. Teferra D. Tempest in the Rankings Teapot. Inside Higher Ed [Internet]. 2017 Jun 25 [cited 2020 Nov 26]. Available from: https://www.insidehighered.com/blogs/world-view/tempest-rankings-teapot
5. Lontoc JFB. UP is Asia’s 69th best, 52nd in academic reputation—QS. University of the Philippines [Internet]. 2020 Nov 25 [cited 2020 Nov 26]. Available from: https://www.up.edu.ph/up-is-asias-69th-best-52nd-in-academic-reputation-qs/
6. Mlinarić A, Horvat M, Smolčić VS. Dealing with the positive publication bias: Why you should really publish your negative results. Biochem Med [Internet]. 2017 Oct 15 [cited 2020 Nov 27];27(3). Available from: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5696751/ DOI: 10.11613/BM.2017.030201
7. Open Science Collaboration. Estimating the reproducibility of psychological science. Science. 2015 Aug;349(6251):aac4716. DOI: 10.1126/science.aac4716
8. Woodell A. Leaning into the replication crisis: Why you should consider conducting replication research. American Psychological Association [Internet]. 2020 Mar [cited 2020 Nov 28]. Available from: https://www.apa.org/ed/precollege/psn/2020/03/replication-crisis
9. Footprint Choices. The Slow Movement: Making a Connection [Internet]. Australia: Footprint Choices; 2011 [cited 2020 Dec 1]. Available from: https://www.slowmovement.com/
10. The World Institute of Slowness. The World Institute of Slowness [Internet]. 2018 [cited 2020 Dec 1]. Available from: https://www.theworldinstituteofslowness.com/
11. The Slow Science Academy. SLOW-SCIENCE.org [Internet]. Berlin, Germany: The Slow Science Academy; 2010 [cited 2020 Dec 1]. Available from: http://slow-science.org/