I was going to write about how World-War-II-era soldiers who went missing in action can be identified today using DNA technology. Since I am waiting on a bit of fact-checking to come through, and since an interesting article landed in my mailbox today, I thought I’d take the opportunity to fill the void with musings on transparent garbage bags.
Are they cheaper than opaque ones? I never noticed. It may be a “nurture” thing. My family always used opaque garbage bags so I naturally gravitate toward those to this day. I often see transparent ones on the curb and find them somewhat revolting. You can see everything, can’t you? And it ain’t pretty.
It has been a little over two years since a group at Bayer published a scientific paper in Nature Reviews Drug Discovery about the lack of reproducibility in potential drug-target research and halted nearly two-thirds of its target-validation projects1,2. Amgen published similar findings3. These wake-up calls, once a shameful secret amongst researchers, are now trickling down into mainstream media. Earlier this month, The Economist published a fantastic article about how the self-correcting mechanism of science is often exaggerated4. Now the LA Times has released a similar article to its readers, once again painting a not-so-enthusiastic portrait of modern biomedical research as a costly fool’s errand5. The garbage bag of science has now shed its opacity, and everyone can spot the crap.
I am writing this article mainly for non-scientists who are wondering how we let all this happen. The causes of the current situation are multifactorial.
Scientists are first and foremost human beings, not stern practitioners of the scientific method. They have egos, they enjoy wielding power, they may have difficulty admitting wrongs: they are fallible. They have careers to keep on track, families to feed and clothe, and aspirations to realize. They have superiors to answer to, collaborators to please, employees to pay, and students to tutor. Just as the most depraved criminal didn’t start out by massacring a convent, scientists may be tempted to remove an “outlier” on a grant application one day… only to move on to claiming an experiment has already been done when it’s scheduled for next week… to turning a blind eye when their research technician, desperate for a job and a stay in a more humane country, asks them what they expect the result to look like.
My generation was told to go to university because undergraduate and graduate studies were no longer the purview of the elite. Anyone could go to university as long as they worked hard. So we went in droves, creating a shortage of tradespeople. In the wake of this influx of university graduates, the doctoral degree lost its value and became commonplace. To become a principal investigator and open your own research laboratory, you need a post-doctoral fellowship under your belt… or two. Soon, maybe three.
This has created a large number of research groups competing for very little money. And what do humans tend to do in a highly competitive environment? Ask Lance Armstrong. They cut corners. They cheat. It’s all for the good of their research project, you see. We really are on the right track, but this grant application is due today and, well, we could really use this figure for which we don’t have data yet… maybe we’ll make something up. I mean, we’re pretty sure what the results of this experiment are going to be anyway, so what’s the harm? Beyond the occasional cheat, most scientific groups are inexperienced at large-scale data analysis. Biostatisticians are all too rare, which often leaves untrained students conducting complex analyses in the dark. Given the many different ways in which a data set can be analyzed, a positive signal of some sort is bound to emerge from this blind massaging of the data.
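To make that last point concrete, here is a minimal, hypothetical Python sketch (not taken from any of the cited papers): it runs twenty comparisons between two groups drawn from the very same distribution, so there is no real effect to find, and counts how many comparisons nonetheless come out “statistically significant” at the conventional 0.05 threshold. The function name welch_t_pvalue_approx, the sample sizes, and the number of outcomes are my own illustrative choices, and the p-value uses a normal approximation rather than the exact t distribution.

```python
# A minimal sketch of why analyzing the same data in many different ways
# tends to produce a "positive" result by chance alone.
# Assumptions: two groups drawn from the SAME distribution (no true effect),
# 20 hypothetical outcome measures, significance threshold of 0.05.

import math
import random
import statistics

random.seed(42)

def welch_t_pvalue_approx(a, b):
    """Two-sided p-value for a Welch-style t statistic, using a normal
    approximation to the t distribution (adequate for this illustration)."""
    mean_a, mean_b = statistics.mean(a), statistics.mean(b)
    var_a, var_b = statistics.variance(a), statistics.variance(b)
    se = math.sqrt(var_a / len(a) + var_b / len(b))
    t = (mean_a - mean_b) / se
    # Normal-approximation two-sided p-value
    return 2 * (1 - 0.5 * (1 + math.erf(abs(t) / math.sqrt(2))))

n_outcomes = 20   # number of different ways the data set is "analyzed"
alpha = 0.05      # conventional significance threshold
false_positives = 0

for _ in range(n_outcomes):
    control = [random.gauss(0, 1) for _ in range(15)]
    treated = [random.gauss(0, 1) for _ in range(15)]  # same distribution: no real effect
    if welch_t_pvalue_approx(control, treated) < alpha:
        false_positives += 1

print(f"{false_positives} of {n_outcomes} comparisons were 'significant' "
      f"despite there being no real effect")
# Roughly n_outcomes * alpha = 1 spurious hit is expected, and the chance of
# at least one is about 1 - (1 - alpha)**n_outcomes, i.e. around 64%.
```

Run it a few times with different seeds and you will almost always find at least one spurious “hit”, which is exactly the kind of signal an untrained analyst, fishing through a data set, can mistake for a discovery.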
Scientific publications are the currency of researchers worldwide. Those of us in the scientific world have all heard the mantra “publish or perish”. Good science usually takes time, but the current climate is not kind to careful methodologies and reproducibility experiments. As soon as you have something half decent, you must publish it or risk being scooped by the dozens of laboratories that are probably working on that very hypothesis of yours. And if you get scooped, your data becomes nearly meaningless. It’s like arriving at the top of Mount Everest with your country’s flag only to find Sir Edmund Hillary taking pictures of the view. The discovery is no longer yours; it is theirs, and they will get the funding to pursue this research because they were first. And you know they may have cut corners to get there first….
I’m glad that this reality is finally being communicated to the public which, in large part, foots the bill for this research. The problem has become a sticky wad of gum in the works of scientific progress. We keep heralding science as self-correcting, but that’s now like saying that a slow clock is “self-correcting” because it’s accurate one minute out of every day. It is high time that the scientific institutions which have silently accepted, and sometimes even facilitated, this lack of reproducibility enact change. Maybe shaming them in the public arena will provide the right incentive to start changing the culture.
There is currently no “carrot” to publish the results of reproducibility experiments. Most of these results, I am sure, are swept under the carpet when they disagree with the published literature: there is no time to waste putting together an article that may disprove another paper if the project is not going anywhere. Don’t get me wrong: if a scientist disproves a known fact and comes up with a new and interesting hypothesis that can be carried forward, he or she will publish. But if an interesting finding simply cannot be reproduced, most scientists move on. Publications need to be exciting. They need to be novel and game-changing. Hype is no longer the sole domain of the entertainment industry. It is the unspoken goal of contemporary science.
How can we encourage the publication of negative results and reproducibility experiments? Steps are being taken to remedy the situation by creating journals (e.g. The Journal of Negative Results in Biomedicine), projects (e.g. The Reproducibility Initiative) and centres (e.g. The Center for Open Science) dedicated to these long-ignored pillars of the scientific endeavour. Perhaps with enough shaming from the public, these initiatives will be taken more seriously. As it stands, the (unofficial) impact factor of The Journal of Negative Results in Biomedicine is 1.15; compare that to Nature’s 38.597 and you’ll understand that only the fiercest defender of negative result publication will currently put in the effort to publish this data in a dedicated journal.
All this negativity toward science in mainstream media could backfire. It is very easy for members of the public to see this shameful reality and toss the entire scientific enterprise in the “broken” bin. This paves the way for charlatans and peddlers of nonsense to come in with their homeopathic remedies, energy crystals, and “natural” concoctions. This is not a desirable outcome. Despite the all-too-human practice of science, we must not lose sight of the fact that science is still, by far, the best system of knowledge gathering and integration we have. There is nothing wrong with the scientific method, but there is much wrong with the way researchers have been using it of late. The principles are sound; the practices are deficient.
These scientific inaccuracies are often the result of a system that pressures scientists to excel beyond reasonable expectations. We need to fix the system. Maybe we’re due a public shaming.
(Feature picture by bruckerrlb)
1. Prinz F, Schlange T, Asadullah K. 2011. “Believe it or not: how much can we rely on published data on potential drug targets?” Nat Rev Drug Discov 10(9):712. Available at http://www.nature.com/nrd/journal/v10/n9/full/nrd3439-c1.html.
2. Brian Owens for Nature News Blogs. “Reliability of ‘new drug target’ claims called into question.” Accessed November 1, 2013. http://www.nature.com/nrd/journal/v10/n9/full/nrd3439-c1.html.
3. Begley CG, Ellis LM. 2012. “Drug development: Raise standards for preclinical cancer research.” Nature 483(7391):531-3. Behind paywall.
4. The Economist. “Unreliable research: Trouble at the lab.” Accessed November 1, 2013. http://www.economist.com/news/briefing/21588057-scientists-think-science-self-correcting-alarming-degree-it-not-trouble.
5. Los Angeles Times. “Science has lost its way, at a big cost to humanity.” Accessed November 1, 2013. http://www.latimes.com/business/la-fi-hiltzik-20131027,0,1228881.column#axzz2jQlvwdWe.