Imagine doing a science experiment in an interrogation room while facing a two-way mirror. The people on the other side know who you are, but you have no idea who is judging you.
This, in a nutshell, is how scientific articles in fields like molecular biology and genetics have traditionally been reviewed prior to publication. Your peers receive a copy of your manuscript that includes the names and affiliations of every author; when you receive their comments, however, the reviewers are known only as “Reviewer #1”, “Reviewer #2”, and “Reviewer #3”. Some principal investigators may engage in Holmesian inductions to try to identify the reviewer who is asking them to redo an entire six-month experiment because it lacked replication, but reviewers are protected by a blanket of anonymity that is usually hard to pull back.
Their namelessness is not the subject of this article. Rather, I recently learned that certain fields, such as psychology, have long pushed this idea further by also anonymizing the research group to its reviewers. This process is known as “double-blinded peer review”. It mirrors the methodology behind placebo-controlled, double-blinded, randomized clinical trials, the gold standard in evidence-based medicine, in which neither the treating clinician nor the patient knows which treatment is being administered.
The theoretical advantage of double-blinded peer review is that famous scientists and unknown research groups from emerging countries alike are judged on the same footing, but does the process really make a difference? I found an interesting article, published in 2006 in the Journal of the American Medical Association (JAMA), that put this methodology to the test.
Every year, the American Heart Association (AHA) holds its Scientific Sessions, a research meeting at which medical scientists from all over the world come to share their most recent results and to potentially establish new research partnerships. To be allowed to present (either as a commoner with a cardboard poster or as near-royalty with an oral presentation), one has to submit an abstract of one’s findings to the meeting’s review committee. This abstract is a short paragraph summarizing why the research was pursued, how it was done, and what was found. An example from a publication of mine can be found here.
How is an abstract judged worthy of acceptance to the AHA meeting? One would hope on the strength of its research alone, but it is all too easy to wonder whether the prestige of the affiliated institution (e.g. Yale University versus the University of Botswana) and that of its lead author (e.g. Francis Collins versus… well, me, for example) might bias the committee.
Luckily for the authors of the JAMA paper, the AHA switched to a double-blinded review system between 2001 and 2002. The authors thus compared the abstracts accepted in 2000-2001 with those accepted in 2002-2004. Did blinding reviewers to who was submitting these abstracts influence who got in?
Perhaps the most striking number concerns institutional prestige. When reviewers knew where the authors conducted their research, the acceptance rate for abstracts coming out of highly prestigious institutions was 51.3%. When reviewers were blinded to this information, the rate dropped to 38.8%.
(Prestige was calculated using “a composite score based on the mean monetary value of research and training grants and contracts funded by the National Institutes of Health (NIH) for fiscal years 2000 through 2004 and the mean ‘heart and heart surgery’ hospital rankings by US News & World Report from 2000 through 2004”.)
Interestingly, the authors found no evidence of a gender bias among U.S. authors: female American authors were just as likely as their male counterparts to have their abstracts accepted, regardless of the review process used. But did the old U.S. of A. give its authors a bit of an edge in the competition? Indeed, when an American affiliation was known to the reviewers, abstracts were accepted 40.8% of the time; when it was unknown, the number dropped to 33.4%.
Has double-blinded peer review been widely adopted by biomedical journals since then? While I could not find numbers on this, the publishing behemoth Nature very recently dipped a toe in the double-blinded peer review pool. It now allows authors to opt into such a system for two of its journals, Nature Geoscience and Nature Climate Change.
The main argument against investing effort in double-blindness is that a reviewer can often guess who the main author is from the type of research being done and the references cited in the paper. If a research group previously published a paper on a new form of muscular dystrophy found only in its neck of the woods, a second paper identifying the gene responsible for it would probably come from the same group. In such a case, a logical induction is almost promoted to the level of a deduction.
However, this argument is predicated on the assumption that such guesswork undoes the benefits of blinding. Yes, a double-blinded system of review is not perfect: the old warhorses will probably figure out whose paper they are reviewing at least half of the time. But if blinding can dampen the bias one develops from merely glancing at the words “Harvard University” or “Eric Kandel” (and the JAMA paper shows that it can indeed make a difference), then I would argue it is the responsibility of every scientific journal to investigate how it can move from a single-blinded peer-review process to one that is a little less imperfect.