After the experiments are designed, the hypotheses tested, and the data analyzed, a lucky minority of research projects culminate in a manuscript that their creators deem worthy of submission for publication. How does a scientific manuscript then become a published scientific paper?
Science, like politics, is an all-too-human undertaking. It should come as no surprise that the process that transforms a manuscript into a published paper is about as convoluted as the one that turns a bill into a law. The review process for scientific manuscripts is consequential for the people and the science involved. The decisions that lead to publication or non-publication of manuscripts shape the careers of scientists and, indirectly, the kinds of scientific questions that will be asked in the future.
Let’s take as an instructive example a manuscript that I helped write with a collaborating colleague, which includes data from experiments I did while I was a graduate student in David (Davo) Mangelsdorf’s lab at UT Southwestern. The manuscript is in many ways typical of today’s research reports. It includes data obtained using a variety of techniques (X-ray crystallography, computational modeling of structures, cellular and molecular functional assays, etc.) and combines the efforts of two groups with complementary sets of technical expertise. I have a copy of it sitting in front of me. There are about 45 pages in total, 6 figures, and 7 supplementary figures. The figures are a mixture of pictures of protein structures, bar graphs, and line graphs.
Together, the data tell a story about how a certain type of protein is able to recognize a particular small molecule (like a drug, for example) at one place in its 3-dimensional structure and respond to that binding event by changing its shape and functional properties at a far-away part of its structure. What makes this even more complicated is that closely related but distinct members of the protein family (small-molecule receptors that are also transcription factors, called nuclear receptors) can bind together in pairs and perform this trick in tandem. One protein in the pair can bind to its small molecule and signal its partner receptor to change its shape and function. This is an example of a fascinating phenomenon called allostery. I worked to better understand allostery in nuclear receptors, the subject of my doctoral thesis. Our manuscript extends the understanding of the mechanistic details of how the molecular action-at-a-distance takes place. I’ll say right now, the results are not earth-shattering. It is solid work; the data move the field forward and would be of interest to academic and drug-industry scientists trying to develop better drugs that bind to this class of receptors (drugs related to steroid hormones are important tools in the treatment of many diseases).
I met my collaborator, let’s call him C, at a Keystone Conference in Utah. We were lucky in that the nuclear receptor field is small and friendly. Our big biennial meeting was always held at a western ski resort in peak early-spring season. My first day on skis was with my boss and his old post-doc mentor, Ron Evans, the father of the field who will be a Nobel Laureate someday soon. Another one of the big shots was based at the Karolinska in Stockholm and would always show up with at least a dozen gorgeous, blond students and post-docs who loved disco. Suffice it to say, it was a fun meeting. C and I shared an interest in this allostery phenomenon and had a lot to discuss over our poster presentations and beers. Two years later, when the next edition of the conference came around in Colorado, we had both completed our projects and miraculously had seen them published – mine in Cell (after a rejection from Nature). We were lucky to be two of the very few students invited to give short talks at the meeting. Those 15 minutes were highlights of my time in the lab. Based on discussions at that meeting and over email, C suggested a number of experiments I could do with the system I had developed that would allow us to further extend our results. This sounds like exactly the kind of thing that a couple of energetic young scientists should be doing, right?
I did the experiments we talked about in the spring of 2004. They were some of the last experiments I did before hanging up my pipettes to write my dissertation and return to the hospital for my third year of medical school. The result was striking: the normal (wild-type) receptor was activated 10-fold by its drug, but the receptor I had altered based on C’s prediction was activated 155-fold. In biology, one does not always get results that so clearly indicate that something potentially interesting is going on. I did some additional experiments to clean up the finding. C, now running the lab he had just started as an Assistant Professor at a new research institute, did lots of computational analysis and biochemical work to come up with a good model to explain our result. C wrote the manuscript, I edited it with some helpful comments from Davo, and in September of 2005 it was ready to be submitted for publication.
Which journal to send it to? This is a key decision that academics obsess over while pulling out their remaining hair. Aim too high in journal prestige (a formerly abstract concept now made quantifiable by the “impact factor,” a measure of how frequently the journal’s articles are cited) and the paper may be rejected, resulting in a costly delay that could allow competitors to gain ground. Aim too low and the work will not give CVs the pop needed for grants and promotions. Thinking wishfully, we sent the manuscript to Cell, mostly because the prior work had been deemed worthy there. A week later we were pleased to learn that we had made the first cut. The editor handling the submission thought it promising enough to send to three scientists working in our field for their review (hence the term “peer review”). This was no small victory, since the majority of papers are rejected without being reviewed. The more prestigious the journal, the more “general interest” is required of potential papers. Papers that editors feel do not meet the “interest” threshold are returned to sender.
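As an aside, the two-year impact factor is simple arithmetic: citations received in a given year to a journal’s articles from the previous two years, divided by the number of citable articles the journal published in those two years. A minimal sketch, with purely hypothetical numbers for illustration:

```python
# Illustrative sketch of the two-year "impact factor" rubric.
# The numbers below are hypothetical, not figures for any real journal.

def impact_factor(citations_this_year, citable_items_prev_two_years):
    """Citations received this year to articles from the previous two
    years, divided by the number of citable articles in those years."""
    return citations_this_year / citable_items_prev_two_years

# e.g. 3000 citations to 100 citable articles gives an impact factor
# in top-tier territory.
print(impact_factor(3000, 100))  # 30.0
```

A blunt instrument, as the story that follows suggests: a single average number ends up standing in for the worth of every paper the journal prints.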
A month later, we got our rejection notice from Cell, including the comments of the anonymous reviewers. One reviewer took the trouble to question technical details of the experiments. The other two advised rejection because they thought the work would be better suited to a more specialized journal (read: a lower level of general interest, less prestige, a lower impact factor). In the form-letter language of the editor, the reviewers “questioned whether the overall conclusions rise to the level of conceptual advance that would be required for publication here” (maybe you’ve received this letter before?).
In emails to the manuscript’s ten or so coauthors, C expressed irritation at the reviewers but did his best to stay upbeat. He laid out a plan to do additional experiments that would boost the paper’s “general interest” and address the technical issues. The plan was ambitious and would amount to more than just a month or two of additional work. Since I had already left the lab and was back in medical school, I suggested that he simply reformat the work and try again at a second-tier journal. C was convinced that with more work, the paper would be Cell material. I did not hear from him until May of 2008. I was now living in Boston, in my last couple of months as a pediatrics resident. C sent an email with a spiffed-up manuscript and a request for comments in the coming week before he submitted the paper, again to Cell. I thought that another try at Cell was overly ambitious and unlikely to work but figured that the manuscript might find a home at Molecular Cell, Cell’s strong second-tier cousin. The paper was again rejected, with similar reviewer comments and a nearly identical letter from the editor. I figured that we were done. I occasionally thought of C, how his lab was doing, and whether he was continuing to work on our project.
This February, a note from C popped up in my inbox with the title “Return of the Living Zombie Paper.” I was shocked, and a little bit excited, that the paper might finally see the light of day. Again, C’s team had done additional experiments to strengthen the work, and it now included more easy-to-follow diagrams. This time it was submitted to a second-tier journal in the Nature family, Nature Structural & Molecular Biology. I had not heard from C for almost two months after the submission and wrote him to follow up. He told me that he felt so depressed after receiving the rejection notice that he could not bring himself to let the coauthors know.
What can we learn from this seven-year-long and still unfinished saga? The problem in this story is not that the paper was rejected; rejection happens. The problem is why the paper was repeatedly rejected – not because the experiments were poorly conceived or executed, but because the work did not have sufficient “general interest” to warrant publication in the most desirable journals. C’s failure, and mine, was not a scientific one but a failure to persuade reviewers and editors that many scientists, even those working outside our small field, would find our work interesting. With his first tenure review looming, C needs publications in well-regarded journals. Having the work appear in its current form in a third-tier publication will not improve his prospects for promotion. The pressures of the academic system force investigators to bang their heads against the “general interest” wall. So the work remains unpublished and unknown to other scientists who could potentially use what we learned in their own work. Who knows, maybe someone out there is depleting the precious resources of time and money (most of it from taxpayers) doing exactly what we already did.
The scientific review process is desperately in need of reform. This is obvious to young scientists toiling in a system where their arch-competitors are also enthroned as the adjudicators of their work. Science is a big-time power-protect system, and when the powerful start to criticize the review process, you can be sure that it is badly broken. An opinion piece in the April 28th issue of Nature by Hidde Ploegh, a prominent immunologist at MIT (I interviewed for a post-doc with him while I was in Boston), entitled “End the wasteful tyranny of reviewer experiments,” has received much attention. Reviewers frequently ask authors to perform additional experiments to push the work over the “general interest” bar. Often the additional experiments would do little to extend the conclusions of the work. Sometimes the requested work is large in scope and would require months or even years of further experimentation. At best, this practice could be viewed as reviewers trying to improve the quality of the work in question. At its worst, it is reviewers placing banana peels on their competitors’ road to publication glory. Ploegh is absolutely correct in arguing that reviewers should give a simple yes or no vote. Although this would certainly improve the process, I agree with many others, including Alex Kentsis, a friend of Left on Longwood, that it does not go nearly far enough.
Some suggestions:
1) End the practice of anonymous review. Reviewers know the authors whose work they are examining, but authors do not get to find out who reviewed them. Since science is far too invested in power-protect mode to move to a double-blind system where reviewers would not know the identity of the authors, reviewers should at least no longer wield power behind a cloak of anonymity.
2) Publish the names of the reviewers and make the text of their comments available to readers. This is a major innovation that has been pioneered by journals such as Biology Direct. It makes reviewers accountable to the community, publicly rewards thoughtful reviewers for their efforts, and gives the reader a sense of the give and take between authors and reviewers.
3) Create guidelines for students and post-docs reviewing manuscripts. Busy lab heads often ask their graduate students and fellows to review papers, and as you can imagine, the level of supervision is often minimal. Taking the first crack at a review was a great experience for me as a graduate student. But if the lab boss does not have the time to give the review some attention, they should decline the journal’s request to review.
Better yet, as I learned from David Romps, another friend of Left on Longwood, biomedical science should learn a lesson from our more enlightened colleagues in physics and mathematics. Scientists working in these fields post completed work on a common website called arXiv.org, which is maintained by Cornell. The work is freely available to all the instant it is posted. Readers can make comments, in effect contributing to a running review of the work. The custom is that posting a manuscript on arXiv counts toward establishing the work’s priority. As a result, there is little fear of being scooped by competitors tipped off by a posted manuscript. The work, quite possibly refined by the informal review process on arXiv, can then be submitted for publication at one of the field’s more traditional journals.
There you have it all: transparency, speed in dissemination, an open forum for review to improve the work. arXiv looks like what a friendly community of scholars should be able to come together to create in the age of the interweb. When will we see reform of the review process on this scale in the cut-throat world of biomedicine? Seven years will not be long enough to wait.