
What happens to research quality when we change the peer review and the research publishing model?

Research publishing is changing, and quickly. New models of peer review are emerging and now coexist, with new business models to support them (like the Projekt DEAL and Wiley announcement [1]). This means that complex choices now characterize even the most traditional and conservative parts of research publishing, like peer review. This article raises questions about how researchers and publishers, working in the right sorts of collaboration, can maintain essential aspects of quality – namely integrity and ethics. It discusses how publishers can deliver better peer review [2] and, together, how they can find new kinds of value for researchers.

Two essential aspects of quality: Integrity and ethics

Peer review, as part of the peer reviewed publishing process [3], is how we manage two essential aspects of quality in research and research publishing: integrity and ethics. By integrity, we mean the reliability, reproducibility [4], trustworthiness and usefulness of published research. By ethics we mean the regulated ethical requirements for doing research (human and animal research in particular [5,6]), as well as equally important community-led obligations (like authorship practices [7]), and how these are reflected and reported in published research.

How are we doing with that?

Often, people look to retractions as a marker for quality in research integrity and publishing ethics. Retractions [8], for the uninitiated, are formal withdrawals of research articles, published when something is significantly wrong with the integrity or the ethics of a piece of research. Retractions can be for honest errors [9], or for research misconduct [10], or for something in-between [11]. Jeffrey Brainard published an analysis of records from the world’s largest retraction database, titled Rethinking Retractions, in Science [12]. While Brainard reports that the number of retractions grew 10-fold in the years between 2000 and 2014 (Fig. 1), he also reminds us that the total number is actually low (maybe 4 in every 10,000 articles published) and the number of articles published is also growing (doubling over a similar period). About 40% of the retractions Brainard studied reported honest errors, problems with reproducibility, and other issues. The remainder were for our “something in-between” questionable research practices [11], or for misconduct. Brainard quotes Nick Steneck (University of Michigan in Ann Arbor) saying “Retractions have increased because editorial practices are improving, and journals are trying to encourage editors to take retractions seriously.”

Fig. 1.

Chris Graf discusses Jeff Brainard’s study of retractions, Rethinking Retractions, in Science http://science.sciencemag.org/content/362/6413/390 [12]. The image is from Twitter @MatthewAHayes1 https://twitter.com/MatthewAHayes1/status/1085538549141815298.


We have evolving and completely new peer reviewed publishing models

Retractions are, Brainard, Steneck and many of us would argue, a sign of “quality” in the research publishing process. They’re published when research publishers, using their traditional peer-reviewed publishing processes, curate (per their promise to the world) the research they publish to ensure it is as reliable as it can be. Retractions give us a sign that publishers are working with researchers when problems arise, either with integrity or with ethics (or with both), to address those problems in a robust and increasingly transparent way.

But retractions are governed by our traditional peer-reviewed publishing process, and that process is evolving fast; completely new peer review models are emerging.

Author-mediated peer review

“Author-mediated peer review” is one quite profound evolution, akin to discussions for many years about post-publication peer review [13]. Wellcome Open Research is a research publishing platform maintained by the Wellcome Trust. Authors submit their work to it, and after rapid quality checks and screening [14], including for our essential integrity and ethics qualities, the author’s research is published immediately. After publication the author is incentivized to get their work peer reviewed (for example, only work that is peer reviewed is then indexed in PubMed Central and Europe PubMed Central). They pick and invite the reviewers. If the author fails to get it peer reviewed (and positively peer reviewed) then it likely sits on the platform (and probably doesn’t ever get read). And then, further, the authors choose whether or not they address any points raised by the peer reviewers. Authors are totally in charge. This is a profound evolution: Publication, and then possible peer review. By doing this Wellcome Open Research (and others adopting this approach, like F1000 Research) have re-imagined the traditional processes we’re used to relying on to govern quality.

Community-mediated peer review

“Community-mediated peer review” takes things one step further into new territory. Right now, researchers can post their manuscripts to “preprint servers” where they can (almost immediately) create a permanent published, but not peer reviewed, record of their work. They might then choose to submit their work to a traditional journal, for peer-reviewed publication. A preprint server called arXiv is the world’s most established, and has been publishing actively for many years in physics, mathematics, computer science, quantitative biology, quantitative finance, statistics, electrical engineering and systems science, and economics [15]. Many other preprint servers are emerging, often designed to serve research disciplines or sub-disciplines. A good example is bioRxiv, the preprint server for biology [16]. In general, preprint servers are seeing “hockey stick-like” growth in their use (albeit in relatively small absolute numbers compared with the roughly 200,000 peer reviewed journal articles that are published every month in traditional journals) [17]. Reputable preprint servers do check preprints before publishing them (bioRxiv says “all articles undergo a basic screening process for offensive and/or non-scientific content and for material that might pose a health or biosecurity risk and are checked for plagiarism” [18]). They don’t do peer review. But they may enable the communities of researchers that use them to assemble, peer review, and comment on preprints themselves, and thus to take care of quality – including integrity and ethics – after publication.

Publisher- and editor-mediated peer review

“Publisher- and editor-mediated” peer review, the traditional model that most journals use and that governs quality for most research articles, is not immune to changes. Look at the scale that some new journals are achieving (let’s take Nature Communications [19] and PLOS One [20] as examples of general journals publishing many thousands of articles per year; and Ecology and Evolution [21] and Cancer Medicine [22] as examples of specialist journals publishing many hundreds of articles per year). Each of these journals has achieved new kinds of scale measured by the numbers of articles they are peer reviewing and publishing. And each has updated its traditional editorial team and processes to handle that kind of scale. But each still uses a pretty traditional model for peer review, and governs integrity and ethics in the ways we’re used to.

We need to ask ourselves: Who looks after quality now?

So, with evolving and completely new peer review models, we do need to ask ourselves: Who looks after ethics, integrity – the most essential aspects of quality – now? And are we happy with how they’re doing it?

On January 16, 2019, in Berlin at the APE Conference [23], Dr. Elisa De Ranieri, Editor-in-Chief of Nature Communications, Dr. Diego Benedict Baptista, Open Research Coordinator, Wellcome Trust and Dr. John R. Inglis, Executive Director of Cold Spring Harbor Laboratory, Cold Spring Harbor (publishers of bioRxiv) told us how they do it, in a panel session moderated by Chris Graf, Director of Research Integrity and Publishing Ethics at the research publisher Wiley (and Co-Chair of COPE, the Committee on Publication Ethics). If you couldn’t be there, check APE2019 and #acadAPE on Twitter and look for the official recordings on YouTube and the conference website [24].

About the author

Chris Graf is Director, Research Integrity and Publishing Ethics at Wiley, and is Co-Chair of COPE, Committee on Publication Ethics (an elected and voluntary position for which he will serve a 2-year term). Chris leads initiatives at Wiley that focus on transparency, research integrity and publishing ethics.


References

[1] https://www.the-scientist.com/news-opinion/german-institutions-and-wiley-reach-open-access-publishing-deal-65327
[2] https://onlinelibrary.wiley.com/doi/full/10.1002/leap.1222
[3] https://authorservices.wiley.com/Reviewers/journal-reviewers/what-is-peer-review/index.html
[4] http://stm.sciencemag.org/content/8/341/341ps12
[5] https://en.wikipedia.org/wiki/Declaration_of_Helsinki
[6] https://www.nc3rs.org.uk/arrive-guidelines
[7] https://publicationethics.org/authorship
[8] https://publicationethics.org/files/retraction%20guidelines_0.pdf
[9] https://retractionwatch.com/2017/03/27/authors-retract-honest-error-say-arent-penalized-result/
[10] https://ori.hhs.gov/definition-misconduct
[11] https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0200303
[12] http://www.sciencemag.org/news/2018/10/what-massive-database-retracted-papers-reveals-about-science-publishing-s-death-penalty
[13] https://blogs.scientificamerican.com/information-culture/post-publication-peer-review-everything-changes-and-everything-stays-the-same/
[14] https://wellcomeopenresearch.org/about
[15] https://arxiv.org/
[16] https://www.biorxiv.org/
[17] http://www.prepubmed.org/monthly_stats/
[18] https://www.biorxiv.org/about-biorxiv
[19] https://www.nature.com/ncomms/about
[20] https://journals.plos.org/plosone/s/journal-information
[21] https://onlinelibrary.wiley.com/journal/20457758
[22] https://onlinelibrary.wiley.com/journal/20457634
[23] https://www.ape2019.eu/
[24] https://youtu.be/6M5XTTgpxPI