The role of the information community in ensuring that information is authoritative: Strategies from NISO Plus 2022
Abstract
This article reports on a NISO Plus 2022 session that addressed what can be done to safeguard the integrity of the scholarly content being created, disseminated, and used. How much responsibility does the information community have in ensuring that the content we provide is authoritative? Preprints are a great way to make early research results available, but it is not always clear to readers that those results have not yet been thoroughly vetted. Peer review – a key element of scholarly publication – can help, but it is far from foolproof. Retractions are another important tool, but most retracted research remains all too readily available. What can and should we be doing to safeguard the integrity of the content being created, disseminated, and used?
1. Introduction
Ensuring that information is authoritative has been a challenge for researchers, publishers, librarians, students, and the lay public since the very first scholarly publications. This issue is far from solved. Rapid sharing of results during the research process – from data sets to preprints with multiple versions and copies on the Internet – has not made it easier to ascertain what information is authoritative. Peer review, which has been held up as the gold standard for decades, is increasingly under pressure due to the speed and volume of research. And retracted articles can continue a vibrant zombie existence on the web. Yet in the midst of a pandemic and a climate crisis we need authoritative information more than ever before. We report on a NISO Plus 2022 session that addressed what can be done to safeguard the integrity of scholarly content being created, disseminated, and used.
Because authoritative research results should be verifiable and reproducible, data has been one focus. The first section, on crowdsourced data on the Zooniverse platform (see: https://www.zooniverse.org), addresses the impact of intentional design on data creation and management. Opening up data sets by encouraging good data curation practices, asking reviewers to check the data [1], and ensuring that data sets are FAIR [2] has long been a focus of the information community. Academic publishers can sometimes feel that data is the responsibility of research institutes. However, because the platforms used to create, structure, manage, share, store, analyze, and tag research data are often created by or in cooperation with the information community, we have more say in data management than we might imagine – as well as more responsibility.
In the next section, we explore signals of trust on open preprint platforms. In the current ecosystem, authoritative information is still associated with measures of prestige such as the journal impact factor. However, in an open science context with thousands of preprints posted freely online every day, new measures and flags are required. Systems can be designed to reward best practices, and ideally preprint platforms would highlight the most scientifically rigorous research. Both the Center for Open Science and ScienceOpen, represented here, along with other preprint platforms, are testing and trialing methods to increase the trustworthiness of the research that they host as unvetted preprints. Identifying the qualities, signals, or flags of the most authoritative research will help in designing more responsible platforms for the future and will increase trust in an open science context.
Transparency and rigor are essential to creating an environment of trust. Journal publishers work on many fronts to ensure that the research they publish is authoritative. Each academic field has its own requirements, with peer review at the core. Peer review has come under intense scrutiny over the past few years as a wide range of research has proven not to be reproducible. In the third section, we argue that publishers can respond with increased transparency about the processes and biases embedded in the system. These signals of trust are essential for researchers building on the published literature towards the next breakthrough. And when the system fails, transparency is useful to understand what went wrong and how we can improve the system in the future.
Acknowledging failure is also a key component of trustworthy systems, as we demonstrate in the final case study on retraction. When bad data has entered the knowledge base – whether by accident or fraud, as a preprint or as an article in a prestigious journal – it is essential that it be clearly flagged in a timely fashion. But while this sounds straightforward, our decentralized network of knowledge infrastructure often does not pick up the information that a paper has been retracted, which itself can erode trust in academic research, as we discuss in the fourth section. With this journey from data set to preprint to publication and, hopefully, not retraction, we explore the requirements and limitations of current best practices for ensuring that the information we allow to enter the global knowledge base is authoritative and trustworthy.
2. Design as a tool for producing authoritative information in online crowdsourcing
Within the field of online crowdsourcing, authority has traditionally functioned as a barrier to trust: the presumed lack of authority behind crowd-produced data has been treated as a reason not to trust it for authoritative purposes such as research and scholarly publication. Within the past decade, a broad range of successes from across research fields have proven that volunteers can produce results equal in quality to data generated by experts [3]. Even with this strong record of accuracy, project teams must consider additional qualities for their data, like fidelity (the digital representation of an object along project guidelines, e.g. a diplomatic transcription, which preserves things like spelling errors in service of accurate preservation of the original text) and auditability (information about how the data was produced) [4]. Taking the time to include this information brings context and history to the data and gives potential users the opportunity to audit its origin as part of their vetting process.
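To make auditability concrete, the sketch below shows one way a crowdsourcing platform might attach provenance to each volunteer-produced record so that downstream users can check how it was created. It is a minimal illustration in Python; the field names are assumptions for this example and do not reproduce Zooniverse's actual classification export schema.

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json

@dataclass
class CrowdsourcedRecord:
    """One volunteer-produced data point plus the provenance needed to audit it.

    Field names are illustrative only; they are not Zooniverse's export schema.
    """
    subject_id: str        # the item (image, page, audio clip) being described
    annotation: dict       # the volunteer's contribution, e.g. a transcription
    volunteer_id: str      # pseudonymous identifier of the contributor
    workflow_version: str  # which version of the task instructions was shown
    created_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

record = CrowdsourcedRecord(
    subject_id="subject-0042",
    annotation={"transcription": "Reciept of goods", "preserve_spelling": True},
    volunteer_id="volunteer-1187",
    workflow_version="3.14",
)

# Serializing provenance alongside the data lets downstream users audit
# how, when, and under which task design each record was produced.
print(json.dumps(asdict(record), indent=2))
```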
This section will provide a perspective on questions of authority within the world of online crowdsourcing, and the responsibilities that practitioners (e.g. platform maintainers, or others who help to create the digital spaces in which this crowdsourcing work takes place) have to ensure that the data being produced is at a quality level that meets the expectations of the project stakeholders, including the volunteers who create the data and any potential users of the results. The case study for this section is Zooniverse [5], the world’s largest platform for crowdsourced research. Zooniverse supports researchers via its free, open-source Project Builder, a browser-based tool that allows anyone to create and run a crowdsourcing project, engage with volunteers, and export results. Since 2009, the Zooniverse has supported hundreds of teams around the world in creating and running projects, and currently has a volunteer community of more than 2.4 million registered participants.
Authority can also be a barrier to participants in online crowdsourcing projects, often in the form of hesitation or low confidence. On Zooniverse project message boards across disciplines, the most common concern from volunteers is, “What if I get it wrong?” It is the responsibility of project teams to help mitigate these feelings of uncertainty by clearly signposting any prerequisite knowledge requirements, including when no background knowledge is required to participate in a given project.
Design solutions to crowdsourcing’s trust problems include research and development methods for creating tools that produce quality data, project workflow iteration based on user review and testing, and clear, consistent communication with volunteers. Rather than waiting until data is produced and implementing quality control or validation steps on the results, platform maintainers can design tools based on experience as well as through experimentation and testing [6,7].
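One common workflow design of this kind is to show each item to several volunteers and retire it only once agreement reaches a threshold. The sketch below illustrates that general pattern; the retirement limit and consensus threshold are assumed parameters for this example, not Zooniverse's internal aggregation code or default settings.

```python
from collections import Counter

def aggregate_classifications(labels, retirement_limit=5, consensus=0.8):
    """Decide whether an item can be 'retired' based on volunteer agreement.

    labels: classifications submitted so far for one item.
    retirement_limit and consensus are design parameters chosen before launch;
    the values here are illustrative assumptions, not platform defaults.
    Returns (retired, majority_label, agreement).
    """
    if len(labels) < retirement_limit:
        return False, None, 0.0
    majority_label, count = Counter(labels).most_common(1)[0]
    agreement = count / len(labels)
    return agreement >= consensus, majority_label, agreement

# Five volunteers agree 4/5 of the time, so the item retires with that label.
print(aggregate_classifications(["cat", "cat", "cat", "dog", "cat"]))
# Items with low agreement stay live so more volunteers (or experts) can weigh in.
print(aggregate_classifications(["cat", "dog", "bird", "dog", "cat"]))
```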
Projects should be reviewed before launching publicly, both by platform maintainers and volunteers. Zooniverse projects are required to go through a process of internal review by the Zooniverse team, as well as beta review by the Zooniverse volunteer community before they can launch. Teams are required to take this feedback seriously and demonstrate how they edited their project based on the beta feedback they received. This process means project teams put in more effort up front (i.e., before a public launch), but it helps to ensure the usability and reliability of the data being created.
Finally, design solutions can mitigate volunteer hesitancy by supporting broad participation. Crowdsourcing assumes a large, public audience; therefore, projects that are designed for holders of specialist knowledge cannot be true “crowdsourcing” projects, as any prerequisites also serve as barriers to participation. Additionally, teams should take care to avoid design choices that, even unintentionally, prioritize the ability of certain demographic groups to participate over others [6]. These considerations can at times be at odds with the desire to design for optimization and data quality, but it is possible to find a balance between the two [8].
3. Open science and signals of trust
Open science provides an opportunity to open up the research process with transparency and openness, and to catalyze increased trust in science through verification, review, and evaluation of scholarship [9]. To shift research culture towards open practices, we must take a systems approach to the interdependent structures that scientific research runs on: infrastructure that makes it possible and efficient to practice openness in researcher workflows, communities that make open practices normative, incentives that make openness rewarding, and policies that require openness and transparency. Goals and incentives should be reconfigured to start with the end in mind, rewarding not publication itself but the quality and rigor of the research [10].
Scholarly publication has several models that encourage open science, including new models that advance rigor, transparency, and rapid dissemination. Registered Reports emphasize the importance of the research question and the quality of the methodology by conducting peer review prior to data collection [11]. High-quality study designs are provisionally accepted for publication if the authors conduct the research study as specified. This aligns the incentive of publication with the goals of best practice and eliminates selective reporting of results, publication bias, and low statistical power [11]. Registered Reports facilitate publication of results regardless of the findings, increasing the dissemination of null findings and further ensuring shared knowledge of research, reducing the duplication of work, and extending the impact of research efforts.
Open Science Badges can be applied to existing and new models of scholarly publishing to offer incentives for researchers to share data and materials or to preregister. Badges can be added to scholarly outputs to signal verified and persistent open data, open materials, or preregistration. This increased transparency of methods and data can increase reproducibility and trust in scientific evidence and has been shown to increase the rate of data sharing in publications [12]. Badges offer another tool for aligning the incentives for adopting open and transparent research practices with researchers’ goals of meeting funder and journal requests to share their data and outputs.
A preprint is a manuscript describing research data and methods, posted on a public server without having undergone peer review. An increasing number of preprints are being shared across servers, encompassing research from diverse disciplines and geographic locations [13]. Preprints are free to upload and make discoverable on preprint servers, and also free to discover and consume, making them a low- or no-cost, high-reward option for researchers. On preprint servers, the steps between upload and discovery are also minimal, taking hours or days, compared to the weeks or months before public consumption in open access article publishing models. Preprint sharing can align researchers’ goals of collaborating and sharing their work with the pressure to publish or perish, because citations are higher for articles that have preprints [14].
Preprints enable rapid dissemination of scholarship; however, that speed comes at the cost of peer review, verification of claims, and evaluation of the rigor of the research process – a risk made more evident by the COVID-19 pandemic. Many preprints were shared as science happened in real time, while the entire globe watched and tried to make sense of the outcomes. Many readers were looking for signals of trust to know whether a preprint could be used to make policy and societal decisions.
In a 2019 study of 3,755 researchers, open science information was found to be very important for assessing the credibility of preprints [15]. Respondents were surveyed about their feelings towards and use of preprints, the effect of icons on preprints on credibility, and the effect of service characteristics on credibility. The survey found that links to the materials, study data, preregistrations, and analysis scripts used in conducting the study were perceived to be extremely important. It also found that information about independent reproductions, robustness checks, and linked data and resources supporting the preprint would support more trust in the preprint. The disclosure of any conflicts of interest by the preprint authors would further ensure that risks were mitigated through transparency. Each of these signals of trust refers to open science best practices, which could also increase the rigor and reproducibility of study results, reduce bias and other questionable research practices, and align incentives with the use of badges for practicing openness and transparency.
4. Providing information and building trust
Journal publishers and learned societies use their communication channels, workflows, best practice documents, tools, and platforms, as well as collected and analyzed data, to ensure the trustworthiness of the information they share. They do so throughout the publication cycle, from journal information to their submission and peer review systems and processes to the published article page. In a sense, scholarly publishing is all about information provided in a trusted manner. This is important because academics’ choice of publication venue (mostly journals) relies heavily on the journal’s audience, author guidelines and requirements, and the peer review process the journal applies.
According to a survey run jointly by Sense About Science and Elsevier in 2019, 62% of 2,715 surveyed academics across the globe trust the majority of research output they encounter, but a third doubt the quality of at least half of it. To compensate, they check supplementary material and data carefully, read only information associated with peer-reviewed journals, or seek corroboration from other trusted sources [16]. Publishers and societies are working together to improve the trustworthiness of their published content by being more transparent about their editorial and peer review processes. One example is the STM/NISO peer review terminology [17,18], which has been adopted in a pilot phase by several members.
Many journal home pages provide detailed information about the journal’s coverage, aims and scope, author guidelines, journal impact, reviewer instructions, peer review, publication speed, etc. Quite recently, many publishers have started to focus on inclusion and diversity to build trust, for example by providing editorial board member information that is connected to trusted sources such as ORCID, Scopus, PubMed, and Web of Science. Elsevier journals recently began indicating on their home pages the gender and country diversity of their editorial board members, benchmarking it against the portfolio of journals covering the topic and, more importantly, against the gender diversity in that field [19]. Since data accuracy plays a major role in the success of such initiatives, 52 publishing societies and organizations have come together to create a joint commitment to act on inclusion and diversity in publishing [20]. This collaborative group has launched standardized questions for self-reported diversity data collection and minimum standards for inclusion and diversity in scholarly publishing [21].
Checklists and guidelines are another tool journals employ to enhance the trustworthiness of the information they disseminate. For example, declarations of interest help article readers make their own judgments about potential bias: the corresponding author must disclose any potential competing or non-financial interests on behalf of all authors of the manuscript. Elsevier’s Declaration Tool [22], for instance, quickly and easily helps authors follow the International Committee of Medical Journal Editors guidelines [23] inside Editorial Manager [24]. Journals also look for ways to ensure that materials and methods information is properly provided by authors, mostly by requiring them to use reporting guidelines such as STAR Methods [25], RRIDs [26], CONSORT [27], PRISMA [28], and SAGER [29]. This addresses a major cause of reproducibility problems: the presentation of incorrect, incomplete, or even unavailable data and information.
Ensuring the trustworthiness of the peer review process is another important focus for journal publishers and scholarly service providers. Tackling bias in peer review sits high on their list. We know that many of our societal structures are biased and often inequitable because humans are biased, and we built these systems. So any data that attempts to describe these human-built systems, or that arises from them, will also reflect those biases, and scholarly publishing is no different – the publishers’ job is to understand when that bias leads to unintentional harm.
Elsevier uses Scopus data to improve the relevance of search engine recommendations that help editors discover the most relevant potential reviewers. But the scholarly system was not designed for women or the minorities who were historically excluded from it. “Publish or perish” is a known pressure for those pursuing an academic career, and clearly life events like parental career breaks or working part-time to juggle care commitments do not boost a researcher’s publication output. With the legacy of myths and biases such as that of gendered brains, it is little wonder that the characteristics of published researchers in our scholarly data do not reflect our global demographics.
To mitigate inherited bias, Elsevier worked on a project to embed fairness in our reviewer recommendation pipeline, which required engagement from every function of the business. Whilst our data scientists did a great deal of important work creating fairness metrics and benchmarking model performance across demographic groups, our efforts did not stop there. We made changes to the user interface to give editors more clarity on the purpose and limitations of the tool. We also engaged with editors to support them in diversifying their reviewer pool.
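As a rough illustration of what benchmarking performance across demographic groups can involve, the following sketch computes recommendation rates per group and a simple parity ratio. It is a generic example on assumed data, not Elsevier’s actual fairness metrics or recommendation pipeline.

```python
from collections import defaultdict

def recommendation_rates_by_group(candidates):
    """Compare how often candidates from each group are recommended.

    candidates: list of (group, recommended) pairs from a recommender's output.
    This is a generic demographic-parity-style check on assumed data,
    not Elsevier's actual fairness metric.
    """
    totals, recommended = defaultdict(int), defaultdict(int)
    for group, was_recommended in candidates:
        totals[group] += 1
        recommended[group] += int(was_recommended)
    rates = {g: recommended[g] / totals[g] for g in totals}
    # Ratio of the lowest to the highest rate: 1.0 means parity across groups.
    disparity = min(rates.values()) / max(rates.values())
    return rates, disparity

sample = [("A", True), ("A", True), ("A", False),
          ("B", True), ("B", False), ("B", False)]
print(recommendation_rates_by_group(sample))
# Low disparity values flag groups the recommender surfaces less often,
# prompting human review rather than automatic "correction" of the model.
```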
We know biases exist (and have existed for many years) in the research ecosystem, and naturally much of the data built on researcher performance reflects this. So not only do we have a responsibility to create awareness and transparency around what data our algorithms use and how, but there is also a limit to how much we can “correct” this with clever modeling of the biased data itself. We must ensure robust human oversight, governance, and appropriate accountability for these systems.
Another way to support research integrity is to provide clear, harmonized retraction notices not only on article pages but also, at submission, to authors who cite retracted articles in their reference lists. Elsevier recently introduced a reference analysis report, which identifies the citations in a submitted manuscript, lists them, and provides links to Scopus, Crossref, and PubMed where applicable. It clearly identifies whether any of the cited papers have been retracted, so authors, editors, and reviewers can see when a retracted paper is cited in a manuscript and question whether the citation was necessary.
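To show what such a check might look like with public metadata alone, the sketch below asks the Crossref REST API whether any retraction notice updates a given cited DOI. It is an illustrative approximation that assumes Crossref’s documented “updates” and “update-type” filters; it is not a description of Elsevier’s reference analysis report, which also draws on Scopus and PubMed.

```python
import requests

CROSSREF_WORKS = "https://api.crossref.org/works"

def crossref_retraction_notices(doi):
    """Return Crossref records of retraction notices that update the given DOI.

    Illustrative only: relies on Crossref's documented 'updates' and
    'update-type' filters; production systems also consult other sources
    such as PubMed and Scopus.
    """
    params = {"filter": f"updates:{doi},update-type:retraction", "rows": 5}
    response = requests.get(CROSSREF_WORKS, params=params, timeout=30)
    response.raise_for_status()
    return response.json()["message"]["items"]

# Example: flag any retracted references in a manuscript's citation list.
cited_dois = ["10.1234/example.doi"]  # placeholder; substitute the cited DOIs
for doi in cited_dois:
    notices = crossref_retraction_notices(doi)
    if notices:
        print(f"{doi}: retraction notice(s) found - flag for editor review")
    else:
        print(f"{doi}: no retraction notice found in Crossref")
```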
5. Safeguarding the integrity of the literature: The role of retraction
Prompt retraction is intended to minimize citation [30]. However, inappropriate, continued citation of retracted papers is common. For example, two COVID-19 articles that were retracted in June 2020, within a month of publication, had over 1,200 citations each as of June 2022. Science magazine examined 200 of the post-retraction citations to these papers and concluded that over half inappropriately cited the retracted articles [31]. Although the publisher websites show these articles as retracted, in some online systems, such as Google Scholar, information about the retraction status is hidden by default; to be more likely to see retracted items, users must click “Showing the best result for this search. See all results”.
Scientometric analyses of citations to retracted papers go back to at least 1990 [32–34]. Ongoing citation remains a problem, even when the retraction fundamentally invalidates the paper. A human trial article retracted in October 2008 for falsified data continued to be cited positively eleven years later; of its post-retraction citations in 2009–2019, 96% inappropriately cited the retracted article [35]. Finding the retraction notice was difficult: among the eight databases tested, only one, EMBASE, had a working link to it [35]. Such continued citation is not rare. A database-wide study in biomedicine, examining some 13,000 post-retraction citations from PubMed Central articles citing retracted papers in PubMed, concluded that 94% of post-retraction citations were inappropriate [36].
The Alfred P. Sloan Foundation-funded Reducing the Inadvertent Spread of Retracted Science (RISRS) project [37] conducted a stakeholder consultation and related activities, resulting in four recommendations [39,40]:
1. Develop a systematic cross-industry approach to ensure the public availability of consistent, standardized, interoperable, and timely information about retractions.
2. Recommend a taxonomy of retraction categories/classifications and corresponding retraction metadata that can be adopted by all stakeholders.
3. Develop best practices for coordinating the retraction process to enable timely, fair, and unbiased outcomes.
4. Educate stakeholders about publication correction processes, including retraction, and about pre- and post-publication stewardship of the scholarly record.
As a result of this project, and addressing the first recommendation, NISO has chartered a working group to develop the Communication of Retractions, Removals, and Expressions of Concern (CREC) Recommended Practice [41,42]. Its goal is to address the dissemination of retraction information (metadata and display) to support consistent, timely transmission of that information to the reader (machine or human), directly or through citing publications, covering requirements both for the retracted publication and for the retraction notice or expression of concern. The working group will not address what a retraction is or why an object is retracted, nor will it define terminology for retractions. Its expected outcomes include additions to existing metadata to support awareness of retractions; best practices for populating retraction notices; an outline of the responsibilities of creators and consumers of retraction information; consistent display labels, user experience, and signaling; and a workflow definition for circulating retraction information.
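To make “additions to existing metadata” more tangible, the sketch below shows the kind of fields a machine-readable retraction record might carry so that databases and citing platforms can surface retraction status. The field names and structure are hypothetical illustrations only, not the working group’s recommended practice, which was still in development at the time of writing.

```python
import json

# Hypothetical illustration of machine-readable retraction metadata.
# Field names are invented for this sketch; they are NOT the CREC
# Recommended Practice, which was still being drafted at the time of writing.
retraction_record = {
    "retracted_doi": "10.1234/example.article",          # placeholder DOI
    "notice_doi": "10.1234/example.retraction-notice",   # placeholder DOI
    "status": "Retracted",           # e.g. Retracted / Removed / Expression of Concern
    "date_of_status_change": "2022-06-15",
    "display_label": "RETRACTED",    # consistent label for reader-facing display
    "notice_url": "https://example.org/retraction-notice",
}

# Downstream systems (databases, reference managers, citing publications)
# could ingest such a record to surface the retraction wherever the paper appears.
print(json.dumps(retraction_record, indent=2))
```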
6. Conclusions
In recent years, many organizations, publishers, and learned societies have critically improved their workflows, products, services, and communication channels to ensure the trustworthiness of the content they provide. Yet these numerous activities are still scattered and siloed, leaving opportunities for distrust across all phases of research. Lack of shared information about design choices leads to the perpetuation of missteps, paywalled research continues to limit access to information, opaqueness around peer review can lead to bias in publication, and retracted articles still get cited for the wrong reasons. In this paper, we have outlined practices the information community can take to mitigate these ongoing issues. It is our hope that by sharing these practices across the various sectors of our community, we can learn from one another and continue to raise both the standard of information and, consequently, its trustworthiness.
About the Corresponding Author
Jodi Schneider is an associate professor at the School of Information Sciences, University of Illinois Urbana-Champaign (UIUC). She studies the science of science through the lens of arguments, evidence, and persuasion. She is developing Linked Data (ontologies, metadata, Semantic Web) approaches to manage scientific evidence. Schneider holds degrees in informatics (Ph.D., National University of Ireland, Galway), library & information science (M.S., UIUC), mathematics (M.A., University of Texas Austin), and liberal arts (B.A., Great Books, St. John’s College). She has worked as an actuarial analyst for a Fortune 500 insurance company, as the gift buyer for a small independent bookstore, and in academic science and web libraries. She has held research positions across the U.S. as well as in Ireland, England, France, and Chile. Email: jschneider@pobox.com and jodi@illinois.edu; Phone: 217-300-4328.
Acknowledgements
Thanks to the NISO Plus 2022 organizers for bringing this group together. Thanks to the team who proposed the CORREC Work Item. Thanks to Todd Carpenter and Nettie Lagace for information related to Communication of Retractions, Removals, and Expressions of Concern (CREC). Jodi Schneider acknowledges funding for Reducing the Inadvertent Spread of Retracted Science: Shaping a Research and Implementation Agenda under Alfred P. Sloan Foundation G-2020-12623.
Competing interests
Samantha Blickhan is Director of the Zooniverse team at the Adler Planetarium. Stephanie Dawson is CEO of ScienceOpen, which runs a free discovery platform, including a preprint server with open peer review, and offers services to publishers within that context. Nici Pfeiffer is the Chief Product Officer at the Center for Open Science and has collaborated with many stakeholders and research communities on the advancement of the COS mission through initiatives such as Registered Reports, TOP, Open Science Badges, and preprints. Bahar Mehmani is an employee of Elsevier, the owner of Editorial Manager, Scopus, and Reviewer Recommender, among other products and services. Jodi Schneider is a member of the NISO Communication of Retractions, Removals, and Expressions of Concern Recommended Practice Working Group and, since 2019, has received data-in-kind from Retraction Watch and scite, and usability testing compensation from the Institute of Electrical and Electronics Engineers.
References
[1] J. Aleksic, A. Alexa, T.K. Attwood, N.C. Hong, M. Dahlö and R. Davey, An Open Science Peer Review Oath [version 2; peer review: 4 approved, 1 approved with reservations], F1000Research 3 (2015), 271. doi:10.12688/f1000research.5686.2.
[2] M.D. Wilkinson, The FAIR Guiding Principles for scientific data management and stewardship, Scientific Data 3 (2016), 160018. doi:10.1038/sdata.2016.18.
[3] S. Blickhan, L. Trouille and C.J. Lintott, Transforming research (and public engagement) through citizen science, Proceedings of the International Astronomical Union (2018), 518–523. doi:10.1017/S174392131900526X.
[4] M. Ridge, S. Blickhan, M. Ferriter, A. Mast, B. Brumfield, B. Wilkins et al., Working with crowdsourced data (chapter 10), in: The Collective Wisdom Handbook: Perspectives on Crowdsourcing in Cultural Heritage – Community Review Version [Internet], 1st ed., 2021. Available from: https://britishlibrary.pubpub.org/pub/working-with-crowdsourced-data, accessed August 29, 2022.
[5] Zooniverse [Internet], cited 2022 May 24. Available from: https://www.zooniverse.org/, accessed August 29, 2022.
[6] H. Spiers, A. Swanson, L. Fortson, B. Simmons, L. Trouille, S. Blickhan et al., Everyone counts? Design considerations in online citizen science, J. Sci. Commun. 18(1) (2019), A04. doi:10.22323/2.18010204.
[7] S. Blickhan, C. Krawczyk, D. Hanson, A. Boyer, A. Simenstad, V. Hyning et al., Individual vs. collaborative methods of crowdsourced transcription, J. Data Min. Digit. Humanit., Special Issue on Collecting, Preserving, and Disseminating Endangered Cultural Heritage for New Understandings through Multilingual Approaches (2019). doi:10.46298/jdmdh.5759.
[8] S. Blickhan, W. Granger, S.A. Noordin and B. Rother, Strangers in the landscape: On research development and making things for making, Startwords (2021), 2. doi:10.5281/zenodo.5750679.
[9] UNESCO, UNESCO Recommendation on Open Science [Internet], UNESCO, Paris, 2021 [cited 2022 May 24], Report No. SC-PCB-SPP/2021/OS/UROS. Available from: https://unesdoc.unesco.org/ark:/48223/pf0000379949.locale=en, accessed August 29, 2022.
[10] B.A. Nosek, J.R. Spies and M. Motyl, Scientific utopia: II. Restructuring incentives and practices to promote truth over publishability, Perspect. Psychol. Sci. 7(6) (2012), 615–631. doi:10.1177/1745691612459058.
[11] B.A. Nosek and D. Lakens, Registered reports: A method to increase the credibility of published results, Soc. Psychol. 45(3) (2014), 137–141. doi:10.1027/1864-9335/a000192.
[12] M.C. Kidwell, L.B. Lazarević, E. Baranski, T.E. Hardwicke, S. Piechowski, L.S. Falkenberg et al., Badges to acknowledge open practices: A simple, low-cost, effective method for increasing transparency, PLOS Biol. 14(5) (2016), e1002456. doi:10.1371/journal.pbio.1002456.
[13] Preprints in Europe PMC [Internet], Europe PMC [cited 2022 May 25]. Available from: https://europepmc.org/Preprints, accessed August 29, 2022.
[14] N. Fraser, F. Momeni, P. Mayr and I. Peters, The relationship between bioRxiv preprints, citations and altmetrics, Quantitative Science Studies 1(2) (2020), 618–638. doi:10.1162/qss_a_00043.
[15] C.K. Soderberg, T.M. Errington and B.A. Nosek, Credibility of preprints: An interdisciplinary survey of researchers, R. Soc. Open Sci. 7(10) (2020), 201520. doi:10.1098/rsos.201520.
[16] Quality trust & peer review: Researchers’ perspectives 10 years on [Internet], Elsevier and Sense about Science, 2019 Sep [cited 2022 May 12]. Available from: https://senseaboutscience.org/wp-content/uploads/2019/09/Quality-trust-peer-review.pdf, accessed August 29, 2022.
[17] STM, the International Association of Scientific, Technical and Medical Publishers, A standard taxonomy for peer review [Internet], OSF, 2020 [cited 2022 May 12], Report No. Version 2.1. Available from: https://osf.io/68rnz/, accessed August 29, 2022.
[18] NISO, Peer review terminology standardization [Internet] [cited 2022 May 12]. Available from: https://www.niso.org/standards-committees/peer-review-terminology, accessed August 29, 2022.
[19] D. Francescon, Making progress towards a more inclusive research ecosystem: Elsevier’s Inclusion & Diversity Advisory Board report focuses on four areas for progress [Internet], Elsevier Connect, 2022 [cited 2022 May 12]. Available from: https://www.elsevier.com/connect/inclusion-diversity-board-report, accessed August 29, 2022.
[20] Royal Society of Chemistry, Joint commitment for action on inclusion and diversity in publishing [Internet], Royal Society of Chemistry, 2022 [cited 2022 May 12]. Available from: https://www.rsc.org/new-perspectives/talent/joint-commitment-for-action-inclusion-and-diversity-in-publishing/, accessed August 29, 2022.
[21] Royal Society of Chemistry, Minimum standards for inclusion and diversity for scholarly publishing [Internet], Royal Society of Chemistry [cited 2022 May 12]. Available from: https://www.rsc.org/new-perspectives/talent/minimum-standards-for-inclusion-and-diversity-for-scholarly-publishing/, accessed August 29, 2022.
[22] Elsevier, Video guide: Using the declaration tool in Editorial Manager [Internet], Elsevier Journal Article Publishing Support Center, 2022 [cited 2022 May 12]. Available from: https://service.elsevier.com/app/answers/detail/a_id/34594/supporthub/publishing/∼/video-guide%3A-using-the-declaration-tool-in-editorial-manager/, accessed August 29, 2022.
[23] International Committee of Medical Journal Editors, Recommendations for the conduct, reporting, editing, and publication of scholarly work in medical journals [Internet], 2019 [cited 2019 Dec 31]. Available from: http://www.icmje.org/icmje-recommendations.pdf, accessed August 29, 2022.
[24] Aries Systems Corporation, Editorial Manager [Internet], Aries Systems Corporation [cited 2022 May 18]. Available from: https://www.ariessys.com/software/editorial-manager/, accessed August 29, 2022.
[25] Cell Press, STAR Methods [Internet], Cell Press [cited 2022 May 12]. Available from: https://www.cell.com/star-methods, accessed August 29, 2022.
[26] A. Bandrowski, M. Brush, J.S. Grethe, M.A. Haendel, D.N. Kennedy, S. Hill et al., The Resource Identification Initiative: A cultural shift in publishing, F1000Research 4 (2015), 134. doi:10.12688/f1000research.6555.2.
[27] K.F. Schulz, D.G. Altman, D. Moher and the CONSORT Group, CONSORT 2010 Statement: Updated guidelines for reporting parallel group randomised trials, BMC Med. 8(1) (2010), 18. doi:10.1186/1741-7015-8-18.
[28] M.J. Page, J.E. McKenzie, P.M. Bossuyt, I. Boutron, T.C. Hoffmann, C.D. Mulrow et al., The PRISMA 2020 statement: An updated guideline for reporting systematic reviews, Syst. Rev. 10(1) (2021), 89. doi:10.1186/s13643-021-01626-4.
[29] S. Heidari, T.F. Babor, P. De Castro, S. Tort and M. Curno, Sex and gender equity in research: Rationale for the SAGER guidelines and recommended use, Res. Integr. Peer Rev. 1 (2016), 2. doi:10.1186/s41073-016-0007-6.
[30] COPE Council, Retraction guidelines (2019). doi:10.24318/cope.2019.1.4.
[31] C. Piller, Many scientists citing two scandalous COVID-19 papers ignore their retractions, Science (2021). doi:10.1126/science.abg5806.
[32] M.P. Pfeifer and G.L. Snodgrass, The continued use of retracted, invalid scientific literature, JAMA 263(10) (1990), 1420–1423. doi:10.1001/jama.1990.03440100140020.
[33] E. Garfield and A. Welljams-Dorof, The impact of fraudulent research on the scientific literature: The Stephen E. Breuning case, JAMA 263(10) (1990), 1424–1426. doi:10.1001/jama.1990.03440100144021.
[34] N.D. Wright, A Citation Context Analysis of Retracted Scientific Articles [Ph.D. dissertation], University of Maryland, College Park, Maryland, 1991.
[35] J. Schneider, D. Ye, A.M. Hill and A.S. Whitehorn, Continued post-retraction citation of a fraudulent clinical trial report, 11 years after it was retracted for falsifying data, Scientometrics 125(3) (2020), 2877–2913. doi:10.1007/s11192-020-03631-1.
[36] T.K. Hsiao and J. Schneider, Continued use of retracted papers: Temporal trends in citations and (lack of) awareness of retractions shown in citation contexts in biomedicine, Quant. Sci. Stud. 2(4) (2021), 1144–1169. doi:10.1162/qss_a_00155.
[37] The RISRS Team, Reducing the Inadvertent Spread of Retracted Science: Shaping a Research and Implementation Agenda [Internet] [cited 2022 May 18]. Available from: https://infoqualitylab.org/projects/risrs2020/, accessed August 29, 2022.
[38] N.D. Woods, J. Schneider and The RISRS Team, Addressing the continued circulation of retracted research as a design problem, GW J. Ethics Publ. 1(1) (2022) [Internet]. Available from: https://gwpress.manifoldapp.org/read/retraction-as-a-design-problem/section/32a19ddd-381b-4a4c-aac9-bcbfd10c9c0a, accessed August 29, 2022.
[39] J. Schneider and The RISRS Team, Reducing the Inadvertent Spread of Retracted Science: Shaping a Research and Implementation Agenda [Internet], 2021 [cited 2021 Mar 21]. Available from: https://infoqualitylab.org/projects/risrs2020/, accessed August 29, 2022.
[40] J. Schneider, N.D. Woods, R. Proescholdt and The RISRS Team, Reducing the Inadvertent Spread of Retracted Science: Recommendations from the RISRS report, Research Integrity and Peer Review 7(1) (2022).
[41] NISO, Work Item Title: Communication of Retractions, Removals, and Expressions of Concern [Internet], NISO, 2021 Aug [cited 2022 Feb 20]. Available from: https://groups.niso.org/higherlogic/ws/public/download/25898, accessed August 29, 2022.
[42] National Information Standards Organization, NISO voting members approve work on Recommended Practice for retracted research [Internet], 2021 [cited 2021 Nov 9]. Available from: https://www.niso.org/press-releases/2021/09/niso-voting-members-approve-work-recommended-practice-retracted-research, accessed August 29, 2022.