Peer review in academic publishing is the process of evaluating new scholarship for relevance, accuracy, and importance to the field. Reviewers are typically volunteers who are experts in the topic at hand, and the label of “peer-reviewed journal” has become shorthand for quality assurance in academic publications.

Traditional peer review models have largely focused on masking the identities of the reviewers and/or authors in order to mitigate bias. These models include the following: 

  • Single “blind” or anonymous: The names of the reviewers are not known to the authors, but the author identities are disclosed to the reviewers.
  • Dual or double anonymous: The names of the authors and reviewers are not disclosed to each other.
  • Triple anonymous: The names of the authors and reviewers are not disclosed to each other, and author identities are also concealed from the handling editors.
  • Open or transparent: The names of the authors and reviewers are made known to each other, and sometimes made publicly available as well.
  • Collaborative/Community: The authors and reviewers are in dialogue with each other to refine the manuscript. Depending on the discipline, the community in which the research takes place is invited to review and offer feedback on the data and/or conclusions.
  • Editorial/editorial board: The editorial staff and/or the editorial board reviews the content for accuracy and relevance. The identities of the specific reviewers are unknown to the authors, and author identities are sometimes concealed as well.

Newer Developments in Peer Review 

The digital publishing environment has encouraged the development of more diverse and sophisticated tools for handling manuscripts through stages of review and consideration. Scholars looking for new ways to engage their colleagues at various points in the knowledge creation process have many possibilities:

  • Pre-publication (pre-print) peer review: Articles are submitted to a repository accessed by a community of peers who choose to offer feedback either through the platform or privately. Such papers, often called preprints, can later be submitted to a journal for a more traditional peer review and publishing consideration. In some disciplines, such as mathematics, physics, economics, and computer science, these early papers are of greater interest than the final published version, though the latter is often still critical for formal tenure and review processes. The growth of discipline-based repositories, including some that incorporate commenting tools that can be viewed as a form of peer review, has corresponded with the acceptance of preprints as citable information sources. The development of platforms facilitating preprint verification accelerated during the COVID-19 pandemic.
  • High volume peer review: “Mega-journals” that publish hundreds of articles each year focus, by necessity, on scale. In this model, a single editor, an editorial team, or an algorithmic tool manages a light review of a submission and potentially (but not necessarily) delegates a more detailed review to an independently managed community of reviewers. The goal of this mode of review is to determine the basic validity of the contribution and its appropriateness for the publication, not its uniqueness or impact.
  • Artificial intelligence (AI) and peer review: Publishers are increasingly turning to algorithmic screening methods to assist with peer review. Tools are trained on a body of papers and reviews, then used to check items such as methods, images, statistics, and keywords; a simplified sketch of one such statistical check appears after this list.
  • Post-publication review: As noted previously, some journal platforms include commenting features that allow for dialogue on a published article. There has also been experimentation with online journal clubs through platforms such as Publons and Journal Review (both now defunct) and PubPeer. Replication studies can also be included in this category of post-publication verification, and interest in them has increased thanks to the work of Retraction Watch.
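
As a rough illustration of the statistical screening mentioned in the AI bullet above, the sketch below recomputes the two-tailed p-value implied by a reported t-test and flags results that do not match the reported value, in the spirit of tools such as statcheck. It is a minimal, hypothetical example under simplifying assumptions (a single reporting format, a fixed rounding tolerance), not any publisher’s actual screening pipeline.

```python
import re

from scipy import stats

# Matches reports of the form "t(28) = 2.05, p = .049".
# Simplifying assumption: real screening tools handle many more reporting styles.
T_TEST_PATTERN = re.compile(
    r"t\((?P<df>\d+)\)\s*=\s*(?P<t>-?\d+\.?\d*),\s*p\s*=\s*(?P<p>0?\.\d+)"
)


def check_reported_p_values(text, tolerance=0.005):
    """Recompute two-tailed p-values from reported t-tests and flag mismatches.

    The tolerance allows for ordinary rounding in the reported values.
    """
    flags = []
    for match in T_TEST_PATTERN.finditer(text):
        df = int(match.group("df"))
        t_value = float(match.group("t"))
        reported_p = float(match.group("p"))
        recomputed_p = 2 * stats.t.sf(abs(t_value), df)  # two-tailed p-value
        if abs(recomputed_p - reported_p) > tolerance:
            flags.append(
                f"{match.group(0)}: reported p = {reported_p:.3f}, "
                f"recomputed p = {recomputed_p:.3f}"
            )
    return flags


if __name__ == "__main__":
    excerpt = (
        "The first effect was significant, t(28) = 2.05, p = .050, "
        "and the second was as well, t(40) = 1.10, p = .020."
    )
    for flag in check_reported_p_values(excerpt):
        print("Possible inconsistency:", flag)
```

A production screening system would, at minimum, handle many test types and reporting conventions and would route flagged items to a human editor rather than acting on them automatically.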

Criticisms of Peer Review 

Despite its widespread use and promotion by the scholarly community, the peer review process has also been criticized on a number of fronts, including:

  • Conscious or unconscious bias by reviewers and/or editors
  • Slow turnaround by reviewers 
  • Reliance on unpaid and untrained labor 
  • Poor checks against malicious intent or misconduct
  • Introduction of algorithmic bias through AI tools 

While most scholars are aware of these issues, there remains trust in the basic premise that a review of scholarship by one’s peers is an effective way to ensure the quality of published research and scholarship. Publishers have been working to update their processes and training in order to address criticisms. 
More information on the inequities of peer review and potential mitigation strategies can be found on the Peer Review and Editorial Boards page of this site’s guide to Diversity, Equity and Inclusion in Scholarly Communication.
