<img height="1" width="1" style="display:none" src="https://www.facebook.com/tr?id=1641728616063202&amp;noscript=1&amp;ev=PixelInitialized">
ALL Metrics
-
Views
-
Downloads
Get PDF
Get XML
Cite
Export
Track
Systematic Review

What is open peer review? A systematic review

[version 1; peer review: 1 approved, 3 approved with reservations]
PUBLISHED 27 Apr 2017

This article is included in the Research on Research, Policy & Culture gateway.

Abstract

Background: “Open peer review” (OPR), despite being a major pillar of Open Science, has neither a standardized definition nor an agreed schema of its features and implementations. The literature reflects this, with a myriad of overlapping and often contradictory definitions. While the term is used by some to refer to peer review where the identities of both author and reviewer are disclosed to each other, for others it signifies systems where reviewer reports are published alongside articles. For others it signifies both of these conditions, and for yet others it describes systems where not only “invited experts” are able to comment. For still others, it includes a variety of combinations of these and other novel methods.
Methods: Recognising the absence of a consensus view on what open peer review is, this article undertakes a systematic review of definitions of “open peer review” or “open review”, to create a corpus of 122 definitions. These definitions are then systematically analysed to build a coherent typology of the many different innovations in peer review signified by the term, and hence provide the precise technical definition currently lacking.
Results: This quantifiable data yields rich information on the range and extent of differing definitions over time and by broad subject area. Quantifying definitions in this way allows us to accurately portray exactly how ambiguously the phrase “open peer review” has been used thus far, for the literature offers a total of 22 distinct configurations of seven traits, effectively meaning that there are 22 different definitions of OPR in the literature.
Conclusions: Based on this work, I propose a pragmatic definition of open peer review as an umbrella term for a number of overlapping ways that peer review models can be adapted in line with the ethos of Open Science, including making reviewer and author identities open, publishing review reports and enabling greater participation in the peer review process.

Keywords

open peer review, Open Science, scholarly communication, research evaluation, publishing

Introduction

  “Open review and open peer review are new terms for evolving phenomena. They don’t have precise or technical definitions. No matter how they’re defined, there’s a large area of overlap between them. If there’s ever a difference, some kinds of open review accept evaluative comments from any readers, even anonymous readers, while other kinds try to limit evaluative comments to those from ‘peers’ with expertise or credentials in the relevant field. But neither kind of review has a special name, and I think each could fairly be called ‘open review’ or ‘open peer review’.” - Peter Suber, email correspondence, 2007 [1].

As with other areas of “open science” (Pontika et al., 2015), “open peer review” (OPR) is a hot topic, with a rapidly growing literature that discusses it. Yet, as has been consistently noted (Ford, 2013; Hames, 2014; Ware, 2011), OPR has neither a standardized definition, nor an agreed schema of its features and implementations. The literature reflects this, with a myriad of overlapping and often contradictory definitions. While the term is used by some to refer to peer review where the identities of both author and reviewer are disclosed to each other, for others it signifies systems where reviewer reports are published alongside articles. For others it signifies both of these conditions, and for yet others it describes systems where not only “invited experts” are able to comment. For still others, it includes a variety of combinations of these and other novel methods. The previous major attempt to resolve these elements systematically to provide a unified definition (Ford, 2013), discussed later, unfortunately ultimately confounds rather than resolves these issues.

In short, things have not improved much since Suber made his astute observation. This continuing imprecision grows more problematic over time, however. As Mark Ware notes, “it is not always clear in debates over the merits of OPR exactly what is being referred to” (Ware, 2011). Differing flavours of OPR include independent factors (open identities, open reports, open participation, etc.), which have no necessary connection to each other, and very different benefits and drawbacks. Evaluating the efficacy of these differing variables, and hence comparing differing systems, is therefore problematic. Discussions are potentially side-tracked when claims are made for the efficacy of “OPR” in general, despite critique usually being focussed on one element or distinct configuration of OPR. It could even be argued that this inability to define terms is to blame for the fact that, as Nikolaus Kriegeskorte has pointed out, “we have yet to develop a coherent shared vision for ‘open evaluation’ (OE), and an OE movement comparable to the OA movement” (Kriegeskorte, 2012).

To resolve this, I undertake a systematic review of the definitions of “open peer review” or “open review”, to create a corpus of more than 120 definitions. These definitions have been systematically analysed to build a coherent typology of the many different innovations in peer review signified by the term, and hence provide the precise technical definition that is currently lacking. This quantifiable data yields rich information on the range and extent of differing definitions over time and by broad subject area. Based on this work, I propose a pragmatic definition of OPR as an umbrella term for a number of overlapping ways that peer review models can be adapted in line with the ethos of Open Science, including making reviewer and author identities open, publishing review reports and enabling greater participation in the peer review process.

Background

1. Problems with peer review

Peer review is the formal quality assurance mechanism whereby scholarly manuscripts (e.g. journal articles, books, grant applications and conference papers) are made subject to the scrutiny of others, whose feedback and judgements are then used to improve works and make final decisions regarding selection (for publication, grant allocation or speaking time). This system is perhaps more recent than one might expect, with its main formal elements only in general use in scientific publishing since the mid-twentieth century (Spier, 2002). Researchers agree that peer review per se is necessary, but most find the current model sub-optimal. Ware’s 2008 survey, for example, found that an overwhelming majority (85%) agreed that “peer review greatly helps scientific communication” and that even more (around 90%) said their own last published paper had been improved by peer review. Yet only around two thirds (64%) declared themselves satisfied with the current system of peer review, and less than a third (32%) believed that this system was the best possible (Ware, 2008). A recent follow-up study by the same author reported a slight increase in the desire for improvements in peer review (Ware, 2016).

Widespread beliefs that the current model is sub-optimal can be attributed to the various ways in which traditional peer review has been subject to criticism. Peer review has been variously accused of:

  • Unreliability and inconsistency: Reliant upon the vagaries of human judgement, the objectivity, reliability and consistency of peer review are open to question. Studies show that reviewers’ assessments tend to agree only weakly (Kravitz et al., 2010; Mahoney, 1977), at levels only slightly better than chance (Herron, 2012; Smith, 2006). Decisions on rejection or acceptance appear similarly inconsistent. For example, Peters and Ceci’s classic study found that eight out of twelve papers were rejected for methodological flaws when resubmitted to the same journals in which they had already been published (Peters & Ceci, 1982). This inconsistency is mirrored in peer review’s inability to prevent errors and fraud from entering the scientific literature. Reviewers often fail to detect major methodological failings (Schroter et al., 2004), with eminent journals (whose higher rejection rates might suggest more stringent peer review processes) seeming to perform no better than others (Fang et al., 2012). Indeed, Fang and Casadevall found that the frequency of retraction is strongly correlated with journal impact factor (Fang & Casadevall, 2011). Whatever the cause, recent sharp rises in the number of retracted scientific publications (Steen et al., 2013) testify that peer review sometimes fails in its role as the gatekeeper of science, allowing errors to enter the literature. Peer review’s other role, of filtering the best work into the best journals, also seems to fail: many articles in top journals remain poorly cited, while many of the most highly-cited articles in their fields are published in lower-tier journals (Jubb, 2016).

  • Delay and expense: The period from submission to publication at many journals can often exceed one year, with much of this time taken up by peer review. This delay slows down the availability of results for further research and professional exploitation. The work undertaken in this period is also expensive, with the global costs of reviewers’ time estimated at £1.9bn in 2008 (Research Information Network [RIN], 2008), a figure which does not take into account the coordinating costs of publishers, or the time authors spend revising and resubmitting manuscripts (Jubb, 2016). These costs are greatly exacerbated by the current system in which peer review is managed by each journal, such that the same manuscript may be peer reviewed many times over as it is successively rejected and resubmitted until it finds acceptance.

  • Unaccountability and risks of subversion: The “black-box” nature of traditional peer review gives reviewers, editors and even authors considerable power to subvert the process. Lack of transparency means that editors can unilaterally reject submissions or shape review outcomes by selecting reviewers based on their known preference for, or aversion to, certain theories and methods (Travis & Collins, 1991). Reviewers, shielded by anonymity, may act unethically in their own interests, for example by concealing conflicts of interest. Smith, an experienced editor, reports reviewers stealing ideas and passing them off as their own, or intentionally blocking or delaying the publication of competitors’ ideas through harsh reviews (Smith, 2006). Equally, they may simply favour their friends and target their enemies. Authors, meanwhile, can manipulate the system by writing reviews of their own work via fake or stolen identities (Kaplan, 2015).

  • Social and publication biases: Although peer reviewers are often idealized as impartial, objective assessors, studies suggest that in reality they may be subject to social biases on the grounds of gender (Budden et al., 2008; Lloyd, 1990; Tregenza, 2002), nationality (Daniel, 1993; Ernst & Kienbacher, 1991; Link, 1998), institutional affiliation (Dall’Aglio, 2006; Gillespie et al., 1985; Peters & Ceci, 1982), language (Cronin, 2009; Ross et al., 2006; Tregenza, 2002) and discipline (Travis & Collins, 1991). Other studies suggest so-called “publication bias”, where prejudices against specific categories of works shape what is published. Publication bias can take many forms. First is a preference for complexity over simplicity in methodology (even if inappropriate, cf. Travis & Collins, 1991) and language (Armstrong, 1997). Next, “confirmatory bias” is theorized to lead to conservatism, biasing reviewers against innovative methods or results contrary to dominant theoretical perspectives (Chubin & Hackett, 1990; Garcia et al., 2016; Mahoney, 1977). Finally, factors like the pursuit of “impact” and “excellence” (Moore et al., 2017) mean that editors and reviewers seem primed to prefer positive results over negative or neutral ones (Bardy, 1998; Dickersin et al., 1992; Fanelli, 2010; Ioannidis, 1998), and to disfavour replication studies (Campanario, 1998; Kerr et al., 1977).

  • Lack of incentives: Traditional peer review provides little in the way of incentives for reviewers, whose work is almost exclusively unpaid and whose anonymous contributions cannot be recognised and hence rewarded (Armstrong, 1997; Ware, 2008).

  • Wastefulness: Reviewer comments often add context or point to areas for future work. Reviewer disagreements can expose areas of tension in a theory or argument. The behind-the-scenes discussions of reviewers and authors can also guide younger researchers in learning review processes. Readers may find such information helpful, and yet at present this potentially valuable additional information is wasted.

In response to these criticisms, a wide variety of changes to peer review have been suggested (see the extensive overview in Walker & Rocha da Silva, 2015). Amongst these innovations, many have been labelled as “open peer review” at one time or another.

2. The contested meaning of open peer review

The diversity of the definitions provided for open peer review can be seen by examining just two examples. The first one is, to my knowledge, the first recorded use of the phrase “open peer review”:

  “[A]n open reviewing system would be preferable. It would be more equitable and more efficient. Knowing that they would have to defend their views before their peers should provide referees with the motivation to do a good job. Also, as a side benefit, referees would be recognized for the work they had done (at least for those papers that were published). Open peer review would also improve communication. Referees and authors could discuss difficult issues to find ways to improve a paper, rather than dismissing it. Frequently, the review itself provides useful information. Should not these contributions be shared? Interested readers should have access to the reviews of the published papers.” (Armstrong, 1982)

  “[O]pen review makes submissions OA [open access], before or after some prepublication review, and invites community comments. Some open-review journals will use those comments to decide whether to accept the article for formal publication, and others will already have accepted the article and use the community comments to complement or carry forward the quality evaluation started by the journal. ” (Suber, 2012)

Within just these two examples, there are already a multitude of factors at play, including the removal of anonymity, the publishing of review reports, interaction between participants, crowdsourcing of reviews, and making manuscripts public pre-review, amongst others. But each of these is a distinct factor, presenting a separate strategy for openness and targeting different problems. For example, disclosure of identities usually aims at increasing accountability and minimizing bias, cf. “referees should be more highly motivated to do a competent and fair review if they may have to defend their views to the authors and if they will be identified with the published papers” (Armstrong, 1982). Publication of reports, on the other hand, also tackles problems of incentive (reviewers can get credit for their work) and wastefulness (reports can be consulted by readers). Moreover, these factors need not be linked, which is to say that they can be employed separately: identities can be disclosed without reports being published, and reports published with reviewer names withheld, for example.

This diversity has led many authors to acknowledge the essential ambiguity of the term “open peer review” (Hames, 2014; Sandewall, 2012; Ware, 2011). The major attempt thus far to bring coherence to this confusing landscape of competing and overlapping definitions is Emily Ford’s paper “Defining and Characterizing Open Peer Review: A Review of the Literature” (Ford, 2013). Ford examined thirty-five articles to produce a schema of eight “common characteristics” of OPR: signed review, disclosed review, editor-mediated review, transparent review, crowdsourced review, prepublication review, synchronous review, and post-publication review. Unfortunately, however, Ford’s paper fails to offer a definitive definition of OPR: despite distinguishing these eight “common characteristics”, she ultimately tries to reduce OPR to just one of them, open identities: “Despite the differing definitions and implementations of open peer review discussed in the literature, its general treatment suggests that the process incorporates disclosure of authors’ and reviewers’ identities at some point during an article’s review and publication” (p. 314). Summing up her argument elsewhere, she says: “my previous definition … broadly understands OPR as any scholarly review mechanism providing disclosure of author and referee identities to one another” (Ford, 2015). But the other elements of her schema do not reduce to this one factor, and many definitions do not include open identities at all. Hence, although Ford claims to have identified several features of OPR, she in fact asserts that there is only one defining factor (open identities), which leaves us where we started. Ford’s schema is also problematic elsewhere: it lists “editor-mediated review” and “pre-publication review” as distinguishing characteristics, despite these being common traits of traditional peer review; it includes questionable elements such as the purely “theoretical” “synchronous review”; and some of its characteristics do not seem to be “base elements” but complexes of other traits – for example, the definition of “transparent review” incorporates other characteristics such as open identities (which Ford terms “signed review”) and open reports (“disclosed review”).

Method: A systematic review of previous definitions

To resolve this ambiguity, OpenAIRE performed a review of the literature for articles discussing “open review” or “open peer review”, extracting a corpus of 122 definitions of OPR. I first searched Web of Science (WoS) for TOPIC: ("open review" OR "open peer review"), with no limitation on date of publication, yielding a total of 137 results (searched on 12th July 2016). These records were then each individually examined for relevance and a total of 57 were excluded. 21 results (all BioMed Central publications) had been through an OPR process (which was mentioned in the abstract) but did not themselves touch on the subject of OPR; 12 results used the phrase “open review” to refer to a literature review with a flexible methodology; 12 results concerned the review of objects classed “out of scope” (i.e. not academic articles, books, conference submissions or data) – examples included guidelines for clinical or therapeutic techniques, standardized terminologies, patent applications, and court judgements; 7 results were not in the English language; and 5 results were duplicate entries in WoS. This left a total of 80 relevant articles which mentioned either “open peer review” or “open review”. This set of articles was further enriched with 42 definitions from sources found through searching for the same terms in other academic databases (e.g., Google Scholar, JSTOR, disciplinary databases), Google (for blog articles) and Google Books (for books), as well as following citations in relevant bibliographies and literature reviews. The dataset is available online (Ross-Hellauer, 2017, http://doi.org/10.5281/zenodo.438024).

Each source was then individually examined for its definition of OPR. Where no explicit definition (e.g. “OPR is …”) was given, implicit definitions were gathered from contextual statements. For instance, “reviewers can notify the editors if they want to opt-out of the open review system and stay anonymous” (Janowicz & Hitzler, 2012) is taken to endorse a definition of OPR as incorporating open identities. In a few cases, sources defined OPR in relation to the systems of specific publishers (e.g., F1000Research, BioMed Central and Nature), and so were taken to implicitly endorse those systems as definitive of OPR.
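For illustration, the following sketch shows one way the screening and coding workflow described above could be represented programmatically. It is a minimal, hypothetical example: the Record structure and its field names (e.g. exclusion_reason, definition_type) are assumptions made for illustration, not the schema of the published dataset.

```python
# Illustrative sketch only: a hypothetical representation of screened records
# and coded definitions, not the author's actual tooling or dataset schema.
from collections import Counter
from dataclasses import dataclass, field

@dataclass
class Record:
    source: str                           # e.g. "Web of Science", "Google Scholar"
    relevant: bool                        # passed the manual relevance screen?
    exclusion_reason: str | None = None   # e.g. "duplicate", "not in English"
    definition_type: str | None = None    # "explicit", "implicit", or "both"
    traits: set[str] = field(default_factory=set)  # OPR traits the definition endorses

def screening_summary(records: list[Record]) -> Counter:
    """Tally why excluded records were dropped (cf. the 57 exclusions above)."""
    return Counter(r.exclusion_reason for r in records if not r.relevant)

def definition_type_summary(records: list[Record]) -> Counter:
    """Tally explicit vs implicit vs mixed definitions among relevant records."""
    return Counter(r.definition_type for r in records if r.relevant)
```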

Results

The number of definitions of OPR over time shows a clear upward trend, with the most definitions in a single year coming in 2015. The distribution shows that, except for some outlying definitions in the early 1980s, the phrase “open peer review” did not really enter academic discussion until the early 1990s. At that time, the phrase seems to have been used largely to refer to non-blinded review (i.e. open identities). We then see a big upswing from the early-to-mid 2000s onwards, which perhaps correlates with the rise of the openness agenda (especially open access, but also open data and open science more generally) over that period (Figure 1). Most of the definitions, 77.9% (n=95), come from peer-reviewed journal articles, with the second largest sources being books and blog posts. Other sources include letters to journals, news items, community reports and glossaries (Figure 2). As shown in Figure 3, the majority of definitions (51.6%) were identified as being primarily concerned with peer review of STEM-subject material, while 10.7% targeted material from the Social Sciences and Humanities. The remainder (37.7%) were interdisciplinary. Meanwhile, regarding the target of the OPR mentioned in these articles (Figure 4), most referred to peer review of journal articles (80.7%), with 16% not specifying a target, and a small number of articles also referring to review of data, conference papers and grant proposals.


Figure 1. Definitions of OPR in the literature by year.


Figure 2. Breakdown of OPR definitions by source.


Figure 3. Breakdown of OPR definitions by disciplinary scope.


Figure 4. Breakdown of OPR definitions by type of material being reviewed.

Of the 122 definitions identified, 68% (n=83) were explicitly stated and 37.7% (n=46) were implicitly stated, with 5.7% (n=7) containing both explicit and implicit information.

The extracted definitions were examined and classified against an iteratively constructed taxonomy of OPR traits. Nickerson et al. (2013) advise that the development of a taxonomy should begin by identifying the appropriate meta-characteristic – in this case, distinct individual innovations to the traditional peer review system. An iterative approach then followed, in which dimensions given in the literature were applied to the corpus of definitions and gaps and overlaps in the OPR taxonomy were identified. On this basis, new traits or distinctions were introduced until, in the end, a schema of seven OPR traits was produced (defined below; an illustrative sketch of how definitions coded against these traits can be aggregated follows the list):

  • Open identities

  • Open reports

  • Open participation

  • Open pre-review manuscripts

  • Open final-version commenting

  • Open interaction

  • Open platforms
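As a simple illustration (assuming each definition is coded as a set of the trait labels above; this is a sketch, not the analysis code behind the published dataset), the prevalence figures reported below and in Figure 5 can be derived as follows:

```python
# Illustrative sketch: compute how often each OPR trait appears across the
# corpus of coded definitions (cf. Figure 5). The data layout is assumed.
from collections import Counter

TRAITS = [
    "open identities", "open reports", "open participation",
    "open pre-review manuscripts", "open final-version commenting",
    "open interaction", "open platforms",
]

def trait_prevalence(definitions: list[set[str]]) -> dict[str, float]:
    """Return the share of definitions in which each trait occurs."""
    counts = Counter(t for d in definitions for t in d)
    n = len(definitions)
    return {t: counts[t] / n for t in TRAITS}

# Toy example with three coded definitions:
# trait_prevalence([{"open identities"},
#                   {"open identities", "open reports"},
#                   {"open participation"}])
# -> open identities: ~0.67, open reports: ~0.33, open participation: ~0.33, ...
```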

The core traits are easily identified, with just three covering more than 99% of all definitions: open identities combined with open reports covers 116 (95.1%) of all records, and adding open participation leads to a coverage of 121 records (99.2%) overall. As seen in Figure 5, open identities is by far the most prevalent trait, present in 90.1% (n=110) of definitions. Open reports is also present in the majority of definitions (59.0%, n=72), while open participation is part of around a third. Open pre-review manuscripts (23.8%, n=29) and open interaction (20.5%, n=25) are also fairly prevalent parts of definitions. The outliers are open final-version commenting (4.9%) and open platforms (1.6%).


Figure 5. Distribution of OPR traits amongst definitions.

The various ways these traits are configured within definitions can be seen in Figure 6. Quantifying definitions in this way allows us to accurately portray exactly how ambiguously the phrase “open peer review” has been used thus far, for the literature offers a total of 22 distinct configurations of seven traits, effectively meaning that there are 22 different definitions of OPR in the literature.


Figure 6. Unique configurations of OPR traits within definitions.

A “power law” distribution can be observed in the distribution of these traits, with the most popular configuration (open identities) accounting for one third (33.6%, n=41) and the second-most popular configuration (open identities, open reports) accounting for almost a quarter (23.8%, n=29) of all definitions. There then follows a “long-tail” of less-frequently found configurations, with more than half of all configurations being unique to a single definition.
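Continuing the same illustrative sketch (again assuming each definition is stored as a set of trait labels, which is an assumed layout), the distinct configurations, their frequencies and the coverage of trait combinations quoted above can be derived as follows:

```python
# Illustrative sketch: derive the distinct trait configurations (cf. Figure 6)
# and the coverage of a given combination of traits from the coded definitions.
from collections import Counter

def configuration_distribution(definitions: list[set[str]]) -> list[tuple[frozenset[str], int]]:
    """Return (configuration, count) pairs sorted from most to least frequent.
    len(result) gives the number of distinct configurations (22 in this corpus)."""
    return Counter(frozenset(d) for d in definitions).most_common()

def coverage(definitions: list[set[str]], traits: set[str]) -> float:
    """Share of definitions containing at least one of the given traits,
    e.g. {"open identities", "open reports"} covers 95.1% of the corpus."""
    return sum(1 for d in definitions if d & traits) / len(definitions)
```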

Discussion: The traits of open peer review

I next offer a detailed analysis of each of these traits, detailing the issues they aim to resolve and the evidence to support their effectiveness.

Open identities

Open identity peer review, also known as signed peer review (Ford, 2013; Nobarany & Booth, 2015) and “unblinded review” (Monsen & Horn, 2007), is review where authors and reviewers are aware of each other’s identities. Traditional peer review operates as either “single-blind”, where authors do not know reviewers’ identities, or “double-blind”, where both authors and reviewers remain anonymous. Double-blind reviewing is more common in the Arts, Humanities and Social Sciences than it is in STEM (science, technology, engineering and medicine) subjects, but in all areas single-blind review is by far the most common model (Walker & Rocha da Silva, 2015). A main reason for maintaining author anonymity is that it is assumed to tackle possible publication biases against authors with traditionally feminine names, from less prestigious institutions or non-English speaking regions (Budden et al., 2008; Ross et al., 2006). Reviewer anonymity, meanwhile, is presumed to protect reviewers from undue influence, allowing them to give candid feedback without fear of possible reprisals from aggrieved authors. Various studies have failed to show that such measures increase review quality, however (Fisher et al., 1994; Godlee et al., 1998; Justice et al., 1998; McNutt et al., 1990; van Rooyen et al., 1999). As Godlee and her colleagues have said, “Neither blinding reviewers to the authors and origin of the paper nor requiring them to sign their reports had any effect on rate of detection of errors. Such measures are unlikely to improve the quality of peer review reports” (Godlee et al., 1998). Moreover, factors such as close disciplinary communities and internet search capabilities mean that author anonymity is only partially effective, with reviewers shown to be able to identify authors in 26 to 46 percent of cases (Fisher et al., 1994; Godlee et al., 1998).

Proponents of open identity peer review argue that it will enhance accountability, further enable credit for peer reviewers, and simply make the system fairer: “most importantly, it seems unjust that authors should be ‘judged’ by reviewers hiding behind anonymity” (van Rooyen et al., 1999). Open identity peer review is argued, moreover, to potentially increase review quality, as it is theorised that reviewers will be more highly motivated and will invest more care in their reviews if their names are attached to them. Opponents counter that signing will lead to poorer reviews, as reviewers temper their true opinions to avoid causing offence. To date, studies have failed to show any great effect in either direction (McNutt et al., 1990; van Rooyen et al., 1999; van Rooyen et al., 2010). However, since these studies derive from only one disciplinary area (medicine), the results cannot be taken as representative, and further research is undoubtedly required.

Open reports

Open reports peer review is where review reports (either full reports or summaries) are published alongside the relevant article. The main benefits of this measure are that it makes currently invisible but potentially useful scholarly information available for re-use; that it brings the increased transparency and accountability that come with being able to examine normally behind-the-scenes discussions and processes of improvement and assessment; and that it can further incentivize peer reviewers by making their review work a more visible part of their scholarly activities (thus enabling reputational credit).

Reviewing is hard work. The Research Information Network reported in 2008 that a single peer review takes an average of four hours, at an estimated total annual global cost of around £1.9 billion (Research Information Network, 2008). Once an article is published, however, these reviews usually serve no further purpose than to reside in publishers’ long-term archives. Yet those reviews contain information that remains potentially relevant and useful in the here-and-now. Often, works are accepted despite the lingering reservations of reviewers. Published reports can enable readers to consider these criticisms themselves and to “have a chance to examine and appraise this process of ‘creative disagreement’ and form their own opinions” (Peters & Ceci, 1982). Making reviews public in this way also adds another layer of quality assurance, as the reviews are open to the scrutiny of the wider scientific community. Moreover, publishing reports aims at raising the recognition and reward of the work of peer reviewers. Listing review activities on one’s professional record is common practice, and author identification systems now also offer mechanisms to host such information (e.g. via ORCID) (Hanson et al., 2016). Finally, open reports give young researchers a guide (to tone, length, the formulation of criticisms) to help them as they begin to do peer review themselves.

The evidence base against which to judge such arguments is not yet large enough to enable strong conclusions, however. Van Rooyen and her colleagues found that open reports correlate with higher refusal rates amongst potential reviewers, as well as an increase in the time taken to write reviews, but no concomitant effect on review quality (van Rooyen et al., 2010). Nicholson and Alperin’s small survey, however, found generally positive attitudes: “researchers … believe that open review would generally improve reviews, and that peer reviews should count for career advancement” (Nicholson & Alperin, 2016).

Open participation

Open participation peer review, also known as “crowdsourced peer review” (Ford, 2013; Ford, 2015), “community/public review” (Walker & Rocha da Silva, 2015) and “public peer review” (Bornmann et al., 2012), allows the wider community to contribute to the review process. Whereas in traditional peer review editors identify and invite specific parties (peers) to review, open participation processes invite interested members of the scholarly community to participate in the review process, either by contributing full, structured reviews or shorter comments. Comments may be open to anybody (anonymous or registered), or some credentials might first be required (e.g., ScienceOpen requires an ORCID profile with at least five published articles). Open participation is often used as a complement to a parallel process of solicited peer review. It aims to resolve possible conflicts associated with editorial selection of reviewers (e.g. biases, closed networks, elitism) and possibly to improve the reliability of peer review by increasing the number of reviewers (Bornmann et al., 2012). Reviewers can come from the wider research community, as well as from groups traditionally under-represented in scientific assessment, including representatives from industry or members of special-interest groups, for example patients in the case of medical journals (Ware, 2011). This has the potential to open the pool of reviewers beyond those identified by editors to include all potentially interested parties (including those from outside academia), and hence to increase the number of reviewers for each publication (though in practice this is unlikely). Evidence suggests this practice could help increase the accuracy of peer review. For example, Herron (2012) produced a mathematical model of the peer review process which showed that “the accuracy of public reader-reviewers can surpass that of a small group of expert reviewers if the group of public reviewers is of sufficient size”, although only if the number of reader-reviewers exceeded 50.
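Herron’s model is not reproduced here, but the underlying intuition – that a sufficiently large crowd of modestly accurate reader-reviewers can match or exceed a small expert panel – can be illustrated with a simple majority-vote calculation. The reviewer accuracies and group sizes below are invented purely for illustration and are not taken from Herron (2012):

```python
# Toy illustration (NOT Herron's 2012 model): probability that a simple majority
# of n independent reviewers, each correct with probability p, makes the right
# accept/reject call. Accuracies and group sizes below are hypothetical.
from math import comb

def majority_accuracy(n: int, p: float) -> float:
    """P(majority of n reviewers is correct), assuming independence and odd n."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(n // 2 + 1, n + 1))

# Three experts at p = 0.8 vs. fifty-one readers at p = 0.6 (made-up values):
# majority_accuracy(3, 0.8)   -> ~0.90
# majority_accuracy(51, 0.6)  -> ~0.93
```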

Criticisms of open participation routinely focus on questions about reviewers’ qualifications to comment and the incentives for doing so. As Stevan Harnad has said: “it is not clear whether the self-appointed commentators will be qualified specialists (or how that is to be ascertained). The expert population in any given speciality is a scarce resource, already overharvested by classical peer review, so one wonders who would have the time or inclination to add journeyman commentary services to this load on their own initiative” (Harnad, 2000). Moreover, difficulties in motivating self-selecting commentators to take part and deliver useful critique have been reported. Nature, for example, ran a trial from June to December 2006, inviting submitting authors to have open participation used as a complement to a parallel process of solicited peer review. Nature judged the trial to have been unsuccessful due to the small number of authors wishing to take part (just 5% of submitting authors), the small number of overall comments (almost half of articles received no comments) and the insubstantial nature of most of the comments that were received (Fitzpatrick, 2011). At the open access journal Atmospheric Chemistry and Physics (ACP), which publishes pre-review discussion papers for community comments, only about one in five papers is commented upon (Pöschl, 2012). Bornmann et al. (2012) conducted a comparative content analysis of ACP’s community comments and formal referee reviews and concluded that the latter – tending to focus more on formal qualities, conclusions and potential impact – better supported the selection and improvement of manuscripts. This all suggests that although open participation might be a worthwhile complement to traditional, invited peer review, it is unlikely to be able to fully replace it.

Open interaction

Open interaction peer review allows and encourages direct reciprocal discussion between reviewers, and/or between author(s) and reviewers. In traditional peer review, reviewers and authors correspond only with editors: reviewers have no contact with other reviewers, and authors usually have no opportunity to directly question or respond to reviewers’ comments. Allowing interaction between authors and reviewers, or amongst reviewers themselves, is another way to “open up” the review process, enabling editors and reviewers to work with authors to improve their manuscript. The motivation for doing so, according to Armstrong (1982), is to “improve communication. Referees and authors could discuss difficult issues to find ways to improve a paper, rather than dismissing it”.

Some journals enable pre-publication interaction between reviewers as standard (Hames, 2014). The EMBO Journal, for example, enables “cross-peer review”, where referees are “invited to comment on each other’s reports, before the editor makes a decision, ensuring a balanced review process” (EMBO Journal, 2016). At eLife, the reviewers and editor engage in an “online consultation session” where they come to a mutual decision, before the editor compiles a single peer review summary letter that gives the author a single, non-contradictory roadmap for revisions (Schekman et al., 2013). The publisher Frontiers has gone a step further, including an interactive collaboration stage that “unites authors, reviewers and the Associate Editor – and if need be the Specialty Chief Editor – in a direct online dialogue, enabling quick iterations and facilitating consensus” (Frontiers, 2016).

Perhaps even more so than other areas studied here, evidence to judge the effectiveness of interactive review is scarce. Based on anecdotal evidence, Walker & Rocha da Silva (2015) advise that “[r]eports from participants are generally but not universally positive”. To the knowledge of the author, the only experimental study that has specifically examined interaction among reviewers or between reviewers and authors is that of Jeffrey Leek and his colleagues, who performed a laboratory study of open and closed peer review based on an online game and found that “improved cooperation does in fact lead to improved reviewing accuracy. These results suggest that in this era of increasing competition for publication and grants, cooperation is vital for accurate evaluation of scientific research” (Leek et al., 2011). Such results are encouraging, but hardly conclusive. Hence, there remains much scope for further research to determine the impact of cooperation on the efficacy and cost of the review process.

Open pre-review manuscripts

Open pre-review manuscripts are manuscripts that are made openly accessible (via the internet) in advance of, or in synchrony with, any formal peer review procedures. Subject-specific “preprint servers” like arXiv.org and bioRxiv.org, institutional repositories, catch-all repositories like Zenodo or Figshare, and some publisher-hosted repositories (like PeerJ Preprints) allow authors to short-cut the traditional publication process and make their manuscripts immediately available to everyone. This can be used as a complement to a more traditional publication process, with comments invited on preprints and then incorporated into redrafting as the manuscript goes through traditional peer review with a journal. Alternatively, services which overlay peer-review functionalities on repositories can produce functional publication platforms at reduced cost (Boldt, 2011; Perakakis et al., 2010). The mathematics journal Discrete Analysis, for example, is an overlay journal whose primary content is hosted on arXiv (Day, 2015). The recently released Open Peer Review Module for repositories, developed by Open Scholar in association with OpenAIRE, is an open source software plug-in which adds overlay peer review functionalities to repositories using the DSpace software (OpenAIRE, 2016). Another innovative model along these lines is that of ScienceOpen, which ingests article metadata from preprint servers and contextualizes the articles by adding altmetrics and other relational information, before offering authors peer review.

In other cases, manuscripts are submitted to publishers in the usual way but made immediately available online (usually following some rapid preliminary review or “sanity check”) before the start of the peer review process. This approach was pioneered with the 1997 launch of the online journal Electronic Transactions in Artificial Intelligence (ETAI), where a two-stage review process was used. First, manuscripts were made available online for interactive community discussion, before later being subject to standard anonymous peer review. The journal stopped publishing in 2002 (Sandewall, 2012). Atmospheric Chemistry and Physics uses a similar system of multi-stage peer review, with manuscripts being made immediately available as “discussion papers” for community comments and peer review (Pöschl, 2012). Other prominent examples are F1000Research and the Semantic Web Journal.

The chief benefit of open pre-review manuscripts is that researchers can assert their priority in reporting findings – they needn’t wait for the sometimes seemingly endless peer review and publishing process, during which they might fear being scooped. Moreover, getting research out earlier increases its visibility, enables open participation in peer review (where commentary is open to all), and perhaps even, according to Pöschl (2012), increases the quality of initial manuscript submissions.

Open final-version commenting

Open final-version commenting is review or commenting on final “version of record” publications. If the purpose of peer review is to assist in the selection and improvement of manuscripts for publication, then it seems illogical to suggest that peer review can continue once the final version-of-record is made public. Nonetheless, in a literal sense, even the declared fixed version-of-record continues to undergo a process of improvement (occasionally) and selection (perpetually).

The internet has hugely expanded the range of effective action available to readers who wish to offer feedback on scholarly works. Where before only formal routes like letters to the journal or commentary articles offered readers a voice, now a multitude of channels exist. Journals are increasingly offering their own commentary sections. Walker & Rocha da Silva (2015) found that of 53 publishing venues reviewed, 24 provided facilities to enable user comments on published articles – although these were typically not heavily used. Researchers seem to see the worth of such functionalities, with almost half of respondents to a 2009 survey believing that supplementing peer review with some form of post-publication commentary would be beneficial (Mulligan et al., 2013). But users can “publish” their thoughts anywhere on the Web – via academic social networks like Mendeley, ResearchGate and Academia.edu, via Twitter, or on their own blogs. The reputation of a piece of work is continuously evolving as long as it remains the subject of discussion.

Improvements based on feedback happen most obviously in the case of so-called ‘living’ publications, like the Living Reviews group of three disciplinary journals in the fields of relativity, solar physics and computational astrophysics, which publish invited review articles whose authors regularly update them to incorporate the latest developments in the field. But even where the published version is anticipated to be the final version, it remains open to future retraction or correction. Such changes are often fueled by social media, as in the 2010 case of #arseniclife, where social media critique of flaws in the methodology of a paper claiming to show a bacterium capable of growing on arsenic resulted in refutations being published in Science. The Retraction Watch blog is dedicated to publicizing such cases.

A major influence here has been the independent platform PubPeer, which proclaims itself a “post-publication peer review platform”. When its users swarmed to critique a Nature paper on STAP (Stimulus-Triggered Acquisition of Pluripotency) cells, PubPeer argued that its “post-publication peer review easily outperformed even the most careful reviewing in the best journal. The papers’ comment threads on PubPeer have attracted some 40000 viewers. It’s hardly surprising they caught issues that three overworked referees and a couple of editors did not. Science is now able to self-correct instantly. Post-publication peer review is here to stay” (PubPeer, 2014).

Open platforms

Open platforms peer review is review facilitated by a different organizational entity than the venue of publication. Recent years have seen the emergence of a group of dedicated platforms which aim to augment the traditional publishing ecosystem by de-coupling review functionalities from journals. Services like RUBRIQ and Peerage of Science offer “portable” or “independent” peer review; a similar service, Axios Review, operated from 2013 to 2017. Each platform invites authors to submit manuscripts directly to it, organises review amongst its own community of reviewers, and returns review reports. In the case of RUBRIQ and Peerage of Science, participating journals then have access to these scores and manuscripts and so can contact authors with a publishing offer or to suggest submission. Axios, meanwhile, directly forwarded the manuscript, along with reviews and reviewer identities, to the author’s preferred target journal. The models vary in their details – RUBRIQ, for example, pays its reviewers, whereas Axios operated on a community model where reviewers earned discounts on having their own work reviewed – but all aim in their own ways to reduce inefficiencies in the publication process, especially the problem of duplication of effort. Whereas in traditional peer review a manuscript may be reviewed afresh at each journal to which it is successively submitted and rejected, such services need just one set of reviews, which can be carried over to multiple journals until the manuscript finds a home (hence “portable” review).

Other decoupled platforms aim at solving different problems. Publons seeks to address the problem of incentive in peer review by turning peer review into measurable research outputs. It collects information about peer review from reviewers and publishers to produce reviewer profiles which detail verified peer review contributions that researchers can add to their CVs. Overlay journals like Discrete Analysis, discussed above, are another example of open platforms. Peter Suber (quoted in Cassella & Calvi, 2010) defines the overlay journal as “An open-access journal that takes submissions from the preprints deposited at an archive (perhaps at the author’s initiative), and subjects them to peer review…. Because an overlay journal doesn’t have its own apparatus for disseminating accepted papers, but uses the pre-existing system of interoperable archives, it is a minimalist journal that only performs peer review.” Finally, there are the many venues through which readers can now comment on already-published works (see also “open final-version commenting” above), including blogs and social networking sites, as well as dedicated platforms such as PubPeer.

Conclusion: A unified definition of open peer review

We have seen that the definition of “open peer review” is contested ground. My aim here has been to provide some clarity as to what is being referred to when this term is used. By analyzing 122 separate definitions from the literature, I have identified seven different traits of OPR, each of which aims to resolve different problems with peer review. Amongst the corpus of definitions there are 22 unique configurations of these traits – so 22 distinct definitions of OPR in the literature. Given that this is such a contested concept, in my view the only sensible way forward is to acknowledge the ambiguity of the term, accepting that it is used as an umbrella concept for a diverse array of peer review innovations.

The theme that unifies these diverse traits is Open Science. Factors like opening identities, reports and participation all bespeak the ethos of Open Science in trying, in their differing and overlapping ways, to bring greater transparency, accountability, inclusivity and flexibility to the restricted traditional model of peer review.

Based upon this analysis I offer the following unified definition:

OPR definition: Open peer review is an umbrella term for a number of overlapping ways that peer review models can be adapted in line with the ethos of Open Science, including making reviewer and author identities open, publishing review reports and enabling greater participation in the peer review process. The full list of traits is:

  • Open identities: Authors and reviewers are aware of each other’s identity.

  • Open reports: Review reports are published alongside the relevant article.

  • Open participation: The wider community is able to contribute to the review process.

  • Open interaction: Direct reciprocal discussion between author(s) and reviewers, and/or between reviewers, is allowed and encouraged.

  • Open pre-review manuscripts: Manuscripts are made immediately available (e.g., via pre-print servers like arXiv) in advance of any formal peer review procedures.

  • Open final-version commenting: Review or commenting on final “version of record” publications.

  • Open platforms: Review is de-coupled from publishing in that it is facilitated by a different organizational entity than the venue of publication.

Data availability

Dataset including full data files used for analysis in this review: http://doi.org/10.5281/zenodo.438024 (Ross-Hellauer, 2017).

Notes

[1] The provenance of this quote is uncertain, even to Suber himself, who recently advised in personal correspondence (19.8.2016): “I might have said it in an email (as noted). But I can’t confirm that, since all my emails from before 2009 are on an old computer in a different city. It sounds like something I could have said in 2007. If you want to use it and attribute it to me, please feel free to note my own uncertainty!”


