Senior Partner, Origin Editorial
LinkedIn: Jason Roberts
Editor’s Note: Today’s post is the first in a series of posts on reviewer training. Subsequent posts will examine journal-led efforts already undertaken to train reviewers and offer an implementation plan for providing reviewer training regardless of the size or subject matter of a given journal.
* Peer review is an essential part of scholarly communications but is entirely volunteer driven.
* Journals are struggling to find motivated peer reviewers.
* Reviewer training is sporadic and delivered inconsistently or not at all.
* Could the offer of reviewer training simultaneously help motivate commitments to review for a given journal and improve review quality?
* Training represents a way for journals to engage with the community they serve.
Journals are struggling to secure peer reviewers. The process is slow, cumbersome, and inconsistent in delivering on its promise of validating research and ensuring that published work is of sufficient quality to enhance our understanding of a given topic and allow for reproducibility. Yet the typical journal’s management of reviewers is entirely transactional: a reviewer is asked to evaluate a manuscript, the reviewer provides comments, and everyone moves on. With the obvious motivations to review diminishing in the eyes of some reviewers, should journals do more to give back and reward the hard work of their reviewers? One approach might be for journals to offer training, especially for early-career researchers. After all, learning how to evaluate a manuscript develops eminently transferable skills that can, in turn, improve authoring and study design proficiencies.
The Role of Peer Reviewers
Scholarly communication hinges on the (usually) volunteer work of peer reviewers to validate, critique, and polish manuscripts. Probably for as long as reviewers have been asked to perform this role, criticism has been leveled regarding the arbitrariness, bias, speed, and value of the entire process. Nevertheless, no alternative has successfully displaced peer review as the preeminent model for evaluating research papers. Artificial intelligence seems to represent the future, especially if it could be applied to specialist reviewer tasks such as statistical reporting and results testing. But for now, to ensure manuscripts receive some form of vetting ahead of publication, journals still depend upon that most precious of commodities: the skilled peer reviewer. And journals now do so against a backdrop of declining conversion rates of invitations to review into agreements to review. What can be done? Many suggestions to improve reviewer acceptance rates have been proposed, but perhaps journals need to start by looking at how they value the work of reviewers and what they could offer in return to the reviewers who work so hard for them.
The Value Proposition of Peer Reviewers
Peer reviewers come in many forms. Some may be editorial board members, “preferred” reviewers, or part of an “inner pool” of subject experts who may have tacitly, or contractually, agreed to evaluate a given number of papers in a calendar year. The vast majority, however, are likely ad hoc reviewers called upon because, at a given moment, their research interests match the specifics of a manuscript under evaluation. Other reviewers may possess a more defined role, such as statistical/methods reviewers or patient reviewers. These reviewers may be tasked with a specific function. Equally, they may be given the digital equivalent of a blank sheet of paper (no instructions, in other words).
Regardless of who they are and what they bring, peer reviewers are all asked to determine whether a paper is publishable based upon their areas of expertise. Reviewers are also asked to help improve a paper by suggesting amendments or additional ideas to explore. They request clarifications and correct errors and omissions. Reviewer responsibilities, therefore, are eminently high value, and without such consistent effort, the credibility of the entire scholarly communication enterprise wobbles.
Conversely, arguments abound claiming that peer review does not offer value, or at least that the quality of peer review is so variable that the process cannot be unquestioningly accepted. Naturally, such claims strike at the core propositions of peer-validated publication: quality, verification, and replication. An array of studies does seem to confirm that despite the lofty value-driven goals for peer review, the reality is markedly different.
Core Competencies for Peer Reviewers
The gap between the theoretical value of peer reviewers and the perceived reality as claimed in the aforementioned studies could potentially be explained as a by-product of their work being volunteer generated, though no proof has been presented that paid reviewers perform better. More likely, two persistent issues undermine the process. The first is that reviewers are not always clear why they have been selected to evaluate a paper, nor are they given useful guidance on what a journal expects from them. The second, and of relevance here, is that in many respects peer review is an amateur endeavor. There are very few professional peer reviewers, and those few tend to be specialists such as statisticians. There is a general assumption that if you know how to do research, you can do peer review. While that assumption is not completely unreasonable, it is not an incontestable truth either. The ability to critique is a very different skill from the ability to develop study hypotheses, and each review invitation may demand something different without making that explicit.
Training to evaluate manuscripts is not consistently delivered to trainee researchers, despite the fact that learning how to deconstruct the work of others can enhance self-analytical skills when it comes to one’s own research. More typically, if the typical researcher receives any training at all, it is mentor driven and not systematic.
But to develop any form of training requires an understanding of the core competencies reviewers should display. Remarkably, these skill sets are hopelessly understudied, at least in any systematic way. However, research is slowly emerging to show that, though no set of standards exists, there does seem to be broad agreement across journals on what constitutes a useful skill set. These skills can be broadly broken down into multiple domains (of which the list below is far from exhaustive):
- Subject knowledge prerequisites and a capacity to contextualize research within existing knowledge frameworks or the current publication record
- Methodological or study design proficiency
- Ability to guide authors on manuscript readability and best practices in the presentation of an argument or results
- Competency in detecting spin and bias
- Basic understanding of ethics
- Understanding of personal accountability as a reviewer to authors and editors
Training as Engagement
We have seen that peer review is a vital part of the scholarly communication process, though perceptions and some study results may dispute that. We have explored briefly what makes a good reviewer. While journals may offer incentives or rewards for reviewers, most do little beyond an annual listing of individuals who completed a review. Reviewers frequently report feeling undervalued, so perhaps in lieu of cash payments, journals could consider offering something of value: namely, a free educational opportunity. Indeed, a survey conducted by Wiley in 2015 across 170,000 researchers found that 77% of respondents wanted to receive peer review training. Furthermore, rather than just burying any training resources in a barely visited website, journals could actively use training opportunities as a form of engagement.
Indeed, journals should place a premium on any opportunity to foster closer connections with reviewers, who are also likely authors and readers. An initial step might involve identifying particular groups that could benefit from training. This pool could be offered training with the promise that it may enhance their own prospects of future publication by providing transferable skills. Those who participate can then be tracked and targeted with future communications that promote the journal. Perhaps participants could be highlighted, recognized, or promoted in the journal. Following training, they can also be considered as potential reviewers, which for early-career participants is a useful way to raise their profile. With a nascent relationship then developing between trainees and the journal, along with a clearer sense of what being a reviewer means, it does seem plausible that reviewers might be more willing to respond positively to a request to peer review, especially if they felt the training (and associated investment of effort) from the journal was beneficial and worthwhile. However, to make this engagement meaningful, trainees would need to understand the true value of the training. Journals, therefore, will likely need to work on such messaging.
Regardless of the subject matter, the process of journals soliciting volunteer peer review comments is breaking down. Traditional rewards are not working, and the connections and loyalty that reviewers may once have felt towards certain journals have eroded. For the many new journals that have emerged over the last decade, particularly open access publications, those strong relationships may never have existed, making obtaining reviewers and securing quality peer review comments a persistent challenge.
It might be time for journals to honestly reframe the conversation as a crisis. Most researchers already view peer review as burdensome, albeit one that is necessary. They might not know how hard it has become, however, for journals to find good reviewers. Journals would be well served, therefore, to not only reveal the extent of the problem but also offer ready-made solutions that are meaningful beyond reviewer “thank you” acknowledgements. Developing training programs may sound like a huge effort, and it could be if you go all in with didactic lecture series, mentor-driven seminars, and resource manuals, but it does represent an interesting solution. Equally, your efforts may be as simple as a virtual question-and-answer session with an editor via a webinar or at the annual society meeting. Whatever you attempt, don’t just build something and expect individuals to come. Most won’t. Consider incentives and invest time in your messaging. Explain the benefits. Convey how, for journals and reviewers alike, investing effort in reviewer training benefits everyone in the long term. Use training efforts as an approach to enhance the journal brand: one that invests in its community, one that demands quality but provides pathways to success for all and one that is inclusive.
In an open access future, where authors and readers are less tied to the source and journals are no longer the sole arbiter of quality research, journals will need to retain attention and loyalty by promoting unique brands and forming meaningful relationships. Training represents a novel opportunity to make an evident investment in quality. It also offers the potential of expanding the diversity of the peer review pool.
In the next blog post on this topic, we will review some efforts that have been undertaken by journals, societies, or publishers to train reviewers.
Conflicts of Interest:
None to declare