Reporting on Editor Performance
By Sherrie Hill & Jason Roberts
For a journal office, it is important to understand how each of your editors is performing. Are all of your editors evaluating manuscripts in the same way? Are some editors slower than the rest? Are some over- or underutilized? In Origin Reports, we have developed a chart that presents all of this information in both graphical and tabular form.
The Editor Performance bubble chart is undoubtedly information dense. Indeed, it may seem daunting initially. There are three important points to remember to help break down the data:
The size of the bubble relates to the number of papers handled
The x-axis shows the mean number of days it takes for a given editor to handle a paper from assignment through reviewer selection to the delivery of either a recommendation or decision
The y-axis holds information on the rejection rate.
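As an illustration, these three encodings can be sketched with matplotlib. The editor names, values, and output filename below are all invented for this example; they are not taken from Origin Reports itself:

```python
# A minimal sketch of an editor-performance bubble chart.
# All names and numbers are hypothetical, for illustration only.
import matplotlib
matplotlib.use("Agg")  # headless backend; no display required
import matplotlib.pyplot as plt

editors = ["Chen", "Meyers", "Johns", "Jensen"]
mean_days = [55, 18, 55, 92]     # x-axis: mean days to initial decision
reject_rate = [100, 75, 58, 0]   # y-axis: rejection rate (%)
assignments = [8, 10, 30, 5]     # bubble size: manuscripts assigned

fig, ax = plt.subplots()
# Scale assignment counts so the smallest bubbles stay visible.
ax.scatter(mean_days, reject_rate, s=[a * 40 for a in assignments], alpha=0.5)
for name, x, y in zip(editors, mean_days, reject_rate):
    ax.annotate(name, (x, y), ha="center")
ax.set_xlabel("Mean days to initial decision")
ax.set_ylabel("Rejection rate (%)")
ax.set_title("Editor performance")
fig.savefig("editor_performance.png")
```

The bubble with the most assignments (here, the hypothetical editor Johns) dominates the chart visually, which is exactly the effect described below.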
To simplify the chart, this illustrative example uses the Editor Name filter to limit the information displayed to only four editors. Looking at the y-axis (Rejection Rates), we see that editor Chen has the highest rejection rate of the four and rejected all the manuscripts assigned to them. Editor Meyers also has a very high rejection rate of approximately 75%, while Johns is rejecting just under 60% of their assigned manuscripts. Editor Jensen has not rejected any of the manuscripts assigned to them.
Using the x-axis, we can get information about how fast the editors reach their initial decisions. Editor Meyers is by far the fastest and reaches their initial decisions in less than 20 days on average. Editors Johns and Chen both average around 55 days to reach an initial decision. Of this group, editor Jensen takes the longest, averaging over 90 days.
As already mentioned, the size of the bubble also gives us information. The more manuscripts the editor was assigned in the time period (shown in the Parameters section of the chart) the larger their bubble will appear on the chart. From the bubble size, we can tell that editor Johns received the most assignments, while the other editors had fewer assignments.
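Behind the scenes, the three quantities the chart encodes could be derived from per-manuscript records along these lines. This is a plain-Python sketch with invented field names and data, not the actual Origin Reports implementation:

```python
# Sketch: deriving the three bubble-chart metrics (assignment count,
# mean days to initial decision, rejection rate) from per-manuscript
# records. Field names and data are hypothetical.
from collections import defaultdict

records = [
    # (editor, days_to_initial_decision, decision)
    ("Chen", 54, "reject"),
    ("Chen", 56, "reject"),
    ("Jensen", 92, "revise"),
    ("Johns", 50, "reject"),
    ("Johns", 60, "revise"),
]

per_editor = defaultdict(list)
for editor, days, decision in records:
    per_editor[editor].append((days, decision))

metrics = {}
for editor, rows in per_editor.items():
    n = len(rows)                           # bubble size
    mean_days = sum(d for d, _ in rows) / n  # x-axis
    rejected = sum(1 for _, dec in rows if dec == "reject")
    metrics[editor] = (n, mean_days, 100 * rejected / n)  # y-axis in %
```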
We can use this information to make some preliminary comments about these four editors’ performance. In general, editors Chen and Meyers appear to be more stringent when they evaluate the manuscripts assigned to them and might be rejecting at a rate the journal encourages. Editor Jensen, who has not rejected any assigned manuscripts, might need some additional training to understand the journal’s criteria for rejecting manuscripts; alternatively, they might be a new editor, or their slowness could be caused by low assignment volume. Though we can get a general idea of how these four editors are performing, we really need to add the rest of the journal’s editors to get the full picture, since we often consider editor performance relative to all of the editors for a journal. With all of the editor bubbles shown on the chart, we can start to look for outliers and try to determine what they indicate.
When trying to interpret editors’ performance regarding their rejection rates, you need to consider the goals for your journal. Many journals set rejection rate goals or at least have an idea of what they expect to see based on their submission volume. Considering the expected rejection rate, you can see which editors fall outside the journal’s norm. This might indicate editors who need additional information from the Editor-in-Chief, or training to help them align with the rest of the editorial staff, especially if they are new to their role with your journal.
If your journal does not yet have an idea of what to expect regarding rejection rates, this chart is a good place to start. You can use the chart to see where the editor bubbles are clustered and thereby determine the rejection rate for most of your editors. Then your editorial team can decide whether to keep that rejection rate range going forward, or to adjust the editors’ criteria to allow more or fewer manuscripts into your peer review process.
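One simple way to locate that cluster is to compute quartiles of the per-editor rejection rates; the interquartile range then gives a rough "normal" band. A sketch using Python's statistics module and invented rates:

```python
# Sketch: finding the journal's typical rejection-rate band as the
# interquartile range of per-editor rates. All values are hypothetical.
import statistics

reject_rates = [100, 75, 58, 0, 62, 55, 60, 65]  # one value per editor (%)

q1, q2, q3 = statistics.quantiles(reject_rates, n=4)
print(f"Most editors reject between {q1:.0f}% and {q3:.0f}% (median {q2:.0f}%)")
```

Editors well above q3 or below q1 are the ones worth a closer look.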
We also look for editors who are taking longer to reach their initial decisions than the rest of the editorial team. Again, your journal may have a goal for the time it takes a manuscript to go through peer review and obtain an initial decision. If not, this bubble chart might be a good starting point to help your journal set goals and track progress over time. If your journal has a goal of 2½ months to initial decision, you can see that most of the editors are able to reach an initial decision within that time period. In the bubble chart above, five editors have averages outside this window.
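Checking editors against such a goal is a simple filter. A sketch assuming a 2½-month goal of roughly 75 days, with invented per-editor averages:

```python
# Sketch: flagging editors whose mean time to initial decision exceeds
# a hypothetical journal goal of ~75 days (about 2.5 months).
GOAL_DAYS = 75

mean_days = {"Chen": 55, "Meyers": 18, "Johns": 55, "Jensen": 92}  # invented

over_goal = sorted(name for name, days in mean_days.items() if days > GOAL_DAYS)
print(over_goal)
```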
For editors with fewer assignments, this is not unexpected. Many journals find that underutilized editors are often slower to reach decisions, since they do not interact with the submission system software as frequently and may not have as strong a relationship with a pool of reviewers. Of more concern is editor Santos, since a large number of manuscripts were assigned to them. For this editor, it would be time to dig deeper and identify possible causes. Is this editor historically slower? Have their assignments increased over time to the point where it is hard for them to keep up with their workload? Are they routinely assigned manuscripts in a specialty that inherently takes longer to evaluate, such as statistical analyses or very involved medical research projects? Did a few manuscripts take much longer than the rest to reach an initial decision, skewing their average? Since this editor affects the timeline of a larger number of manuscripts, determining whether something could improve their time to initial decision would benefit the journal’s overall turnaround time.
Depending on the number of editors working for your journal, the editor bubble chart can become overcrowded. We suggest using the Editor Role and Editor Name filters to thin out the data shown on the chart. You could try showing a limited number of editor roles per chart. If the chart still appears densely populated, try grouping editors by specialty. If you have a first look or triage editor who evaluates all incoming manuscripts, it is generally a good idea to exclude them from the chart, since the size of their bubble will make the other bubbles appear significantly smaller and harder to distinguish from one another. You may also wish to exclude specialty editors who only get a few assignments per year. If you have guest editors, you may wish to exclude them as well, since they are not typically as efficient as your regular editorial staff, which might skew the timeline and make the x-axis range larger than it would otherwise be. Equally, and as a confounder, they may be working outside the normal journal workflow, perhaps holding off on decisions for a group of specially commissioned materials until all of the manuscripts are ready.
Regardless of the number of editors working for your journal, the editor performance bubble chart can be a valuable resource for assessing current editor performance and helping your editorial team set goals for the future!