A Little Rough Data About Journal Refereeing in Philosophy


Is there a refereeing crisis in philosophy? There has been a fair amount of discussion about this over the past couple of months. What was missing from much of this discussion, though, was data. So I asked for some.

I heard from around 40 philosophy journals, but only about half of them were able to provide the kind of information I was after in order to figure out how difficult it is for journal editors to find enough qualified and willing referees: the percentage of those they invite to referee a paper who accept the invitation and write the review.

20 or so journals is not as large a pool of useful data as I was hoping to dive into, but it’s not nothing. What did they say?

The journals that provided helpful data were a varied group in terms of prestige, breadth of subject matter (general/specialist), and popularity of subject matter in the profession. They also varied in the time periods from which they drew their answers, but most of the information was from within the past year (with just a couple stretching further back in time than that). I told the editors that I would not reveal journal-specific information, so in what follows, no particular journals are named.

Across the whole sample I found that, on average, about 40% of the invitations journals send to prospective referees are accepted. About half the journals in this group were within ±7 points of that average. Among the other half, a few had referee acceptance rates down between 20% and 29%, and a few were up between 55% and 60%.

Were there any patterns to which journals had which referee invitation acceptance rates? Given the relatively small sample, it is hard to come to any definitive conclusions. I will say that one thing that’s true of all the journals with rather low referee invitation acceptance rates is that they are not the best-known versions of the type of journal they are. (And by type I mean general or specialist, and if specialist, the specific area.) That said, the generalist journals in this subgroup are definitely known to most philosophers, and the specialist journals in this subgroup are definitely known to the philosophers working in those areas of specialization. So journal obscurity is not doing any explanatory work here. What is? Perhaps it’s higher competition (i.e., more journals) in that space relative to the number of potential reviewers.

One reason to think this explanation has merit is that it’s consistent with the following data, which struck me as odd at first. Since I’m not naming particular journals, I’ll use nicknames for three of them: the Very Good Journal in an Unpopular Area, the Possibly Best Journal in a Popular Area, and the Great Generalist Philosophy Journal. Of these three, the Very Good Journal in an Unpopular Area has the highest referee invitation acceptance rate (in the high 50s): 4 points above the Possibly Best Journal in a Popular Area and 18 points above the Great Generalist Philosophy Journal. The first journal is one of very few journals of its type, and faces little competition for reviewers, while the other two, despite having better reputations in the profession at large, are each one of many of their respective types, and so each face a lot of competition for reviewers.

That said, the supply of potential reviewers is lower for the Very Good Journal in an Unpopular Area than for the other two, owing to differences in the popularity of the areas they cover, so that may count somewhat against the competition explanation. There’s also the possibility that there is a bit of an “underdog” mentality among those working in the unpopular area that motivates them to be more inclined to accept referee requests; or perhaps, knowing that they work in a relatively small area, they feel a less diffuse sense of responsibility for helping with credibility-enhancing institutions in that area, like the journal.

A different possibility is that the Very Good Journal in an Unpopular Area is seen by its specialist community as “their journal”—the one that matters to them and is somewhat definitive for the area it covers—and so members of that community are more motivated to referee for it. Such a possibility was suggested by a journal editor (not associated with any of the above three journals) in reflections they shared with me about their experiences at two different journals, a top generalist journal and a top specialist journal: “Whereas people just about never refused [to referee] for [the specialist journal], the very same people did perhaps a third of the time for [the generalist journal]. It could… be that [the specialists in this area] saw [the specialist journal] as ‘their’ journal in a way that they did not so regard a [top] generalist journal.”

The “competition” and “their journal” explanations, plus some observations about the kinds of journals that had a 30% or smaller referee invitation acceptance rate, suggest (but hardly prove, given the small amount of data) something like the following generalization: if the journal you edit is not among the very first places authors might send their manuscripts, you are going to have more trouble finding referees.

This explanation is compatible with there being multiple reasons why a journal might be among the very first places an author sends their manuscript: the journal might be especially prestigious or high quality, or be focused on distinctive content, or be the center of professional gravity in a particular subfield (owing to its content, or perhaps to the editor’s influential role in the subfield, apart from their work for the journal). Correlatively, it suggests possible routes for how journals might improve their referee invitation acceptance rate: excellence, distinctiveness, and influence.

Another possible explanation for disparate rates is offered by the aforementioned editor: “the personal touch made all the difference.” At the specialist journal, the editor would personally email potential referees, and though they used a form letter template for such messages, they think these messages were still more likely to garner positive responses than the auto-generated messages from editorial management programs: “I’m inclined to blame publishers’ insisting on seeking referees through the editorial management software as a major (if not the only) reason editors are having so much more trouble procuring referees.”

Some further information:

  • the journal that reported the highest referee invitation acceptance rate is also a journal that has a reputation for a high rate of desk rejections
  • very few referees agree to referee a paper and then never submit a report (except at one of the responding journals, about which the editor said their rate of referee ghosting was between 10% and 20%)
  • almost no papers at the 20 journals that sent in information were rejected simply because of difficulties in securing referees and reports (again, except at the same journal referenced in the previous bullet point, at which this happens “occasionally”)

Lastly, it’s worth noting that very few journals maintain this kind of data. For almost all of the journals that contributed data for this post, the data had to first be produced. In many other cases, editors judged it impossible or too time-consuming to do so, or believed the information was inaccessible owing to publishers’ practices or editorial management software. As with questions about submission topics, demographic data about prospective authors, and related matters, it seems it would be useful for our understanding of our profession (what is happening in it, its challenges, its history, its progress, and so on) were journals in the habit of maintaining and occasionally publishing data relevant to their editorial operations.

9 Comments
Andy
2 years ago

Sorry if this is somewhat beside the topic. But please, next, also call attention to the elephant in the room: how unreliable and poor referees’ verdicts can be.

For example, it is rare for two referees to point to the same flaws in the same paper.
Indeed, since one is unlikely to be assigned the same referee again, one could simply ignore the comments and send the paper to another journal, fairly confident that the old criticisms will not be raised again.

Or consider the countless cases in which the referee barely reads the paper and raises objections that are already explicitly addressed in it. Or when they seize on some insignificant detail as an excuse for a hasty rejection.

And all of this without even mentioning that belonging to the right circle and being in close contact with the editors can also make a difference.

Another Andy
Reply to  Andy
2 years ago

Related: A frustrating pattern I have noticed here with my own submissions (and I have heard similar stories from others) is that when I send papers to the absolute top journals I usually (not always) get very sympathetic and competent referee reports. Unfortunately, at such journals, even a minor worry from a referee can be enough to scupper a paper’s chances because of the sheer number of submissions these journals receive and the limited space they have available. But when I send the same papers to less prestigious journals I get absolute trash referee reports. It’s often clear that the referees simply haven’t bothered to properly read the paper. And they are often extremely negative and disparaging.

I’d be interested in whether other people have had similar experiences, and what the explanations might be. One would be that less prestigious journals send papers to less competent referees. Another would be that the fact that one is refereeing for e.g. Nous (and the knowledge that the paper has got past Nous’s editors) primes the referee’s expectations of the paper. A third explanation would be that referees simply don’t care about providing competent reviews for less prestigious journals because they don’t care as much whether the editor thinks less of them (and perhaps don’t care as much about an author who would send a paper to a non-top 5 journal).

Phil Sci Postdoc
Reply to  Another Andy
2 years ago

I must say that I have the exact opposite experience: referee reports from “high prestige” generalist journals are basically worthless, whereas the referee reports I get from (to pick one example) Synthese are usually extremely helpful.

I find this unsurprising in my case: I work in philosophy of science, and it’s not really surprising that Synthese (say) is better at selecting reviewers in that area than higher prestige journals that aren’t run by philosophers of science and that basically never publish papers in the area.

Happy to Review
Reply to  Phil Sci Postdoc
2 years ago

Synthese is amazing. They must receive 1,000 submissions a year but are still able to get high quality comments to most authors.

Curtis Franks
Reply to  Happy to Review
2 years ago

One time I got a rejection letter back from a poetry journal explaining that they receive over 100,000 submissions a year, read them all themselves, and accept fewer than 1/5 of one percent of them. And they included detailed suggestions for making the meter more transparent, making one reference less obscure, and other venues where I might send the work.

anti-Andy
2 years ago

To the Andys
I have generally had good experiences with journals and refereeing in philosophy. Of course I have had lousy referee reports. But on the whole, I think I get a fair hearing from journals. It is one of the most fair domains in our field. I worked at a far less prestigious place than I work at now, and I still got treated in a fair manner. Further, on reflection, after the pain of rejection heals a bit, I have found that most criticisms raised by referees are relevant and need to be addressed.
I also believe that I provide quality referee reports – I have refereed more than 180 papers (84 of them for the following 5 journals: Phil Sci, Synthese, BJPS, Erkenntnis, and SHPS).
I think things are far less fair with funding agencies and the job market.

semi-Andy
Reply to  anti-Andy
2 years ago

In my experience, with some exceptions, I do find higher-ranked journals give more sympathetic and kindly-worded reports. However, I am with anti-Andy in that I find most reports helpful despite some possibly unjustified harshness, and I am overall very happy with my experience with referee reports in all journals except the two that did not provide reports (one of which is JPhil, and I can’t remember the other).

Yet Another Andy
Reply to  anti-Andy
2 years ago

I’m with Anti-Andy.

Happy to Review
2 years ago

Thank you for this important post, Justin! While there may be a referee shortage in certain corners of the profession, the available data suggests we do not have a full-blown crisis on our hands. The post also contains a helpful suggestion (personalize the request!). I wonder if others have similarly simple and cost-effective tips. A narrower, more focused discussion of what we can do on the edges to shore up referee participation, among other things related to publishing, strikes me as especially valuable. I look forward to more discussions of this kind that are grounded in data, and to fewer suggestions about how to restructure all of academic publishing (replace journals with a prize system, ban grad students from publishing).