“If philosophy relies too heavily on rejection rates as a measure for journal quality or prestige, we run the risk of further degrading the quality of peer review.”
In the following post, Toby Handfield, Professor of Philosophy at Monash University, and Kevin Zollman, Professor of Philosophy and Social and Decision Sciences at Carnegie Mellon University, explain why they believe the common practice of using journal rejection rates as a proxy for journal quality is bad.
This is the second in a series of weekly guest posts by different authors at Daily Nous this summer.
Rejection Rates Should Not Be a Measure of Journal Quality
by Toby Handfield and Kevin Zollman
Ask any philosopher about the state of publishing in academic philosophy and they will complain. Near the top of the list will be the quality of reviews (they’re poor) and rejection rates (they’re high). Indeed, philosophy does have extremely high rejection rates relative to other fields, and it’s hard to know why. Perhaps there is simply more low-quality work in philosophy than in other fields. Or perhaps high rejection rates are themselves something philosophy journals strive to maintain: many journals aim to publish only the very best work within their purview, and perhaps they use their rejection rates as evidence to themselves that they are succeeding.
Like many fields, philosophy also has an implicit hierarchy of journals. Of course, people disagree at the margins, but there seems to be widespread agreement among anglophone philosophers (at least) about what counts as a top 5 or top 10 journal. Looking at some (noisy) data about rejection rates, it does appear that the most highly regarded journals have high rejection rates. So, while we complain about rejection rates, we also seem to—directly or indirectly—reward journals that reject often.
It is quite natural to use rejection rates as a kind of proxy for the quality of the journal, especially in a field like philosophy where other qualitative and quantitative measures of quality are somewhat unreliable. We think it is quite common for philosophers to use the rejection rates of journals as a proxy for paper quality when thinking about hiring, promotion, and tenure. It’s impressive when a graduate student has published in The Philosophical Review, in large part because The Philosophical Review rejects so many papers. Rejection rates featured prominently—among many other things—in the recent controversy surrounding the Journal of Political Philosophy.
We, along with co-author Julian García, argue that this might be a dangerous mistake. (This paper is forthcoming in Philosophy of Science—a journal that, we feel obligated to point out, has a high rejection rate.) Our basic argument is that as journals become implicitly or explicitly judged by their rejection rates, the quality of peer review will go down, thus making journals worse. We do so by using a formal model, but the basic idea is not hard to understand.
We start by asking a very basic question: what is it that a journal is striving to achieve? We consider two alternatives: (1) that the journal is trying to maximize the average quality of its published papers or (2) that the journal is trying to maximize its rejection rate. The journal must decide both what threshold counts as good enough for their journal and also how much effort to invest in peer review. They can always make peer review better, but it comes at a cost (something that is all too familiar).
This already shows why judging journals by rejection rates can be quite harmful. If a journal is merely striving to maximize its rejection rate, it doesn’t much care who it rejects. So it has less incentive to invest in high-quality peer review than does a journal judged by the average quality of its published papers. After all, if a journal only cares about rejection rates, it doesn’t much matter whether a rejected paper was good or bad.
This alone is probably sufficient to give one pause, but it actually gets much worse. In that quick argument, we implicitly assumed a fixed population of authors who mindlessly submit to the journal, hoping to get lucky. In the real world, however, authors might be aware of their chances of acceptance and choose not to submit if they regard the effort as not worth the cost.
A journal editor who wants to maintain a high rejection rate now has a problem. If the journal is too selective, authors of bad papers might opt not to submit, and a paper that isn’t submitted can’t be rejected. If a journal very predictably rejects papers below a given standard, its rejection rate will go down, because authors of weaker papers will know they don’t stand a chance of being accepted. A journal editor who cares about the journal’s rejection rate is then motivated to tolerate more error in the peer review process in order to give authors a fighting chance of being accepted. Unreliable peer review becomes a carrot that encourages authors to submit, which in turn allows the journal to keep its rejection rate high.
We consider several variations on our model to demonstrate how this result is robust to different ways that authors might be incentivized to publish in different journals. We would encourage the interested reader to look at the details in the paper.
Of course, our method is to use simplified models, and in doing so we run the risk that a simplification might be driving the results. Most concerning, to our minds, is that our model features a world with only one journal. Philosophy has many journals, although in some subfields a single journal may dominate as the premier outlet. Future work would need to determine whether this is a critical assumption, although our guess is that it is not.
Although we don’t investigate this in our paper, we think that the process we identify might also exist in other selection processes like college and graduate school admission or hiring. In the US, colleges often advertise the selectivity of their admissions process, and we suspect that they face the same perverse incentives we identify.
Whether you share our intuition about this or not, we think the process we identify is concerning. If philosophy relies too heavily on rejection rates as a measure of journal quality or prestige, we run the risk of further degrading the quality of peer review. We think it is potentially problematic that journals sometimes advertise their rejection rates, lest this contribute to rejection rates becoming a sought-after mark of prestige. Furthermore, we think it is important that philosophy as a discipline walk back its use of rejection rates as a proxy for journal quality. To the extent that we rely on them now, we may actually be undermining the very thing we hope to achieve.