How can we make journal editing more transparent? That’s the question of a timely article in the recent issue of Metaphilosophy, “Why not Open the Black Box of Journal Editing in Philosophy? Make Peer Reviews of Published Papers Available,” by Caroline Schaffalitzky de Muckadell and Esben Nedenskov Petersen (both of the University of Southern Denmark).
They note that there are frequent complaints about the fairness, bias, and reliability of anonymous peer review. Here’s their suggestion:
[P]eer reviews should be made publicly available once a paper is published. This new procedure should include not only reviews from the journal accepting the paper, but also previous reviews which resulted in rejections from other journals. The idea is not to breach anonymity of the reviewers, but to put forward their arguments for public scrutiny. Accordingly, when a paper is accepted, it should be published along with anonymous reviews from the journal it has been accepted by as well as reviews from previous submissions to other journals. Previous versions of the published paper should also be made public to ensure access to the specific material assessed in the earlier reviews.
We argue that this transparency would further attempts to secure fairness and reliability of the reviewing process, or—if it is already fair and reliable—document that it is so and help put an end to misplaced suspicions. In addition, our suggestion can provide information useful for authors, editors, reviewers, and the academic community as such.
Here are a few more reasons Schaffalitzky de Muckadell and Petersen put forward their suggestion (the numbering is added here for ease of discussion):
1. “Transparency can counter reckless and abusive reviews. For instance, editors will have an interest in securing that reviews meet ethical and academic standards when they know that reviews will be made publicly available and so reflect on the journal’s reputation.”
2. Seeing the multiple reviews of an article may reveal patterns of subtle bias in its treatment.
3. Their suggestion can reveal patterns regarding desk rejection. “If it turned out that a journal often desk rejected papers that were subsequently published unaltered by other journals, this would call for an explanation. And similarly, if an editor tended to desk reject papers with views opposing her own this could be uncovered.”
4. It will make it “easier to track correlations between, for instance, initial acceptance and later impact of revised papers. This could be useful in various discussions such as how reviews influence the quality of journal papers.”
5. The practice would “provide information on the standards of academic evaluation of quality within the profession” and “may aid discussions about best practice in reviewing.”
It’s not just authors who will benefit from the increased transparency, they argue:
For journal editors the proposal offers several advantages as well. First of all, authors will likely be less inclined to submit work they know is unfinished just to get qualified comments on papers in progress. Second, it is a tool that can help secure the quality of reviews and sharpen a journal’s academic status. And third, and most importantly, the suggestion offers a transparent process to help prevent suspicion of unfairness or epistemic unreliability.
Reviewers, too, may benefit from being “able to compare (and perhaps reevaluate) their own assessment of a paper if they can see other reviews of the same paper.”