A new article in Synthese presents two new rankings of philosophy journals—a survey ranking and a composite of several existing rankings—and discusses their strengths and weaknesses.
The paper, “Ranking philosophy journals: a meta-ranking and a new survey ranking,” is by Boudewijn de Bruin (Groningen, Gothenburg).
While philosophers seem to care a lot about rankings, de Bruin says, we could stand to pay more attention to how these rankings are determined:
We think a lot about our own field—more, perhaps, than people working in other academic disciplines. This may be unsurprising, because philosophy, unlike many other fields, has an immense array of tools that facilitate such thinking. But when it comes to journal rankings, we are far less self-reflective than most other scholars. In many fields, journal rankings are published in the best journals, and are continuously evaluated, criticized, revised, and regularly updated. Philosophy, by contrast, has no ranking rigorously developed on the basis of up-to-date bibliometric and scientometric conventions.
To address this, de Bruin compiled a meta-ranking of philosophy journals that combines Brian Leiter's survey ranking of "general" philosophy journals (2018) with citation-based rankings from Scopus and Scimago (2019), Google Scholar (collected using Publish or Perish in 2021), Google Scholar (2019), and Web of Science (2019).
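The post doesn't spell out how the source rankings are combined into one. As a rough illustration only (not de Bruin's actual procedure), one common approach is to average each journal's normalized rank across the sources in which it appears. The sketch below uses invented positions for real journal names:

```python
# Hypothetical sketch of one standard rank-aggregation method:
# average each journal's normalized rank (0 = best, 1 = worst)
# across the source rankings in which it appears.
# The positions below are invented, NOT the paper's data.

sources = {
    "Leiter 2018":    ["Phil Review", "Mind", "Nous"],
    "Scimago 2019":   ["Mind", "Phil Review", "Nous"],
    "Google Scholar": ["Nous", "Phil Review", "Mind"],
}

scores: dict[str, list[float]] = {}
for ranking in sources.values():
    n = len(ranking)
    for position, journal in enumerate(ranking):
        # normalize so rankings of different lengths are comparable
        scores.setdefault(journal, []).append(position / max(n - 1, 1))

meta = sorted(scores, key=lambda j: sum(scores[j]) / len(scores[j]))
for rank, journal in enumerate(meta, start=1):
    avg = sum(scores[journal]) / len(scores[journal])
    print(f"{rank}. {journal} (mean normalized rank {avg:.2f})")
```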
Here is the resulting meta-ranking:

[Meta-ranking table omitted; see the full paper.]

In addition to compiling the meta-ranking, de Bruin also conducted his own survey, which was completed by 351 respondents:
The survey started with a brief explanation. Participants were informed that they were asked to rate journals on a scale from 1 (“low quality”) to 5 (“high quality”), and that we were interested in their “personal assessment of the journal’s quality,” and not in their assessment of “the journal’s reputation in the philosophy community.” Furthermore, it was stated that if a participant is “insufficiently familiar with the journal to assess its quality,” they should select the sixth option, “Unfamiliar with journal.” It was also made clear that the survey was fully anonymous, and that data will be retrieved, stored, processed, and analyzed in conformance with all applicable rules and regulations, and that participants could voluntarily cease cooperation at any stage. Then respondents were asked to assess the quality of all journals. The journals appeared in random order, which is the received strategy to control for decreasing interest among participants and for fatigue bias. Subsequently, participants were asked to provide information about their affiliation with journals (editorial board member, reviewer/referee, author), and a number of demographic questions were asked (gender, age, ethnicity/race, country of residence, area of specialization, number of refereed publications, etc.). An open question with space for comments concluded the survey.
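As a small illustration of how ratings on such a scale are typically summarized (not necessarily de Bruin's exact analysis), each journal's score can be computed as the mean of its 1–5 ratings after dropping "Unfamiliar with journal" responses. The data below is invented:

```python
# Hypothetical illustration: mean quality score per journal on the
# 1-5 scale, excluding "Unfamiliar with journal" (coded here as None).
# The response data is invented, not from the actual survey.

responses = {
    "Journal A": [5, 4, None, 5, 3],
    "Journal B": [None, None, 4, 4, 2],
}

for journal, ratings in responses.items():
    known = [r for r in ratings if r is not None]
    mean = sum(known) / len(known) if known else float("nan")
    print(f"{journal}: mean {mean:.2f} from {len(known)} familiar raters")
```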
Here are the results of de Bruin’s own survey:

[Survey ranking table omitted; see the full paper.]

Some remarks from the author about his survey:
“If we consider not the mere position in the ranking, but the absolute values of the average quality that respondents assign to the individual journals, we see that there is not a lot of variation between the perceived quality of the top 10 journals.”
“The journals that respondents had to rank do not represent the whole spectrum of philosophy journals.”
“There is no indication that the perception of journal quality depends on gender, ethnicity, or country of origin.”
And about the two kinds of ranking:
“Our meta-ranking, rather than our survey-based ranking, may prove most relevant to the philosophical community…. [A] meta-ranking is much less prone to be influenced by bias.”
“Philosophy is highly diverse when it comes to methods and traditions, but methods and traditions are unevenly distributed across journals.”
“Further meta-rankings should be developed for a much wider range of journals and subfields of philosophy, and surveys should cover a much larger variety of philosophers from subdisciplines (and be more diverse on other relevant dimensions just as well).”
De Bruin also discusses limitations and criticisms of the rankings, as well as guidance and cautions regarding the use of rankings generally. The full paper is here.
(Thanks to several readers for letting me know about de Bruin’s article.)