Journal Rankings — Useful? (guest post by Thom Brooks)

The following is a guest post* by Thom Brooks, Professor of Law and Government at Durham University’s Law School, founding editor of the Journal of Moral Philosophy and blogger at The Brooks Blog.


Journal Rankings — Useful? 
by Thom Brooks

I’ve benefited enormously from much invaluable advice over the years that has fed directly into my Publishing Advice for Graduate Students and The Brooks Blog’s Journal Rankings for Philosophy.

The journal rankings scored philosophy journals according to several criteria. These included their ERA rankings from the Australian Research Council, the European Science Foundation’s ‘European Research Index for the Humanities’, a Leiter Reports ranking conducted by Brian Leiter at his blog, and The Brooks Blog’s survey of 140 philosophy journals and 36,000+ votes.

Some readers will know that my Brooks Blog’s journal rankings bring these different metrics together. Most of the metrics assessed journals at around the same time, so they were generally current, and most journals appeared on several, if not all, of them. I then divided journals by tier, starting with A* (incl. Ethics and Philosophical Review), A (incl. Analysis and APQ), B (incl. Journal of Moral Philosophy and Philosophers’ Imprint), C (incl. Erkenntnis and Review of Metaphysics) and the unfortunately named ‘N/a ranked’ (incl. Metaphilosophy and Philosophy Compass). Full information on how the different lists were scored (with links) and the journal rankings can be found here.
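For readers curious about the mechanics, the following is a minimal sketch of that kind of aggregation: several normalized metric scores per journal are averaged, and the average is banded into tiers by numerical cut-offs. The scores and cut-offs below are invented purely for illustration and are not the actual figures used in The Brooks Blog’s rankings.

```python
# Hypothetical illustration of metric aggregation and tier banding.
# All scores and cut-offs are invented; they are not the actual figures
# used in The Brooks Blog's journal rankings.
from statistics import mean

# Each journal maps to normalized (0-100) scores from several metrics,
# e.g. ERA, ERIH, a Leiter-style poll, and a blog survey.
scores = {
    "Ethics":                 {"ERA": 95, "ERIH": 90, "poll": 98, "survey": 96},
    "Analysis":               {"ERA": 85, "ERIH": 80, "poll": 88, "survey": 84},
    "Journal of Moral Phil.": {"ERA": 75, "ERIH": 70, "poll": 72, "survey": 78},
    "Erkenntnis":             {"ERA": 60, "ERIH": 65, "poll": 58, "survey": 62},
}

def tier(avg: float) -> str:
    """Band an average score into a tier by numerical ranges (invented cut-offs)."""
    if avg >= 90:
        return "A*"
    if avg >= 80:
        return "A"
    if avg >= 70:
        return "B"
    if avg >= 55:
        return "C"
    return "N/a ranked"

for journal, metrics in scores.items():
    avg = mean(metrics.values())  # a journal missing a metric could be averaged over what exists
    print(f"{journal}: {avg:.1f} -> {tier(avg)}")
```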

Much time has passed and things have changed. The metrics used nearly 5 years ago are already outdated. There are new journals to consider, such as the APA’s fine journal, and I suspect that a new vote would lead to several changes in how journals are positioned.

I have two questions for readers and I’d greatly welcome feedback. The first is whether such a ranking is useful. I know there are strong opinions on both sides, but I am unsure what support there is for an updated list.

My second question is—if such a ranking is worth having—whether I should re-launch an updated poll that asks readers to choose the better of two options (and allows repeated votes as different journal pairs are considered) or whether I should release a SurveyMonkey survey asking readers to rank journals 1 to (at least) 150. The problem with the first option is that some will register their preferences much more than others, but the second option may be too unwieldy given the large number of journals around. But maybe it’s not worth doing. Or is it? I’d be very grateful for your comments.
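To make the first option more concrete, here is a toy sketch (with invented votes) of how repeated pairwise choices could be turned into per-journal scores using simple win rates; a Bradley-Terry style model would be a more principled way of doing the same thing, but the basic idea is identical.

```python
# Toy illustration of aggregating repeated pairwise votes into per-journal scores
# using simple win rates; the vote data below are invented.
from collections import Counter

# Each vote records (winner, loser) from a "choose the better of two" poll.
votes = [
    ("Ethics", "Analysis"),
    ("Ethics", "Erkenntnis"),
    ("Analysis", "Erkenntnis"),
    ("Analysis", "Ethics"),
]

wins, appearances = Counter(), Counter()
for winner, loser in votes:
    wins[winner] += 1
    appearances[winner] += 1
    appearances[loser] += 1

# Sort journals by the share of their pairwise match-ups they won.
for journal in sorted(appearances, key=lambda j: wins[j] / appearances[j], reverse=True):
    print(f"{journal}: win rate {wins[journal] / appearances[journal]:.2f}")
```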

(image: detail of “Counting Radiation X” by Megan Hildebrandt)

34 Comments
Kate Norlock
8 years ago

Thanks for starting new efforts, Thom Brooks. If I am correct to have the impression that search committees at prestigious institutions value publication in some journals more than others, then yes, a journal rating or ranking (I’d prefer a rating to a ranking system) that we can all read and discuss is very useful. It is an important improvement upon the historical tendency to rely on common knowledge (which was never actually common).

The second option sounds less good, asking for fine-grained distinctions in the 125-150 range that I would not have great confidence in, but perhaps that’s just me.

Sin nombre
8 years ago

This might be a naive question, but couldn’t we have a ranking system that takes into consideration how often articles from those journals are cited? Maybe that’s already part of the criteria and I’m just unaware, but surely that matters in terms of influence.

Also, I don’t know what the metrics are. Maybe I missed something, but knowing them would be helpful in terms of figuring out how useful the rankings are.

A non
8 years ago

In an ideal world, I think we’d have both (1) a subdiscipline-specific ranking of journals, and (2) a general rating system like the one you already have. There’s an apples-to-oranges problem comparing, e.g., a journal devoted to Hume with a generalist journal, so it’s hard to say that one is better than the other. And in most cases people care almost wholly about the journals in their field anyway. But it is also nice to see that, e.g., the 2nd best journal in my subfield is comparable to the 4th best journal in yours.

There are some logistical benefits to doing subdiscipline rankings, too. I can manageably rank the 8 journals I know in my field, but can’t rank all 150, and can’t say much about journals I rarely read, or only know because of the small number of articles I like from them. Of course, it’s much more work to create the surveys for these rankings. But presumably doing subfield head-to-heads is less work than head-to-head comparisons between all 150 journals.

Regardless, I think it is worth doing an update of some sort or other. It’s useful to know what the common wisdom is, even if it’s neither common nor wise.

david chalmers
8 years ago

with all respect, i’d say it’s probably not a good idea for a journal editor to be ranking journals. it raises too many questions (e.g. the questions that i see were raised about special treatment for ethics journals in your rankings).

Anon3
8 years ago

I think it’s useful and worthwhile as a very rough guide, but something should be done about the way in which specialty journals are mixed in with “generalist” journals, and how specialist journals are compared to each other. I was disappointed, last time, to see that the JAAC only made it in at around 50, and the BJA (it’s no worse than the JAAC, and some think it’s much better) didn’t even make the list. While I appreciate that these journals serve a relatively maligned (but not actually small) subfield, I do think that their showing last time was symptomatic of problems with the means of assessment.

Tim O'Keefe
8 years ago

I think that there is a serious problem with subdiscipline-specific journals on this ranking. I notice that Phronesis and Oxford Studies in Ancient Philosophy are “rated C” journals. But they’re the top two ancient philosophy journals, most specialists would agree, or at least among the top several. If I’m a young TT ancient specialist, and I’m racking up publications in Phronesis and OSAP, I’m doing quite well. And for many ancient papers, it makes a lot more sense to get them published in those journals (or other journals such as Apeiron or Ancient Philosophy) rather than in so-called generalist journals, even those that do occasionally publish articles in the history of philosophy.

But looking at these journal rankings, the young TT ancient specialist with Phronesis and OSAP publications is going to be compared unfavorably to her colleague down the hall, who specializes in epistemology and is getting publications in APQ, the Canadian Journal of Philosophy, and Synthese. The obvious reply is that this is a misuse of the rankings. But I think it’s a predictable misuse, when some administrator looks at a journal rankings list–especially one that purports to be authoritative because it’s a ‘meta-survey’ that draws upon several respected sources–and notices that our ancient specialist is publishing in just C-rated and unrated journals, while our epistemologist is getting into A-rated and B-rated journals.

Langdon
8 years ago

What Tim O’Keefe says is absolutely right. The idea that the publication of an article in OSAP or Phronesis would be anything other than an A or A+ serves as a “reductio” for the entire rankings.

Anonymous
8 years ago

I’m disappointed not to see Hypatia or any feminist philosophy journals on the list. I think that the list as it stands has a de facto male bias.

Woman in M&E
Reply to  Anonymous
8 years ago

I disagree with what Anonymous #8 implies (or, in the parlance of the day, I’m offended by it). It seems to imply that women have a hard time publishing in generalist journals, and that women do feminist philosophy. I’m a woman who publishes in these journals and has no respect for “feminist philosophy”. I don’t think the good journals have a male bias. I suspect that Anonymous #8 doesn’t write papers worthy of publication in a good journal and blames it on other people, probably men.

Woman in M&E
Reply to  Justin Weinberg
8 years ago

Sorry for the insult.

Since not all women are feminists, and not all women or feminist philosophers do “feminist philosophy”, suggesting that the absence of “feminist philosophy” journals in the rankings indicates a male bias is even more absurd. All that one can conclude is that there might be an anti-feminist-philosophy bias. But that could even be false if the “feminist philosophy” journals just aren’t that good.

Nick Byrd
8 years ago

The Ranking Problems: There are many problems with rankings, especially with rankings of such a large set of items that seem to fall into separate categories. Some have recommended ratings instead of rankings. I agree.

The Static Data Problem: Another foreseeable problem with the kind of resource being proposed is updating. Polls produce a static dataset representing feedback from a select moment in the past. Since quality can change over time, the utility of such a static dataset only depreciates over time. Also, replacing old datasets with new ones fails to account for interesting longitudinal perspectives on journals.

The Editor’s Bias Problem: David Chalmers raises the problem of having editors curate rankings (#4).

To address these problems, I propose the following: a journal rating system that allows (i) raters to update their ratings of a journal whenever they like, (ii) raters to indicate their relationship to every journal they are rating [i.e., editor, author, referee, etc.], (iii) ratings to update in real-time as users add/update their ratings, and (iv) users to produce customized reports (e.g., a report that sorts all journals in a particular area according to their overall ratings, excluding ratings from before 2014, excluding ratings from journal editors).
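Purely as an illustration of (i)–(iv), here is a minimal sketch of what such a filterable rating store could look like; every name, field, and filter below is hypothetical rather than a description of any existing system.

```python
# Hypothetical sketch of the proposed rating system; all names, fields and data are invented.
from dataclasses import dataclass
from datetime import date
from statistics import mean
from typing import Optional

@dataclass
class Rating:
    journal: str
    area: str           # subfield, e.g. "ethics", "ancient"
    score: int          # e.g. on a 1-100 scale
    relationship: str   # "editor", "author", "referee", "reader", ... (feature ii)
    when: date          # date of the rating, so reports can filter by time (features i and iv)

def report(ratings, area: Optional[str] = None,
           since: Optional[date] = None,
           exclude_relationships: frozenset = frozenset()):
    """Customized report (feature iv): filter by area, date, and rater relationship,
    then average the remaining scores per journal."""
    kept = [r for r in ratings
            if (area is None or r.area == area)
            and (since is None or r.when >= since)
            and r.relationship not in exclude_relationships]
    journals = {r.journal for r in kept}
    return {j: mean(r.score for r in kept if r.journal == j) for j in journals}

# Example: ethics journals, ratings from 2014 on, excluding editors' own ratings.
ratings = [
    Rating("Ethics", "ethics", 95, "reader", date(2015, 3, 1)),
    Rating("Ethics", "ethics", 99, "editor", date(2015, 3, 2)),
    Rating("Journal of Moral Philosophy", "ethics", 80, "author", date(2013, 6, 1)),
    Rating("Journal of Moral Philosophy", "ethics", 85, "referee", date(2015, 1, 10)),
]
print(report(ratings, area="ethics", since=date(2014, 1, 1),
             exclude_relationships=frozenset({"editor"})))
```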

There are a variety of ways to create this kind of web-based resource. Also, this kind of resource could be worked into existing websites like ResearchGate, Academia.edu, PhilPapers, etc.

Various types of ratings could be used (e.g., a 5-point Likert scale per variable, a 10-point scale, a 100-point scale with a letter-grade schema, etc.).

Perhaps some of the variables by which journals should be rated could include the following (in case these variables aren’t already included in various rankings/ratings):

“The papers that I have read from this journal in the past 24 months are of the following quality: ____.”
“The turn-around time for papers I’ve submitted to this journal has been ______.”
“Based on my first-hand experience of the refereeing process of this journal, the integrity of this journal’s peer-review process seems to be _____.”
“Based on second-hand hearsay about the refereeing process of this journal, the integrity of this journal’s peer-review process seems to be _____.”
“The likelihood that my students would benefit from reading papers in this journal is ____.”

Citations: Citations do not necessarily indicate quality per se. The number of citations a paper receives could be a function of things other than quality (e.g., controversy, fads, etc.). This is not a new point. I mention it only to emphasize a point: if citation metrics are included in a rating system, then it would be all the more important for users of the rating system to be able to remove or discount the weight of citation variables when producing custom reports about journals.

For whatever it’s worth, I’ve proposed a similar system for rating philosophy graduate programs: http://www.byrdnick.com/archives/6451

anongrad
8 years ago

Is there any way to incorporate stuff like rejection rates, reviewer comment quality, and review times into a new journal ranking system? I’ve always liked getting that sort of data from Andrew Cullison’s journal survey site because it plays a big role in deciding where to submit my work, in addition to rankings based on reputation and citations. Having a ranking system that factors that sort of information in would save a lot of us the trouble of seeking it out ourselves (or finding out the hard way). It would also reward the editors of journals who treat authors well, and provide incentives for improvement for others.

Thom Brooks
8 years ago

Keep these great comments coming. I will respond after more feedback is received, but I want to address the point David Chalmers makes above. I started the Journal of Moral Philosophy and was its editor for 10 years, but I stepped down when moving to Durham and my great friend (and fabulous philosopher) Matthew Liao is now its editor. So this worry should not apply.

david chalmers
Reply to  Thom Brooks
8 years ago

thanks, thom. i stand corrected — though i’m inclined to think the point applies equally to founding editors.

recent grad
8 years ago

Minor tangent: does anyone know why Cullison’s Journal Surveys site is no longer being updated? I looked at a dozen or so entries, and the latest data were from last December.

Chris Tucker
Reply to  recent grad
8 years ago

I think it’s still being updated, but some of the raw data is updating in odd ways. For the journal you have in mind, check toward the top of the raw data. For some journals, the most recent additions are toward the top rather than at the bottom.

Michael Cholbi
8 years ago

I count myself skeptical of rankings if what is supposed to be captured by them is which journals are ‘best,’ i.e., the journals that contain the highest quality articles. One reason for skepticism has been alluded to earlier: Philosophy is too diverse topically and methodologically for meaningful comparisons. I don’t see any basis for commensurable judgments regarding the quality of articles published in (say) the Journal of Clinical Ethics, the Review of Metaphysics, and Analysis.

My deeper reason for skepticism is that this would amount to an exercise in which respondents report how they perceive the quality or prestige of the various journals, not an informed judgment regarding their contents. I’d guess I’m about average in terms of how many articles I read, but I doubt I’ve read articles from more than 50 different journals in my entire career, and 95% of the articles I have ever read probably come from 25 or so journals. Sure, were I asked, I could vindicate the conventional wisdom that the Phil Review is better than the Southern Journal of Philosophy — but that’s not because I have done any in-depth comparison of the two. I worry, then, that these rankings would be very good at measuring the pings in our disciplinary echo chamber. Philosophers have done a lot to interrogate the epistemic reliability of testimony. We should apply those lessons to rankings of journals (and grad programs, etc.). If we don’t, then I fear that we’re acquiescing in halo effects: ‘That article must be good – after all, it appeared in J Phil!’

I would certainly support dissemination of objective measures of journals (citations), but any rankings would need to come with the emphatic caveat that they are purely reputational.

Roberta L Millstein
8 years ago

I echo many of the concerns here, especially as articulated by Michael Cholbi in #20. I think this is a bad idea. Having data available concerning acceptance rates, time to acceptance, diversity of acceptances, etc — that is useful. Ratings, maybe, but people should not be encouraged to rate journals they don’t read regularly.

Kevin V
8 years ago

In general, I think it is better to have too many rankings than too few (or none). If there is a good objection to those rankings, publicize the objection, formulate an alternative ranking, and proceed from there. In deciding where to send papers, I frequently form a weighted judgment based on various different rankings that we can find. So I support another ranking. I’m not at all worried about data overload here.

Also, I’m not so worried about Dave’s concern, as long as Thom merely sets up a ranking submission system and doesn’t do the rankings himself (which I’m not sure was in the cards).

Dirk Baltzly
8 years ago

It is perhaps worth noting that the Australian research ranking exercise gave up on journal rankings long ago. It uses citation metrics in areas like the physical or bio-medical sciences where these metrics are accepted by members of the research community as measuring something worth measuring. For the Humanities, citation metrics are pretty useless, so there’s peer review of each program’s (self-nominated) best 30%. This suggests what seems to me a general truth: the best way to gauge the contribution of a paper in Philosophy is to read it — or to ask someone with expertise in the relevant field to read it.

The only reason to develop a journal ranking is if academics regard it as *inevitable* that someone’s going to try to quantify excellence by some means or other and we think this should be done by reference to a system that is less flawed than one that people outside the discipline could formulate. For my own part, I’ve been trying to avoid falling prey to the TINA gambit (“There Is No Alternative”) — at least as much as one can and still do one’s job as head of department. Surely there must be *some* end to the neo-liberal mania for measuring and ranking! In the case of Australia’s own ERA research measurement exercise, I’m frankly a little sickened by the amount of money wasted to tell everyone that … surprise, surprise, surprise … ANU, Monash, Melbourne and Sydney have pretty damned good Philosophy departments. I think anybody in the country who works in Philosophy could have told the government that for a lot less money. Money that could have been spent, say, hiring more philosophers. But that’s not how these things work, is it? Research rankings are intended to make sure that no money is wasted by giving people who won’t do good research time to do research. This particular kind of waste is uniquely terrible. So terrible that in neo-liberal rationality it is rational to waste vastly larger sums of money to make sure it never happens.

Thom Brooks
8 years ago

Many thanks to everyone who has commented thus far. I’d like to offer some initial remarks on what has been said.

I’m unsurprised to find disagreement insofar as the rating or ranking of academic journals in many disciplines is controversial for many of the reasons highlighted above. While some metrics like citation scores or downloads can be useful, they are not always clear indications of a piece’s excellence. Some fields and subfields may cite work more than others, not all citations are ‘positive’ (I’ve cited Popper on Hegel many times, but never to applaud Popper’s interpretations, as one small example) and so on. So I readily accept many of the hazards identified in drawing up such metrics – and that’s before we get into the issue that such metrics don’t reliably exist (at least in the public domain) for many of the journals that should be included for consideration.

I’m also favourable to a rating instead of a ranking – so in agreement with Kate Norlock among others. My current ranking is A*, A, B, C and so on. My current thinking – provisionally at this point – is to draw up a list of all philosophy journals with a drop-down box next to each so that journals can be scored individually, with submission of scores requiring use of an email address as an (imperfect) means of safeguarding against individuals submitting multiple votes. The list could be up for a while to attract comments, to ensure all journals are listed, and then circulated through blogs like Daily Nous, etc., with scores tallied.
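As a rough illustration of how such scores might be tallied: the data below, the 1-10 scale, and the ‘most recent score per email wins’ rule are all assumptions made for the sketch, not part of the proposal itself.

```python
# Hypothetical sketch of tallying scores from such a poll; the submissions and the
# de-duplication rule (keep each email's most recent score per journal) are invented.
from collections import defaultdict
from statistics import mean

# Each submission: (email, journal, score); later submissions are assumed to
# override earlier ones from the same email for the same journal.
submissions = [
    ("a@example.org", "Analysis", 8),
    ("a@example.org", "Analysis", 9),   # same voter updates their score
    ("b@example.org", "Analysis", 7),
    ("b@example.org", "Erkenntnis", 6),
]

latest = {}  # (email, journal) -> score; a crude safeguard against multiple votes
for email, journal, score in submissions:
    latest[(email, journal)] = score

tallies = defaultdict(list)
for (email, journal), score in latest.items():
    tallies[journal].append(score)

for journal, scores in sorted(tallies.items()):
    print(f"{journal}: mean {mean(scores):.2f} from {len(scores)} voter(s)")
```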

There is a good point that the ranking of a journal could differ depending on whether it was considered ‘in general’ versus within its subfield. So it might be possible for a journal to be seen as (hypothetically) ‘B’ in general and yet ‘A’ for its subfield. Or so I understand the point. I can’t think of a likely case in which, of two specialist journals, the one scored higher ‘in general’ would perform differently if the two were considered in their subfield alone (e.g., specialist journal 1 ranks A ‘in general’ but ‘B’ for its specialism while specialist journal 2 ranks B ‘in general’ but ‘A’ for its specialism). It seems possible subfield rankings might be generated from the general list. But it also seems possible that the real problem for specialist journals is that colleagues not working in a subfield are unable to assess those journals on a roughly equal basis to journals in areas of greater familiarity. So this leads me to think there should be multiple lists: a general list of all philosophy journals, followed by subfield lists to be scored by individuals who claim familiarity with journals in philosophy of science, legal philosophy, etc. I’d be tempted here to draw up subfield lists not unlike the groupings found in the Philosophical Gourmet Report.

I still don’t think I disagree with David Chalmers on his point – if I understand it right – about standing. I can certainly see the issue he raises that the reliability of findings might be undermined if the person running the survey is too closely associated with any particular journal or specialism. I think there can be a value in assessing philosophy journals, but I also don’t see any particular value in running it myself – I have nothing at stake. So it would be interesting to see what others think about someone else, or a blog like Daily Nous, conducting this survey to gain an updated snapshot of what philosophers today think about the relative ratings of different journals. I’m very happy not to do it – as that would save me a lot of time. Plus, ratings/rankings are controversial enough – I wouldn’t want to add to that.

A few last points. Some comments were made about feminist journals. The list on The Brooks Blog is of the top-performing journals – to make the list is to do well. People taking the survey I used could add whatever journals they wanted for inclusion, and many were added, including the journals mentioned. But I also note my agreement (as a closet wannabe classicist) that yes, OSAP and Phronesis are excellent journals in my view, too. The ratings are relative: again, to make the list was to do very well, and a score of ‘C’ is not meant to indicate ‘average’ as it might for our students. I banded journals by whether their scores fell within numerical ranges. This case may well illustrate the usefulness of specialist ratings, in addition to a general list.

I take Michael Cholbi’s points, too. I agree the best way to judge any individual work is to read it. But a rating system can still be valuable.

Now my last point. I’ve long suspected that the ‘prestige’ (for want of a better word) of a journal and its subscription base are linked in ways little explored. What makes publishing in journal x great news may not be merely that journal x is deemed ‘good’ (whatever that means), but rather that its articles are more widely distributed. Older journals can often benefit from more substantial subscription bases, and there are several welcome exceptions to the rule that have risen sharply over a short period. But I wonder if distribution should matter. I don’t think it should, but I think for many it does.

Keep the great comments coming!

Dan Dennis
8 years ago

There are two main uses of journal rankings.

1) The reader of the rankings seeks to publish in a journal that will enhance his prestige, increase his chances of getting a job, tenure or promotion.
2) The reader is wondering which journals to browse and read, given that he has limited time. He wants to know which is most likely to have really interesting original important work.

One might think that the two things come apart. Thus someone might think, ‘I know journal Z is very prestigious but I generally prefer the articles in journal Y’. One might hope that in time 1 might come to more closely resemble 2. Would it be too much extra work to construct two rankings?

As long as Thom has a clear, defensible and publicised rationale for the methodology he employs in constructing the rankings I would be happy for him to be in charge, if he is kind enough to take on the extra work.

Thom Brooks
8 years ago

Many thanks for these useful comments, Dan. I’m not sensing much appetite for new ratings/rankings of philosophy journals. While I accept many of the limitations (e.g., that they should be viewed in context, there is no substitute for reading the piece, etc.), the downside is that existing rankings will remain all that is available – they are still used and increasingly out of date.

Perhaps there should be some kind of Phil Gourmet or similar type exercise? Not a pressing issue, but if done well could be useful.

Carolyn
Reply to  Thom Brooks
8 years ago

I have found your journal ranking useful, Thom, and would welcome an update. My preference would be for a side by side list of rankings, with the criteria carefully explained, rather than an amalgamation. Some options for these rankings include: an opinion poll ranking, an H-index/citation ranking, a selectivity ranking, a diversity ranking (as a sign of fair journal practices and broad interest in the topic), an interdisciplinary ranking (perhaps using citations in other fields as the metric), etc. Having a standalone page for this would be a very helpful resource for the profession. But perhaps it is a large enough task to warrant hiring a graduate assistant and applying for a grant (perhaps from the APA)? The placement project has certainly benefited from taking those steps (which I hope will be apparent in the next month or so, when we are able to make our results public).

Assistant Professor
8 years ago

My college-level tenure committee is very hungry for journal rankings. In the absence of rankings, they want citation counts. As has been widely noted, philosophers tend not to cite each other very much, so it’s no surprise my citation count is very low.
I think that a journal ranking (or perhaps better, a journal rating), done carefully and conscientiously, could be very helpful to junior members of the profession. It would be great to collect data not only on perceived quality and prestige, but also on editorial practices (e.g. average time to review). I’d be grateful to anyone who undertakes this project.

Professor Plum
8 years ago

I get that administrators want journal rankings so that they have something objective to “calculate” (God knows they live in fear of actually thinking or judging), but the proper response to this mindset is to stand united against it rather than capitulate to it. Other fields in the humanities resist this sort of administrative-think creep, but philosophers can’t wait to dive right in, ignoring the fact that they risk reifying the status quo and throwing their fellow philosophers (who may work in less popular subfields) under the bus.

Yet another one of the 99+ problems with philosophy.

UK postdoc
8 years ago

The only point of journal rankings is to brag and for hiring/tenure/whatever committees to judge people. Do they really have any bearing whatsoever on actual academics and the work they do? In other words, who checks out a journal’s ranking before reading it? For that matter, who reads journals? I just search on philpapers or google, and download relevant-looking material. I couldn’t care less where it’s published, as long as it’s a peer-reviewed journal in English.

Really, in 2015 journals are obsolete. No one reads journals. They read articles. What we care about is the quality of the article. But in philosophy there is so much disagreement about what good philosophy is, I have no idea how we are supposed to judge quality. I’ve read many an article in top journals that I would have rejected if asked to review. And we all know from the peer review process that it’s basically a game of random luck. I split referees all the time, and referees can’t even be consistent over time.

Journal rankings and article quality assessments by REF committees, tenure boards, whatever, have no purpose when it comes to actual research. They just serve an artificial system put in place to distribute resources. Sure, we have to distribute resources some way. But no one for a minute should believe the current methods have anything to do with benefiting actual researchers, at least in philosophy. The sciences may very well be different, but I can’t comment on them.

Nick Byrd
Reply to  UK postdoc
8 years ago

Hi UK Postdoc. I’d be curious to know if you think this applies (where applicable) to book publishers as well. I wish you well.

UK postdoc
Reply to  Nick Byrd
8 years ago

It might apply less so to book publishers, because books are such a big investment of one’s time. Still, when doing my research if I discover a book on a relevant topic that looks interesting, it’s not as if I really care so much whether it’s Oxford or Macmillan.

Really, who reads publishers? People read books. As a researcher, who really cares? I don’t.

Hell, even in the popular press no one cares about where your book is published. The author might care, because some presses are better at advertising or whatever. But readers don’t care, or at least not the usual reader. I certainly don’t.

Anono
Reply to  UK postdoc
8 years ago

There are lots of articles to read on a subject. I can’t read everything. I use the journal that the article is published in as one important signal about the quality of the paper. So, it definitely affects my likelihood of reading a piece (along with other factors, such as seeing it cited or having colleagues suggest it). I think it quite mistaken to think that the journal an article is published in has no bearing on how widely it will be read.

Caligula's Goat
4 years ago

Reviving this post only to say that Google Scholar also ranks philosophy journals, and its rankings don’t track any of the other rankings normally cited. I’m not saying this is the best ranking (that, I think, is not the right way to think about rankings), but I am saying that it adds yet another interesting way of helping us see which journals tend to congregate at the top of most lists while others seem to matter more to the snowball-sampled sorts of lists.

https://scholar.google.com/citations?view_op=top_venues&hl=en&vq=hum_philosophy