Reputational Ranking of Philosophy PhD Programs Updated


The Philosophical Gourmet Report (PGR), a ranking of philosophy PhD programs in the “English-speaking world,” has been updated. The 2021-22 rankings are based on a survey of philosophy faculty that asks each of them to evaluate the faculties of 94 philosophy departments from the United States, Canada, the United Kingdom, Australia, New Zealand, and Singapore. The results come from 220 surveys that were at least partially completed.

According to the survey, the following are the top 50 philosophy faculties in the English-speaking world:

A ranking of PhD programs in philosophy in the English-speaking world from the Philosophical Gourmet Report

The rest of the overall rankings, including country-specific lists, are here. The report also publishes rankings of departments within various subfields of philosophy.

The PGR is edited by the team of Christopher Pynes (Western Illinois University) and Berit Brogaard (University of Miami) [see update 1, below]. It was created by Brian Leiter (University of Chicago), who stepped down from the PGR’s helm following controversy regarding his treatment of some philosophers.

In the past, concerns have been raised about the methodology of the PGR (see some of the links in this post). It is unclear whether the current report has been improved in any of these regards. (Comments on this from those in the know, including the current PGR editors, are welcome.)

The PGR is one of a few resources available to prospective graduate students in philosophy as they choose which programs to apply to and attend. Other resources include Academic Philosophy Data and Analysis (APDA), the Pluralist’s Guide to Philosophy, and the QS Rankings.

UPDATE 1 (1/8/22): I am informed by Berit Brogaard that she is no longer associated with the Philosophical Gourmet Report. Further details about this will be forthcoming.

41 Comments
Filippo Contesi
2 years ago

The phrase ‘the English-speaking world’ used by the PGR has always seemed less-than-felicitous to me. First, it gives one the impression that it refers to an entire “world” of its own, separate from other “worlds”. Moreover, it is not obviously accurate: (1) plenty of people speak other languages in majority native Anglophone countries; (2) plenty of people speak English in majority non-native Anglophone countries, and indeed in these latter countries there are several Philosophy PhD programmes that work with English as their default language, but are not considered for ranking in the PGR.

Defund the language police
Reply to  Filippo Contesi
2 years ago

‘The English-speaking world’ is a conventional phrase that refers to a specific set of countries. It in no way implies that only English is spoken in those countries nor that no one speaks English in other countries.

Not really about language policing
Reply to  Defund the language police
2 years ago

But this misses Filippo’s point that there are many good English-speaking graduate programs that could’ve been ranked but aren’t, because they’re in non-Anglophone countries. E.g., LMU in Munich presumably would be ranked in decision and game theory, etc. Wouldn’t it be useful for prospective graduate students to know how the evaluators think about such programs?

Defund the language police

That may be legitimate criticism of the PGR, but it is irrelevant to the usage of ‘the English-speaking world’ to describe the countries in which programs are currently ranked in the PGR.

Yael
Reply to  Defund the language police
2 years ago

It seems to me that the question of whether it is relevant or not very much depends on the argumentative work that the phrasing “conventional phrase” is being asked to do. It could be interpreted as descriptive, but could just as well demonstrate Filippo’s point.

Matt L

“E.g., LMU in Munich presumably would be ranked in decision and game theory, etc.”

For what it’s worth, LMU is listed as a program “not evaluated but recommended for consideration by the advisory board” in general philosophy of science, philosophy of physics, and game theory/rational choice theory (and maybe others; I didn’t check every category). My understanding is that there is a desire to keep the task of ranking schools to something close to manageable, and so there are some choices to be made. Where they should be made can be debated, of course. But at least in this case, the fact that LMU is worth looking into is noted, for just the reasons you suggest.

T20 or bust
2 years ago

The PGR has always seemed to be a bit of inside baseball. As a low income, first gen student in the field, it’s disheartening to see how the exclusivity decried by practitioners in the field is perpetuated by vacuous discourse on reputation, “quality of programs,” etc.

Also First-Gen
Reply to  T20 or bust
2 years ago

As another data point: I am also a first gen student. I am also an international student from a non-English speaking country. The PGR has its flaws, some of which have been discussed here and in other places, but I value the PGR. I think every field — and especially every academic field — is ordered by a prestige hierarchy (which may or may not be directly correlated with the quality of the program, placements, etc.), and insofar as the PGR does what it says on the tin, it is a very useful source of information; information which I (or the professors at my undergraduate institution) could never have been privy to. A public syndicate is much better than a private syndicate. Would I prefer a flat academic landscape with no syndicates? (I think) Yes. Given that — at least in the present — philosophy academia has syndicates, does the PGR serve as a helpful guide? Very much so (or at least very much so to at least one first-gen student).

T20 or bust
Reply to  Also First-Gen
2 years ago

Your point is well taken!

Andy Stroble
2 years ago

I am so glad that the most recent rankings from a law professor at the University of Chicago have finally come out! I will now know that my alma mater sucks, and that comparative and Asian philosophy is junk. But, I expected nothing less. Why does the philosophical community even pay any attention to this claptrap, at all? Serious question.

Thomas
Reply to  Andy Stroble
2 years ago

…because it’s extremely valuable to many grad school applicants…which is its stated purpose…which makes one wonder why you need to ask…serious wonderment.

David Hyder
2 years ago

While I will only comment here on one of my own subfields, Kant, quite a bit can be said.

1. Of the top-ranked programs, only one, Brown, actually has a leading Kant-scholar (Guyer) worth studying with. But he is close to retirement.
2. The other leading Kant scholar in US/UK has now returned to Paris.
3. Princeton has no Kant coverage. None. Princeton has been in the news recently only for linking Kant to critical race theory.
4. This is traceable to the evaluators. Most are associated with the “top ranked” programs themselves.

In other words, the rankings are a grim exercise in nepotism. Not all of the referees are guilty of this, no doubt. On the other hand, as the old saying goes, “innocent, they have become guilty” (“unschuldig sind sie schuldig geworden”).

Matt L
Reply to  David Hyder
2 years ago

Princeton has no Kant coverage. None.

Desmond Hogan https://scholar.princeton.edu/deshogan/ and Andrew Chignell https://chignell.net/ count as “No Kant coverage. None”? I mean, you can have differences of opinion as to the value of different people’s work, but if you think that’s “no Kant coverage”, well, then, I suppose we might doubt your opinion. Really, it’s necessary to do at least _a bit_ of work before criticizing things.

Alastair Norcross
Reply to  David Hyder
2 years ago

“This is traceable to the evaluators. Most are associated with the “top ranked” programs themselves.
In other words, the rankings are a grim exercise in nepotism.”
Evaluators rank neither the program where they received their doctorate nor the program where they are employed. If you think different people should be doing the evaluating, you could suggest names to the advisory board (who are listed on the website). It is a lot easier to complain, though, than to do something about what you perceive to be a problem.

Lewis Powell
Reply to  Alastair Norcross
2 years ago

Not everyone listed on the advisory board is necessarily aware that this is the case, in my experience. For example, I once emailed an advisory board member to ask whether they knew why the PGR website didn’t host its own updates, since setting up a blog is fairly trivial, and instead links to a former lead editor’s personal blog. They informed me that they didn’t even realize they were still a board member (somewhat unsurprising, as the site doesn’t have up-to-date information about the current editorial team, and most of the information about the rankings was first published on that same blog before appearing on the official site). It’s almost as if that site is not a reliable and up-to-date source of information about the rankings, compared to the blog of the former lead editor.

Conrad
Reply to  David Hyder
2 years ago

Hi David! Just a friendly reminder that comments can be edited, even after they’ve been posted.

Alastair Norcross
Reply to  Conrad
2 years ago

That must be the friendliest call for a retraction that I have ever seen.

Conrad
Reply to  Alastair Norcross
2 years ago

It was written by a fellow brit, Alastair, so factor in passive aggressive politeness as appropriate.

Michel
2 years ago

I don’t really understand why the subfields with too few evaluators to be ranked have too few evaluators to be ranked.

I mean, I know it’s because they’re supposed to be drawn from the pool of people invited to evaluate all programs overall (which ultimately doesn’t have many people working primarily or significantly in those subfields), but I don’t really understand why this should be the case.

Why can’t the PGR just invite a large pool of specialists working in each subfield to evaluate those subfields? It’s not very hard to find qualified feminist philosophers or philosophers of art, for example.

(FWIW, this is a legitimate question rather than a complaint. I just don’t understand why it’s better this way, methodologically.)

Alastair Norcross
Reply to  Michel
2 years ago

My understanding, and I could be wrong, is that the PGR people do invite a large enough pool of specialists for all these subfields to be ranked, if enough people accepted the invitation. Plenty of people who are asked to be evaluators decline. It’s also possible that people in certain subfields are more likely to be hostile to the PGR itself (not an uncommon attitude, as this discussion demonstrates), and so less likely to accept the invitation to be reviewers. Perhaps you are suggesting that the PGR people should specifically invite some people just to be speciality evaluators, and not to do general evaluations? If so, I suspect that would come off as insulting (“we trust you to evaluate your peculiar subfield, but not to evaluate anyone else”). Evaluators are told that they don’t have to provide rankings for departments about which they aren’t confident in their judgment, so people could, I think, provide speciality rankings without overall rankings.

Michel
Reply to  Alastair Norcross
2 years ago

Thanks, that makes more sense now.

Joe
Reply to  Alastair Norcross
2 years ago

In my experience, the specialty rankings are the more useful feature of the PGR, and they would be even more useful if more people were involved in providing the rankings. I am thinking, for example, about the history rankings, which I can somewhat judge: around 10 people in some seems very little. For example, I don’t really find the Ancient Philosophy ranked programs wrong (I think they are all supposed to be there), but the actual ranking of them seems to me open to possible shifts if more people were involved, especially as evaluators cannot judge their own departments (and Ancient is not a huge field). Also, why not let people judge their own PhD department if sufficient time has passed since the PhD?

Alastair Norcross
Reply to  Joe
2 years ago

I agree about the speciality rankings being more useful than the general ranking. I still think the rule about not ranking your own Ph.D institution is a good one, but I can see the other side of that, especially in cases in which, as you say, sufficient time has passed (which it often will have).

Joe
Reply to  Alastair Norcross
2 years ago

I think it is a pity that people decline to rank. The more people take part, the more useful the PGR is as a source of info for potential grad students. Ideally, I think about three times as many people as do now should do it.

Julian
2 years ago

Finally! Now I can find out who’s the wisest! Looks like Rutgers loves wisdom more than Princeton!

How embarrassing that adults who call themselves “philosophers” are engaged in this kind of activity. Thank god it looks like there’s a wave of resignations and this thing looks to be on its last legs after “philosophers” do an ounce of circumspection.

Matt L
Reply to  Julian
2 years ago

Finally! Now I can find out who’s the wisest! Looks like Rutgers loves wisdom more than Princeton!

It would indeed be odd, and maybe embarrassing, to try to judge who is the “wisest” or who “loves wisdom more”, by the means that PGR uses, but because the PGR doesn’t purport or attempt to do these things, this doesn’t seem like a very strong objection to it.

Julian
Reply to  Matt L
2 years ago

Appreciate the reply, Matt. I think you raise an important question, which was the reason for my post. What exactly is it that the PGR “purports” to do?

How would you “rank” philosophy departments? It’s a bit like asserting “well, obviously Kant is ‘better’ than Plato”. Such a proposition is silly and nonsensical for the same reason the PGR is silly and nonsensical. It means nothing and simply shows the preferences and biases of a small set of people (in this case, it looks like lots of fans of Anglo-American M&E of the past 50 years).

That said, if someone were to come to me and say “where’s the best place to study philosophy,” I couldn’t in good conscience say “check the PGR.” I’d have to say “well, it depends: what are you interested in? What is your financial and personal situation? You should read and study wherever you can, and here’s something I think you might like,” etc.

This sort of status seeking and commodification of philosophy into luxury brands is probably an artifact of larger social ills; the fact that it was started by someone who calls himself a Marxist is hilarious. Fortunately, as I said, it looks to be on its way out. I don’t hear about it nearly as much as I used to. Good riddance.

Matt L
Reply to  Julian
2 years ago

…an important question which was the reason for my post. What exactly is it that the PGR “purports” to do?

A good place to get an idea about this is to look at the section of the report called “What the Rankings Mean”, which can be found here: https://www.philosophicalgourmet.com/what-the-rankings-mean/ I don’t think this supports the reading you give it very well.

Moti Gorin
Reply to  Julian
2 years ago

In my experience, the PGR allows all students to discover things about departments and programs that would otherwise be known only to students from more elite programs or with well-connected and professionally active mentors. In this sense it plays an equalizing role, despite lots of claims to the contrary. And it is this sense that is most important, since its central role is guiding prospective grad students.

“What are you into” and “have you looked at x” may be great places to start but that method will hardly overcome whatever limitations attend the PGR.

Julian
Reply to  Moti Gorin
2 years ago

Sorry Moti, but let’s be honest about this, PGR is a way for elite departments to publicize their status. If you as a grad student aren’t in in that club of elite programs or the “well connected” the PGR isn’t going to remedy that for you.

This is, at best a kind of conspicuous consumption where departments get to flaunt their status and occasionally do a little interdepartmental gossip.

I just can’t abide the idea that this is some kind of “public service”. I’m sure students knew Princeton was a good place to study before the PGR.

The PGR actually goes a long way toward reinforcing status, and the justifications sound a lot like gentrification apologists telling you how nice they’re making the neighbourhood by keeping out all those poors.

Lowlygrad
Reply to  Julian
2 years ago

I don’t think the average undergraduate would expect Rutgers to be a top school…

Moti Gorin
Reply to  Julian
2 years ago

Ok, fine, I’ll be honest.

In my experience, the PGR allows all students to discover things about departments and programs that would otherwise be known only to students from more elite programs or with well-connected and professionally active mentors. In this sense it plays an equalizing role, despite lots of claims to the contrary. And it is this sense that is most important, since its central role is guiding prospective grad students.

“What are you into” and “have you looked at x” may be great places to start but that method will hardly overcome whatever limitations attend the PGR.

Chris Bertram
2 years ago

After someone mentioned on FB the gender balance on the PGR political philosophy panel (24:1), I had a look at the full range of panels. I’ve not totted things up, and there are one or two ambiguous names, but the pattern is pretty general – women not just outnumbered but overwhelmingly outnumbered – and there are several manels. The only panel where there is a majority of women evaluators is “feminist philosophy” where there are only three evaluators, one of whom is Les Green. Maybe the advisory board thinks it doesn’t matter. But whether or not people think it matters, the pattern is very striking.

Benj
Reply to  Chris Bertram
2 years ago

There are no longer designated ‘panels’ for subdisciplinary evaluation. The evaluation software permits any invited evaluator to assess any program in any subdiscipline. If there are ‘manels’ for a subdiscipline, this is because among the women invited to evaluate, none chose to evaluate that subdiscipline.

Chris Bertram
Reply to  Benj
2 years ago

I’m not sure what to make of that reply. There were very few women in the great pool of evaluators to start with, and if self-selection meant that some subdisciplinary areas were entirely evaluated by men (and nearly all overwhelmingly were), that strikes me as a major design fault in the entire project.

Mary Leng
Reply to  Chris Bertram
2 years ago

I was invited to evaluate and have done so in the past. But it’s a very time consuming job and this year I’m still dealing with the fallout of lockdown home schooling and didn’t think it was the best use of my time. Just one case of course, but I wonder how many other women invited to evaluate found themselves making a similar call this year.

Benj
Reply to  Chris Bertram
2 years ago

’What to make of’ my reply? Your rumination on whether the gender balance of ‘panels’ ‘matters’ to the advisory board suggested that you thought the advisory board has direct control over the makeup of the ‘panels’: as my reply observes, this is not true.
You now change the topic to the gender balance of the ‘great pool of evaluators’: the advisory board of course has direct control over the greater pool of the *invited* evaluators (which, to my understanding, is in a typical year around 1/3 to 1/2 women), but of course lacks direct control over who accepts the invitation.
You contend that the gender imbalance of some ‘panels’ counts as a ‘major design flaw in the entire project’. But if broader trends of opinion have led to secular decline in the response rate among women, that scarcely impugns the ‘design’ of the ‘project’. More importantly, there could only be a ‘flaw’ here if women and men rank differently, and there is no reason to think this.

Chris Bertram
Reply to  Benj
2 years ago

Well, if you think that it isn’t a problem that these massive gender imbalances emerge because they only emerged thanks to individual choice ….

Benj
Reply to  Chris Bertram
2 years ago

‘If p, …’, you say. ‘Then’ what?

T20 or bust
2 years ago

The PGR is either a useful tool for a broken machine or a useless tool for a broken machine. Philosophy will waste away (rather quickly) if it continues to be conducted as a hyper-specialized practice within the walls of bourgeois institutions.

A Grad Student
2 years ago

I have a couple of questions that I am hoping someone can help me with. On the “Methods and Criteria” page of the PGR’s website, it says “note that there are some 110 PhD-granting programs in the U.S. alone, but it would be unduly burdensome for evaluators to ask them to evaluate all these programs each year. The top programs in each region were selected for evaluation…” I am wondering 1) how the top programs in each region (those worthy of evaluation) are determined, 2) how the regions are defined, 3) how many programs are in each region, and 4) how many programs are selected from each region.

It seems like this system could lead to a situation in which higher prestige programs from regions with many high prestige programs are left off (the way that teams with better records are left off of playoff rosters because they are in better conferences). Likewise, it seems like we would want to know what considerations go into deciding whether a program is among the top of its region or not. The point of the survey is to figure out what the top programs are, but those deciding which programs to evaluate must already know what the top programs are because they make sure to only evaluate them.

If the criterion is just that they were ranked highly on previous versions of the report, then it runs the risk of merely perpetuating the prestige of programs that were highly ranked in earlier versions and just shuffling the same programs around, unless a new program happens to win the “test the waters” lottery at the right time. This would require the timing of their being evaluated to coincide with their department being strong, whereas the ‘top programs’ need only be strong (and get the presumption of evaluation). These top schools will also get the advantage of always appearing in the report which, if nothing else, might guarantee their evaluation in the next report.

It also seems like it would be good to know how the regions are defined, along with how many programs are in each region and how many ‘top program’ slots are allotted to each region. If the regions have different numbers of programs, then programs in smaller regions (those with fewer programs) stand a better chance of being evaluated, all things being equal (which seems irrelevant to what the survey is supposed to be getting at). Maybe they select something like the top 10%? It would be good to know. Likewise, without transparency about this decision-making process, how do we know that those deciding which programs to evaluate aren’t making exceptions to whatever rule they have in place and giving certain regions more evaluation slots so as to include programs that they already know (prior to evaluation) are top programs?

I am not sure what the justification is for the region/top program system or how it works, but I would like to know. Thanks for any help anyone can give in answering these questions!