The PGR’s Technical Problems (updated)

Several recent posts here have discussed questions about the leadership of the Philosophical Gourmet Report (PGR), the best-known ranking of philosophy graduate programs, including some discussion of what an alternative to the PGR might look like. In the meantime, discussions continue between the creator and current editor of the PGR, Brian Leiter (Chicago), and representatives of a majority of the PGR’s board members, who have signed a letter asking Leiter to step down.

Alongside all of this has been discussion about what the PGR purports to measure, and whether it measures it well. Though it has been mentioned in a few threads, it is worth drawing readers’ attention to “Our Naked Emperor: The Philosophical Gourmet Report,” by Zachary Ernst, a short and well-written piece explaining some concerns about the soundness of the PGR. Ernst writes:

It is my contention that the Report is not merely unsound as a ranking system and detrimental to the profession; it is obviously unsound as a ranking system and obviously detrimental to the profession. Indeed, its flaws are so obvious that it would seem to be unnecessary to discuss them. However, the Report is also an institution unto itself. It is so deeply entrenched into the profession of academic philosophy that otherwise highly intelligent and critical professionals seem to have developed a blind spot to it. Indeed, the Report’s flaws are so obvious and so severe that I find it embarrassing to be influenced by it, even unwillingly.

In addition, Gregory Wheeler, at Choice & Inference, has a series of posts on the measurement issues in the PGR, particularly concerning sampling and representativeness, leading him to conclude that “while the results of the survey might be accidentally true, in a Gettier sort of way, there is no reason to believe they are true,” and to recommend abolishing the PGR.
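
For readers who want a concrete feel for the representativeness worry, here is a minimal, purely illustrative sketch in Python. It is not Wheeler’s analysis, and every number in it is invented; it simply shows that when respondents are recruited through one network rather than drawn at random, the resulting average can land far from the population average no matter how many responses come in.

```python
# Purely illustrative: a biased panel vs. a simple random sample.
import random

random.seed(0)

# Hypothetical population of 1,000 evaluators rating one department on
# a 1-5 scale. Suppose 30% belong to a school that rates it around 4.5
# and the remaining 70% rate it around 2.5 (all figures invented).
population = ([random.gauss(4.5, 0.5) for _ in range(300)] +
              [random.gauss(2.5, 0.5) for _ in range(700)])
true_mean = sum(population) / len(population)

# Simple random sample: every evaluator is equally likely to be asked.
random_sample = random.sample(population, 100)

# Biased panel: recruited through the first school's network, so that
# school supplies 80 of the 100 respondents.
biased_sample = (random.sample(population[:300], 80) +
                 random.sample(population[300:], 20))

print(f"population mean:    {true_mean:.2f}")
print(f"random-sample mean: {sum(random_sample) / 100:.2f}")  # close to the truth
print(f"biased-panel mean:  {sum(biased_sample) / 100:.2f}")  # systematically high
```

The biased panel’s error does not shrink as the panel grows, which is the sense in which a result from a non-random sample might be true only “accidentally.”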

All of this makes for an excellent case study for those working on issues at the intersection of philosophy of social science and ethics.

These technical complaints, along with the other previously mentioned concerns, have led some to call for “No Rankings, Not Now, Not Ever.” That link will take you to John Protevi’s blog, where he links to some other critiques and asks people to join him in expressing opposition to rankings.

UPDATE: Interested readers should check out the Philosophers Anon and Philosophy Smoker posts on these issues. The latter has several helpful links, too.

Comments
Fritz J. McDonald
9 years ago

Just as a comment: it seems like the conversation on a lot of the blogs has moved on to discussions of whether we should have rankings at all, or whether we should have different sorts of rankings. These are relevant issues. I like having as much relevant information public as possible, and the Gourmet Report provides some relevant information. (It tells us what the evaluators think about the ranked departments.) It would be nice, given the events that initiated many of these discussions, to have discussions about civility and professionalism as well. To put my own view briefly: we should all be nicer to each other. (Some of the posters on Feminist Philosophers have addressed this issue.)

j/k
9 years ago

Golly, Mr. Ernst is fond of extreme and combative language. I certainly don’t think the PGR is obviously unsound and detrimental to the profession. But maybe I’m so stupid that I can’t see the obvious? In any case, I find the following things puzzling, given Ernst’s rhetoric:

1. Ernst begins by negatively comparing the PGR to the US News rankings when…the largest component of the US News rankings is a reputational survey! (Well, it’s tied for largest.)
2. Ernst claims that the PGR doesn’t measure anything meaningful, and whatever it measures it doesn’t measure well. But he also claims that the PGR measures the strengths and weaknesses of his own department accurately. Of course, that could just be a coincidence, but…
3. The “objective data” that Ernst thinks decisions should be based on is not completely objective, or not very useful: merely knowing that University X has five people working on metaphysics, and that those five people have published five papers each, really doesn’t give one much information about whether University X is a good place to study metaphysics. Were those publications any good? An undergrad would be hard pressed to determine that. She might use a reputational survey of journals, but that will only work if reputational surveys work. She might look at the record of metaphysics students at University X, but that will be very misleading if the five metaphysicians have been hired in the last five years. What our undergrad wants to know is whether University X is a good place for her to go, now, to study metaphysics. Knowing University X’s specialty ranking in metaphysics is very useful in that regard. Note that even if some metaphysician at University X has gamed the specialty rankings by getting her metaphysician friends to vote up University X, that means that someone at University X has a lot of pull in the metaphysics community…the exact kind of person our student might want on her dissertation committee. Note that I’m not endorsing strategic voting; I’m just saying that anyone who could radically manipulate their university’s specialty rank by pulling strings with other members of that specialty can probably get their students jobs by pulling those same strings.
4. Of course, all that objective data could be sensibly combined with the PGR to get a better overall picture.
5. If the PGR isn’t measuring what it is supposed to measure (if it is very unreliable), it should be relatively easy to point to examples where the PGR makes egregious errors. I don’t expect uncontroversial examples. But if the PGR is that bad, there should be a good number of plausible examples of where it has gone wrong. I’m not saying there aren’t any; I’m just saying that anyone who thinks the PGR is useless should produce some.
6. It was never made clear to me what the problem was with the idea that Leiter uses snowball sampling because he wants philosophers who are “in the know” to be the ones filling out the surveys. Of course, if Leiter is totally wrong about who is “in the know,” then this method will be a disaster. But if the PGR is a disaster, then, as I said above, it should be relatively easy to point to places where it makes egregious errors in the overall or specialty rankings. (A toy sketch of why the choice of seeds matters so much follows below.)
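
To see why the seeds carry so much weight, here is a toy sketch in Python. The two-community network and all its numbers are invented (this is not a model of the actual PGR panel); it just shows that referral chains tend to stay inside whatever circle the seeds belong to.

```python
# Illustrative sketch only: snowball sampling over an invented network.
# Philosophers 0-99 form community A, 100-199 community B; ties are
# mostly within a community, with a few bridges across.
import random

random.seed(1)

nodes = list(range(200))
adj = {n: set() for n in nodes}
for n in nodes:
    same = [m for m in nodes if m != n and (m < 100) == (n < 100)]
    other = [m for m in nodes if (m < 100) != (n < 100)]
    # each person knows ~10 colleagues inside, ~1 outside their community
    for m in random.sample(same, 10) + random.sample(other, 1):
        adj[n].add(m)
        adj[m].add(n)

# Snowball sample: seed with three people from community A, then keep
# adding contacts of people already sampled until we have 60 raters.
sampled = set(random.sample(range(100), 3))
frontier = list(sampled)
while frontier and len(sampled) < 60:
    person = frontier.pop(0)
    for contact in adj[person]:
        if contact not in sampled and len(sampled) < 60:
            sampled.add(contact)
            frontier.append(contact)

share_a = sum(1 for n in sampled if n < 100) / len(sampled)
print(f"share of sample from community A: {share_a:.0%}")  # typically well above 50%
```

If the seeds really are the people “in the know,” this concentration is a feature; if they aren’t, the survey never finds out.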

anon grad student
9 years ago

Those who call for the abolition of all rankings seem to be blind to the reality that there always have been, and always will be, rankings of some sort. If they’re not public, they take the form of gossip, received opinion, etc. The ‘old boys club’ some see in the PGR certainly existed before it. Isn’t some kind of open system preferable? Whether that includes an overall ranking or just specialty rankings, and whether it is a PGR-style list or a customizable database, is up for debate. But it seems very clear to me that some kind of public ‘ranking’ (ideally based on verifiable information that is as objective as possible) is preferable to an ‘under-the-table’ system (to which most students wouldn’t have any access at all).

On a more controversial note: it seems fair to say that some philosophers are better at what they do in their field than others. And while opinions here will differ, there should be some philosophers who can be acknowledged by most to be experts in their fields. Shouldn’t one defer at least to some extent to their opinions about who does good work in their field? So in this sense the very idea of ‘reputational’ surveys doesn’t strike me as obviously unacceptable. The simple fact that there will be difficulties in implementing this in the most objective and fair way doesn’t by itself count against it, I would think.

Meh
9 years ago

“anyone who could radically manipulate their university’s specialty rank by pulling strings with other members of that speciality can probably get their students jobs by pulling those same strings” – indeed. This is why the PGR succeeds in doing what it purports to do.

wheeler
9 years ago

Here is an example to consider: how to rank vacation destinations. It is ridiculous, but why is it ridiculous? Because you are likely to have a better time on vacation in Chad than in France? Because, all things considered, you are unlikely to have a first-rate vacation in any of the top 5 countries? Of course not. It is ridiculous because it uses one sort of criterion (hotel capacity and other infrastructure indicators) to draw an inference about another (quality of vacation), the validity of which is dubious and the induced rankings obviously distorted, even though there is a positive correlation between the two.
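
To make this concrete, here is a small, purely hypothetical simulation: even when the proxy (infrastructure) is positively correlated with the target (vacation quality), ranking by the proxy can badly scramble the ranking by the target. All the data below are invented.

```python
# Hypothetical illustration: ranking by a positively correlated proxy.
import random

random.seed(2)

# 50 invented "countries": quality is what we care about;
# infrastructure is a noisy proxy positively correlated with it.
n = 50
quality = [random.gauss(0, 1) for _ in range(n)]
infrastructure = [0.5 * q + random.gauss(0, 1) for q in quality]

# Rank the countries (by index) under each criterion, best first.
by_quality = sorted(range(n), key=lambda i: -quality[i])
by_proxy = sorted(range(n), key=lambda i: -infrastructure[i])

overlap = len(set(by_quality[:5]) & set(by_proxy[:5]))
print(f"top-5 agreement between the two rankings: {overlap}/5")

# Spearman rank correlation between the two orderings.
rank_q = {c: r for r, c in enumerate(by_quality)}
rank_p = {c: r for r, c in enumerate(by_proxy)}
d2 = sum((rank_q[c] - rank_p[c]) ** 2 for c in range(n))
rho = 1 - 6 * d2 / (n * (n ** 2 - 1))
print(f"rank correlation: {rho:.2f}")  # positive, yet the top lists can diverge
```

The correlation comes out positive, but the induced ranking, especially at the top, is another matter.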

I suppose if I tried harder to convince people to follow my vacation ranking I could solve my validity problems that way. What conditions prevent me from succeeding? Assume that my world was small and relatively closed. How then might I go about pulling it off?