A Detailed Critique of the Philosophical Gourmet Report (updated)


The latest issue of Metaphilosophy (October 2015) contains “Appearance and Reality in The Philosophical Gourmet Report: Why the Discrepancy Matters to the Profession of Philosophy” by Brian Bruya (Eastern Michigan). It is a “data-driven critique” of the Philosophical Gourmet Report (PGR) that argues that “the actual value of the PGR, in its current form, is not nearly as high as it is assumed to be and that the PGR is, in fact, detrimental to the profession.” (I had put this in the Heap of Links, but there were some requests from readers for discussion.)

Below is a summary of the criticisms (from pages 678-680). For the arguments in support of these criticisms, see the full paper.

First, there is a selection bias from the beginning in [PGR editor Brian] Leiter’s method of selecting evaluators. Leiter uses no acceptable sampling procedure that could lead to generalizable conclusions beyond the opinions of the evaluators of his survey. In other words, one cannot conclude from the results of the PGR that Notre Dame has the program with the eighteenth-highest reputation of all philosophy programs in the United States, as the PGR purports one can. Instead, one can only conclude that it has the eighteenth-highest reputation among the select group of evaluators that Leiter and his handpicked group of advisers have deemed worthy, which makes up a mere one-half of 1 percent of all working philosophers, while systematically excluding all others (except those two hundred unnamed philosophers who were invited but did not participate).

Second, while Leiter seems to suggest that this one-half of 1 percent of philosophers represent the cream of the crop of all philosophers and so are most worthy to undertake such evaluations, in the way he executes his survey, all such experts are mostly working outside their own areas of expertise, and so the rationale of exclusivity, such as it is, crumbles.

Third, the exclusivity is not innocent. There is an unstated assumption (or set of assumptions) driving the selection of evaluators that systematically excludes certain portions of the community of working philosophers. What are those assumptions, and why are they so central to the PGR’s methodology?

Fourth, and a possible answer to the question of what the underlying assumptions are, the selection biases are manifested in the results in the form of undercounting the area of history, resulting in lower scores for programs that have strengths in specialties that Leiter categorizes under the area of history.

Finally, and a further answer to the question of assumptions, the selection bias is also manifested in the results in the form of undercounting the area of ‘other’, which plays a negligible role in the overall ranking and for this reason provides a negative rationale to any program wishing to hire in any specialty of other, and in any specialty not encompassed by the PGR’s list of specialties.

A stark conclusion can be drawn from these five flaws. The PGR is structured to marginalize and/or exclude experts working in specialties that the PGR places under the areas of value, history, and other—82 percent of all specialties according to the APA’s accounting. This practice of marginalization and exclusion begins to affect the profession as soon as any university takes the PGR seriously enough to make personnel decisions in order to affect a program’s ranking. If any school sets out to raise its philosophy program’s ranking in the PGR, it will purposely marginalize specialties in the areas of value and history and outright exclude specialties in the area of other and specialties that do not even make the PGR slate. The more programs do this, the more Ph.D. programs reflect the biases built into the PGR, and as graduates from these programs take jobs at non-Ph.D.-granting colleges and universities, the more the field of philosophy overall begins to resemble the biases implicit in the PGR’s methodology. In this way, the PGR becomes a self-fulfilling prophecy, projecting its own biases about the right way to do philosophy onto the rest of the field, thereby molding the field in its own image.

What about an answer to the second question above: Why are the assumptions that are built into the PGR’s selection process of evaluators so central to the PGR’s methodology? When we notice that the APA specialties that the PGR would, or does, categorize under other are often associated with feminism and non-Western ethnicities and cultures, one cannot help but wonder whether the PGR’s hidden biases are based in sexism, racism, ethnocentrism, and xenophobia.

It is often recommended by defenders of the PGR that the PGR rankings be taken with a grain of salt, but because of its status in the profession, which actually does take account of this flawed instrument in making such important decisions as hiring, the PGR is having an unwarranted and negative effect on the profession. The harm can be seen most saliently in the way that non-Western philosophy is treated. Despite the growth in multiculturalism across all levels of education and despite calls for diversity and globalization in all corners of academia (Bruya 2015), any philosophy Ph.D. program that considers hiring in any branch of non-Western philosophy, and that strives to achieve or maintain a high rank in the PGR, need only look at the above biases in the PGR to be convinced that it would be an infinitely bad idea to make such a hire. That post could be used instead to hire in an area that would have an impact on a program’s rank. For instance, even if a program hired the most distinguished scholar working in Indian philosophy, the likelihood of the general slate of PGR evaluators recognizing this person’s name for the overall ranking would be little to none. Thus, this person, prominent in her or his own field, would do nothing to raise the program’s rankings overall and would instead waste a slot that could be filled by someone who could raise the program’s rank. This is why the PGR is having a deleterious impact on the profession through its deep-seated methods of exclusivity and why it is worth being examined in detail in this article.

Professor Bruya makes some suggestions for improving the PGR. They include:

  • use a random sample for the evaluator pool
  • use mathematically aggregated specialty scores to calculate overall ranking
  • allow evaluators to evaluate only one specialty
  • revise the list of specialties
  • offer a special score to indicate comprehensive balance within programs

The whole article is worth a read. Don’t miss the disclaimer, at the end of the essay, that Professor Bruya recommends be added to the PGR until it is revised.

It will be interesting to see whether, and if so, how, the PGR’s methodology will change under the new editorship of Berit Brogaard (Miami).

(image: detail of Frank Stella, “Nunca Pasa Nada”)


Note: Please keep comments focused on the topic, and not on people’s personalities. Additionally, I’d urge readers to look at the article by Professor Bruya, to read his arguments for, and elaborations of, his critiques and suggestions.

UPDATE (12/15/15): Brian Leiter (Chicago), creator and former editor of the PGR, has responded to Bruya at his blog. He begins his response by claiming that Bruya’s critique of the PGR is motivated by self-interest (“he plainly feels he and his friends in the profession are undervalued because of the PGR”). It is not clear what evidence Leiter has for this claim. Nor does Leiter report on whether his response to Bruya is similarly motivated by self-interest.

Leiter concludes with two predictions. First, he claims that there will be no changes to the PGR as a result of Bruya’s critique (or, one would assume, various similar critiques, for example, here). Second, he claims that Metaphilosophy will withdraw Bruya’s article. I’d take those bets. (Unless, of course, we have been misled about the changed leadership of the PGR.)

UPDATE 2 (12/15/15): Bruya responds to Leiter here.

50 Comments
Enzo Rossi
5 years ago

Some philosophy jobs are generally more desirable for non-philosophical reasons: location, salary, resources, and so on. The PGR’s non-random sampling and some of its other biases are responses to that fact. I don’t think the PGR was ever meant to track philosophical excellence. It’s a heuristic to predict a PhD graduate’s chances of ending up in a philosophy job that’s relatively desirable from a non-philosophical point of view. Insert ceteris paribus clauses at the relevant points if you must. And yes, this is hyperbole.

Tim Kenyon
Reply to  Enzo Rossi
5 years ago

I don’t see much ground to accept any of these claims, though. The very first sentence, taken entirely in isolation, might be true, but not obviously in a way that reflects anything the PGR does. Non-philosophical factors, including salary and desirability of location, don’t particularly travel together (the latter, moreover, being highly subjective); it’s hard to see much evidence that the PGR’s biases are responsive to those heterogeneous factors in any case. Plenty of allegedly philosophically desirable jobs are in damned awful places, while others are in nicer places that are unliveably expensive. And lots of absolutely lovely places to live have departments that are unfamiliar or of little interest to PGR evaluators by and large.

Nor is it clear to whom you’re referring when you write “I don’t think the PGR was ever meant to track philosophical excellence.” It has always been trumpeted as tracking philosophical excellence by many, and I assume they mean to do so. If every allusion to the PGR, including “savvy insider” gossip at conferences, brag notes in departmental newsletters and on program websites, and emails to Deans, had to include the reminder that the PGR is *not an indicator of philosophical excellence*, but is just a heuristic intended to predict graduate job placement in jobs of a class that’s hard to specify without circularity, but in any case without presupposing any good academic reasons for that placement record… then the whole enterprise would never have taken off. Not just because it would be awkward to say all that stuff (though it would be!), but because it would make so many of the desired uses of the PGR instantly self-undermining.

In other words, I don’t think it will fly to say that the PGR was never meant to track philosophical excellence, or quality of graduate education, now that people are paying more attention to the fact that its methodology is disastrously ill-suited to do so. I think the belief that it indicates those things is precisely why it’s been enfranchised by senior and prominent people in the discipline, whose seniority and prominence it has fostered and consolidated in turn, and why participation in and chatter about the PGR has become normalized and entrenched in a subsequent academic generation or two as well. I’m speaking of myself and of the discipline generally when I say that it is socio-epistemically very difficult to admit, not least to oneself, that one has been unreflectively (that is, in some sense *unwillingly*, but also *uncarefully*) socialized into both buying and selling snake oil; so it’s fair to expect that the process of our discipline’s disengaging itself from the PGR will involve both time and some politely silent strategic forgetting all around. Fortunately, as a friend formerly addicted to fantasy baseball pools observed, the sense that there’s something important there to care about evaporates pretty quickly once you stop paying attention to the sub-group that hypes it. Some restraint in what that index finger does with the mouse button might be an important first step towards quietly mitigating this distorting influence on the discipline. Though of course few things could be more effective than people’s just ceasing to participate as evaluators.

E
5 years ago

This confirms what everybody probably already assumed: a ‘randomly’ selected group of elite experts will recommend other elite experts.

Anon Prof
Reply to  E
5 years ago

Right . . . but is that necessarily a bad thing? It may suck (it does!) that those people have all the power in the profession, but *given* that they do, isn’t it good for students to know where they are, since it is in their interests to be taught by the people with the professional power, who can get them jobs?

Matt Weiner
Reply to  Anon Prof
5 years ago

The best indication of which programs will help students get jobs seems like it’d be detailed and transparent information about job placement in programs. This is not only more relevant than measures of “faculty quality” to the question of “Can this program help me get a job?”, it’s also more objective and easier to measure and compare.

JDRox
5 years ago

A critique of the critiques:

1) Yes, the jury pool is *supposed to* be biased in favor of experts.
2) This depends on the kind of expertise. If the experts are experts in which philosophy departments are best, then this doesn’t follow. And certainly some people have more expertise in which philosophy departments are best. I know, for example, that many people have more expertise than I in this matter.
3) Yes, there are some assumptions. But this isn’t a critique! The actual critique seems to be that the PGR doesn’t value feminist philosophy etc. as highly as it should. But…Objectively should? Wow, that’s hard to evaluate. Subjectively should? Well, evidently not, in the eyes of the experts, if the jury pool has been selected correctly.
4) This depends on how heavily history *should* be weighed. For that, see above. In any case, by my lights, around 2010, a good strategy (a strategy that was actually used for good effect) for shooting up in the overall rankings was to hire a bunch of history people.
5) Again, this is only a critique if these other areas *should* be weighed higher. See above.

A critique of the suggestions:

1) A random pool would be worse than Leiter’s hand-picked pool. Note that this is true even if Leiter’s picks have only, say, a 10% chance of actually having above-average expertise. (Yes, there are some plausible assumptions being made in this comment.)
2) How will these scores be mathematically aggregated? The function one uses will embody the kind of “political” assumptions Bruya seems worried about. And “math” isn’t going to deliver some sort of objective answer: If they’re summed, the process will massively privilege breadth. If they’re averaged, it’ll massively privilege depth. And so on.
3) I guess we could do this, but it’s hard to see the rationale since many people have expertise in more than one specialty.
4) As with anything, I’m sure the list of specialties isn’t perfect, so I have no objection to this in principle.
5) The way in which this score is computed will be subject to the same sort of politics/worries as the function used in (2).
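The breadth-versus-depth point in (2) is easy to make concrete with a toy calculation (the specialty scores below are invented for illustration, using the PGR’s 0–5 scale):

```python
# Toy illustration: the aggregation function, not the data, decides the ranking.
# Hypothetical specialty scores for two imaginary departments (0-5 scale).

broad = [3.0, 3.0, 3.0, 3.0, 3.0, 3.0]  # solid in six specialties
deep = [4.8, 4.7]                        # outstanding in just two

def summed(scores):
    return sum(scores)

def averaged(scores):
    return sum(scores) / len(scores)

print(summed(broad), summed(deep))      # summing favors the broad department
print(averaged(broad), averaged(deep))  # averaging favors the deep department
```

Any intermediate choice (say, a weighted sum over a fixed slate of specialties) just relocates the same “political” decision into the weights.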

WP
Reply to  JDRox
5 years ago

And it just so happens that almost half of “the experts” come from Princeton, Rutgers, Harvard, Pitt, Cornell, UM, MIT, and Yale?

JDRox
Reply to  WP
5 years ago

That obviously didn’t happen by chance. But what does that have to do with anything?

WP
Reply to  JDRox
5 years ago

Seems like reason to think that the selection bias is not just “bias in favor of experts.”

Yet Another Anon Grad Student
5 years ago

The PGR does kinda suck. Why? Because it is a survey run by a private individual, and is thus limited by the amount of personal time and energy that this individual can expend on it. The evaluators are “selected” by Leiter sending people emails. Why is anyone surprised by the fact that only certain people bother to respond to an email by a private individual asking them to help him out with his survey, and that these people would primarily be his friends that run in the same academic circles as him? The fact that someone actually published an article critiquing the PGR is rather surreal. As though it were some study published in a journal that was funded by a research grant and backed by a university. It speaks to the total antipathy of the field toward any sort of evaluation or ordering of departments. If you don’t like the PGR, the solution is very, *very* simple: come up with a better ranking.

How does one do this? Well, first you decide what it is that you want your ranking to track. Let’s say it’s graduate placement. Then you look at the data and find relevant correlations. Do grad departments with more history and value theory professors place more grad students than those that do not? What is the correlation between mean number of faculty publications and grad placement? Etc. Then you use those to construct a ranking metric that is predictive of grad placement. This isn’t rocket science.
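The recipe sketched above — pick a target variable, measure correlations, weight predictors accordingly — looks roughly like this in code. Every number below is made up for illustration; the point is the procedure, not any conclusion about real departments:

```python
import statistics

# Hypothetical data for six imaginary departments: faculty headcounts in two
# areas, and the fraction of PhD graduates placed in permanent academic jobs.
history_faculty = [2, 5, 1, 4, 3, 6]
value_faculty = [3, 4, 2, 3, 4, 5]
placement_rate = [0.40, 0.65, 0.30, 0.55, 0.50, 0.70]

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Weight each predictor by how strongly it tracks the target variable.
w_hist = pearson(history_faculty, placement_rate)
w_value = pearson(value_faculty, placement_rate)

# Score and rank departments by the weighted combination of predictors.
scores = [w_hist * h + w_value * v
          for h, v in zip(history_faculty, value_faculty)]
ranking = sorted(range(len(scores)), key=lambda i: -scores[i])
```

A serious version would of course need many more predictors, out-of-sample validation, and honest placement data — which is the commenter’s real point: the hard part is collecting the data, not the statistics.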

The APA could do this. But I think we all know that isn’t going to happen, for obvious sociological reasons.

WP
Reply to  Yet Another Anon Grad Student
5 years ago

For the same reasons, no one should ever publish an article merely criticizing a theory. If you don’t like a theory, the solution is very, *very* simple: come up with a better one.

Yet Another Anon Grad Student
Reply to  WP
5 years ago

Well, the reasons were that this is something unofficial done by private individuals that isn’t backed by universities or published in journals. So the analogy you’re looking for is: no one should ever publish an article merely criticizing a theory put forth on a blog. If you don’t like it, come up with a better one by actually doing rigorous research. I completely agree.

WP
Reply to  Yet Another Anon Grad Student
5 years ago

What do you think is the relevant difference between a blog post and journal article? I would guess the likelihood of being influential. We know the PGR has a significant effect on the discipline. Students make grad school decisions based on it. It influences hiring decisions. It’s what departments use to convince their deanlets that they should get more funding.

If someone self-published a paper online that became significantly influential, of course it would be appropriate to write a journal article responding to it.

Yet Another Anon Grad Student
Reply to  WP
5 years ago

(1) No, actually the relevant difference is that published articles have been subjected to rigorous peer review. Since we are discussing statistics, this would be equivalent to an economist publishing an article in a journal criticizing a survey done by some person and his friends that was then published in the NY Times. Economists would never do this because it is an obvious waste of time and the person’s results won’t pose a challenge to their developed theories. All they would need to do is cite their actual research in a letter to the editor, and this would demolish the survey. Of course, none of this is true here because the people doing the criticism obstinately refuse to actually conduct any real rigorous research into how departments rank in grad placement.

(2) Given the horrendously low citation rates in philosophy, virtually every blog post from major blogs is going to be orders of magnitude more influential than the majority of papers published in journals. So by your line of reasoning we should only be citing blog posts. In particular, your line of reasoning suggests that for an argument about X, we should write a journal article responding to the most influential blog post about X rather than to the likely far better arguments presented for X in the peer reviewed literature.

WP
Reply to  WP
5 years ago

Why would journal articles only be appropriate in response to things that have been subjected to rigorous peer review? So much for all the articles on books…

Yet Another Anon Grad Student
Reply to  WP
5 years ago

Books published by a respected academic publisher, while not quite as good as peer reviewed journal articles, are still vetted and still represent the mature thought of an expert. A blog post, on the other hand, does not represent the mature view of an author and is not vetted at all. Citing a blog post or an anecdote from a conversation is one thing. Writing an entire article criticizing something someone said informally is quite another.

Another oddity of this is that journals are supposed to be a repository for the collective knowledge of the field, not mediums in which we debate bureaucratic and social policy of the field itself. If you want to affect the field at a social level you are better off just fighting fire with fire and posting your critique at another blog or bringing it up at an APA meeting. Look at what’s happening now. The article only has an effect by being discussed in social media.

But all of this is totally beside the point I was originally making, which was that someone bothered to publish an article criticizing an unofficial survey while, as far as I’m aware, no one has put serious effort into researching how departments actually place their graduates. There have been a few informal attempts here and there. But they haven’t amounted to much and are typically politically loaded. Just as the PGR’s reputational ranking system is politically self-serving for those involved, virtually any informal attempt will be. The people running it will invariably choose the methods that will get them the result they want (e.g., by counting placement at a community college the same as placement at an R1 university). That is why a collective effort is necessary. Preferably it should be done by hiring statisticians outside the field who have no stake in the results.

WP
5 years ago

I’m honestly shocked that the PGR uses snowball sampling. It’s a decent method of identifying experts for qualitative research. For this kind of quantitative research, though—yikes!

If people tend to rate the work of people they know more highly (and it seems like there’s every reason to expect that), using snowball sampling will result in evaluators’ institutions being systematically overvalued. And, indeed, there’s a significant correlation between the number of evaluators that come from a school and its ranking. I don’t get why Professor Leiter just dismisses this.
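The mechanism WP describes — raters boosting work they know, combined with a pool drawn largely from a few departments — can be simulated in a few lines. Everything here (the bonus size, the rating rule, the pool shape) is an assumption chosen for illustration, not an estimate of the actual PGR:

```python
import random

random.seed(0)

N_DEPTS = 20
# "True" quality each department would receive from a fully informed rater.
true_quality = {d: random.uniform(2.0, 4.5) for d in range(N_DEPTS)}

# Snowball-style pool: 40 of 50 evaluators come from four "core" departments.
core = [0, 1, 2, 3]
evaluators = ([random.choice(core) for _ in range(40)]
              + [random.randrange(N_DEPTS) for _ in range(10)])

FAMILIARITY_BONUS = 0.5  # assumed bump for work the rater already knows

def rate(evaluator_dept, target_dept):
    # Crude proxy for familiarity: raters boost their own department's circle.
    score = true_quality[target_dept]
    if target_dept == evaluator_dept:
        score += FAMILIARITY_BONUS
    return score

mean_score = {d: sum(rate(e, d) for e in evaluators) / len(evaluators)
              for d in range(N_DEPTS)}

# Each department's inflation over its "true" quality is proportional to how
# many evaluators it supplied -- the systematic overvaluation WP predicts.
inflation = {d: mean_score[d] - true_quality[d] for d in range(N_DEPTS)}
```

Under these assumptions the inflation is small per rating but perfectly tracks evaluator counts, which is exactly the kind of score/evaluator-count correlation the comment worries about.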

Jamie Dreier
Reply to  WP
5 years ago

Maybe the reason for the significant correlation is that the higher-ranked departments have on average more expert philosophers, and Brian is trying to get experts to answer the questionnaires.

WP
Reply to  Jamie Dreier
5 years ago

The expertise needed to evaluate other philosophers’ work—at least in the way relevant to rankings—doesn’t really seem like such a specialized skill. I would think that basically every active researcher in a “top 50” department is qualified to make that kind of evaluation. Isn’t that what we normally think—that you don’t have to be a truly exceptional philosopher to make a syllabus that gets at the best work in an area, or to be a qualified peer reviewer?

David Wallace
Reply to  WP
5 years ago

Just to repeat from the other thread: the correlation is not in fact between institutional PGR rank and number of evaluators at the institution. (That correlation is extremely weak.) It’s between institutional PGR rank and number of evaluators with a PhD from the institution. That’s only to be expected if placement record tracks faculty quality and faculty quality isn’t too wildly varying in time.

babygirl
Reply to  WP
5 years ago

I also think this is a weird criticism … Of course the people who are extremely active researchers (you know, the people who generally tend to show up at top-tier research universities) would have a better idea about the current state of research in their particular fields. I imagine a more random sampling would protect the status quo *even more* than such snowball sampling, since there would be many people in the random sample who, for whatever reason, aren’t very active in their research.

I agree that it’s an imperfect system, but of course, no perfect system is being proposed. Like many people here, I found the PGR very helpful as I applied to grad school and researched departments that are excellent in my subfield. Is there room for improvement? Yes. But these remarks seem strangely personal and hyperbolic, more like a rant than helpful suggestions, and they also seem to miss the mark.

WP
Reply to  babygirl
5 years ago

Would you really expect someone at Rutgers to have a far better idea of the state of their field than someone at OSU or Brown? It’s not like we’re comparing people at top research institutions to people at teaching oriented liberal arts schools. Isn’t every PGR ranked department filled with active researchers?

Like I said, there’s every reason to expect the sampling method to result in bias toward schools with more evaluators, and the results don’t exactly ease that concern. r = .78!

Not sure what seems strangely personal and hyperbolic about this.

Alexus McLeod
Reply to  WP
5 years ago

To reply to the last two comments–
The question of how one is deemed an expert seems to me a pretty relevant one. There are clearly passed-over evaluators on some sections of the PGR. In my own area of Chinese Philosophy this is clearly and egregiously the case. The evaluators chosen were indeed experts, but nowhere near a fair representation of all of the top experts in the area. The list would look very different if you polled all those in Chinese Philosophy and asked us who the experts were, for example–it would likely include those Leiter selected, but would be more diverse. And in areas like my own, most of the experts (those who I would think of as the experts, for example), are not at the kinds of institution Leiter thinks of as the best. The extremely active researchers in my area are not always (or even often, really) in the highest PGR ranked departments, even on the Chinese Philosophy list! So the “that’s where the experts are” case doesn’t work particularly well here, at least. And if this is going on in Chinese Philosophy, I suspect the same thing is going on in other areas. There seems some circularity involved here (on Leiter’s part). The higher-ranked places are where the experts are, but those who are perceived as experts are so perceived *because* they’re at the higher-ranked places. Why not allow for a poll of all of the people in a given area to decide who the relevant experts are? Why not think that those employed in a certain area at least have the ability to determine who can be considered experts in their field? I can’t think of any reason for resisting this other than mistrust of the acumen of the philosophical hoi polloi. But if that’s what’s going on, then there is clearly bias built into the PGR.

postdoc
5 years ago

I’d say this article is very well done and shows that the PGR needs to be significantly overhauled. Leiter may try to dismiss it, but I doubt Brogaard will.

postdoc
5 years ago

None of the broader points would be particularly affected as far as I can tell, but Leiter is right that the author incorrectly states that the PGR puts 15 programs under M&E. This should be corrected.

Christopher Gauker
5 years ago

The article rightly emphasizes (p. 664) that the PGR reviewers are not in fact experts in the relevant sense. They have very little first-hand acquaintance with the work of the people in the departments they are ranking. They are simply not competent to make the judgments they are making. The article also cites the usual response to this, namely, that the aggregation of partially informed evaluators can produce a reliable ranking (note 11). However, the article fails to mention an obvious flaw in this answer, namely, that the aggregation does nothing to filter out the effect of shared biases. It is easy to think of shared biases that might have a big effect, for instance, a bias towards departments that have been strong in the past, a bias toward departments in universities that are strong overall, and a bias toward departments that have been ranked highly in the PGR.

AnonEMouse
Reply to  Christopher Gauker
5 years ago

I don’t understand why this disconnect you note matters. Happy to assume you’re right that the PGR reviewers are faux experts.
But aren’t these allegedly faux experts the very same people who will be looking at candidates to determine whether they should get a job? So–the argument goes–when they look at candidates’ CVs and letters, they are going to be employing their allegedly faux expertise in a substantially similar way to when they are doing their PGR reviewing. The PGR gives aspiring graduate students insight into that information–namely, how you are likely to be perceived, if you went to a particular school and work with the philosophers there.

Christopher Gauker
Reply to  AnonEMouse
5 years ago

To AnonEMouse: When a hiring committee reviews a job application, they read the letters of recommendation, and if the application gets through the first stage, the writing sample. If the application gets beyond that, there are interviews off and on campus. They get to know the work of the finalists pretty well. Please look at the list of faculty at any graduate program in philosophy other than the ones you have belonged to. How many of the names do you recognize? How many have written something you have actually read? Don’t just imagine. Please actually look at the lists (as I did when I was a PGR reviewer). In the best cases, the reviewer may have read something or heard a talk by a few of them and will know well the work of two or three. Some more conscientious reviewers may go to the department websites to look at the publication records. You are vastly overestimating the breadth and depth of reading that even the most active members of the profession have taken on.

AnonEMouse
Reply to  Christopher Gauker
5 years ago

I wasn’t suggesting that hiring is some simple exercise; I get that by the end of the hiring process, the hiring department has a very good understanding of its candidate. But about the “first stage”–it seems like there, this kind of pedigree analysis figures in greatly. No?

jdkbrown
Reply to  AnonEMouse
5 years ago

“But aren’t these allegedly faux experts the very same people who will be looking at candidates to determine whether they should get a job?”

Well, no, since the vast majority of hiring occurs in departments with no PGR evaluators on faculty.

AnonEMouse
Reply to  jdkbrown
5 years ago

Point well taken, but I was talking about at these purportedly “desirable departments”–where so many PGR reviewers are plucked from.
I don’t believe that these jobs are all that desirable, but I think the PGR is aimed at an audience of people who want those jobs. Disagree?

Thomas
5 years ago

As a prospective graduate student, I am and always have been BAFFLED as to how anyone could think that the PGR is not a useful device for myself and others like me. Is it perfect? No, likely not — but general elections aren’t either. Is it everything? No — I used it sensibly alongside my own research and the opinions of faculty I know and trust (as well as bearing in mind the explicit provisos re: its own limitations). But why anyone would think the average analytic philosopher (aspiring) would be better off WITHOUT it completely…? Well, again, BAFFLING!

Applying
Reply to  Thomas
5 years ago

I’m inclined to agree – it’s in no way ideal, but for us aspiring graduate students, better than nothing at all. I have yet to find any similar resource that is as informative (the APA’s guide, while certainly beneficial, seems to contain only mere blurbs about each department’s strengths).

Amy Olberding
5 years ago

Professor Leiter’s presentation of Bruya himself bothers me a great deal. E.g., unlike others Leiter cites in his blog postings, Bruya is not characterized as a philosopher but as someone who “teaches philosophy.” His pedigree, from University of Hawai’i, and his specialty, Chinese philosophy, are invoked as evidence that he is motivated by a desire to accrue power or, I suppose, protest his powerlessness in the PGR-preoccupied part of the profession: Whatever happens to the PGR, Bruya self-interestedly just wants Chinese philosophy to “count for more!” After all, given his sorry background, training, and area, we could hardly expect any more sophisticated motivation.

All I say here is of course immediately discredited by the fact that I too am a UH grad specializing in Chinese philosophy. Indeed, Brian Bruya is one of my grad school peers. Still, as another power-seeking, marginal “teacher of philosophy” and unpedigreed mutt living on a steady diet of sour grapes, let me just say that it is no credit to those who continue to defend the PGR that attacks on a peer-reviewed critique of it must resort to schoolyard tactics and rhetorical moves. However predictable this style of response may have been, I dislike leaving this sort of rhetoric unremarked, for it simply reinforces boundaries that aren’t especially helpful for the profession and diminishes not just Bruya but all of us.

AnonEMouse
Reply to  Amy Olberding
5 years ago

Curious to know how you feel about this gem:
When we notice that the APA specialties that the PGR would, or does, categorize under other are often associated with feminism and non-Western ethnicities and cultures, one cannot help but wonder whether the PGR’s hidden biases are based in sexism, racism, ethnocentrism, and xenophobia.

David Wallace
Reply to  AnonEMouse
5 years ago

That is a pretty disturbing statement to find in a peer-reviewed and published article.

Anon1
Reply to  David Wallace
5 years ago

Really? I find it to be a very relevant statement given the analysis (and the groupings made by the leaders of the PGR) and something I have wondered myself (“wondered” being the key word). Hope I am not sued for defamation.

David Wallace
Reply to  Anon1
5 years ago

If you want to wonder it on a blog post, go for it. If you want to do so in the peer-reviewed academic literature, bring evidence.

Anon
Reply to  David Wallace
5 years ago

What I find disturbing is that anyone is disturbed by this statement. Philosophy’s problems with sexism, racism, etc. are patently obvious. Are we not allowed to bring them up in published work?

David Wallace
Reply to  Anon
5 years ago

If you can argue for it, adduce evidence for the specific claim, and cite appropriate peer-reviewed literature, you’re welcome to bring it up.

Tim Kenyon
Reply to  Anon
5 years ago

Registering that one is wondering something does not generally require evidence or proof. Right now, believe me, I’m wondering a lot of things. “Prove it!” would not be a response reflecting care with the notion of evidence (or of wondering). “The specific claim” in the quote is that one cannot help wondering something. Is the complaint that the author did not adduce peer-reviewed evidence for the impossibility of his resisting the wondering urge?

Maybe the complaint is that the statement covertly means, not that it’s natural to *wonder whether* sexism, racism, etc., underlie the grouping together of feminist, philosophy of race, etc., but rather that sexism, racism, etc., *definitely do* underlie the grouping together of feminist, philosophy of race, etc. One could at least argue that. It’s really not what the specific statement says on its face, but maybe the wider context from which it is quoted — rather than the specific quote itself — makes clear that it has this implicature. One would have to adduce evidence for that interpretation, though.

Or maybe the idea is that, in general, one should not merely wonder about things in published papers; that it’s methodologically unacceptable to say things like, “We have seen that X, which naturally makes one wonder whether Y,” unless one then goes on in that very paper to prove (in whatever sense is appropriate to the context) either that X does or that it does not suffice for Y. This would be a remarkably powerful principle, the rationale for which is entirely obscure; and one that, on my recollection of philosophical and empirical literatures alike, is commonly violated.

David Wallace
Reply to  Anon
5 years ago

Very strong accusations against individuals or groups are not somehow insulated from criticism by wrapping them in a “one cannot help but wonder that” operator.

Tim Kenyon
Reply to  Anon
5 years ago

Of course I agree. Do you think that’s what anyone has done here, though? For one thing, your interlocutors seem to have said exactly the opposite: that observing the operations of sexist or racist processes in academic philosophy is hardly a “very strong accusation,” but a commonplace. Many of my own actions and thoughts reflect racism and sexism; admitting this is about as controversial as wearing socks. But the first question is surely whether there is an argument available for the view that, notwithstanding the plain language used, the author is *not* actually noting that the phenomena raise a prospect worth wondering about?

WP
Reply to  Anon
5 years ago

It doesn’t seem like a very strong claim to me either. It seems obviously true that our “canon” is ethnocentric. What’s the alternative explanation—that it just so happened that European philosophers produced a great deal of quality work and Indian, Islamic, African, and Latin American philosophers produced none? And it seems equally clear that the reason philosophy of race and feminist philosophy continue to be marginalized subfields is an effect of sexism and racism and not that there’s not a great deal of interesting, important work to be done there.

The PGR of course didn’t come up with this way of thinking about the discipline on its own, but it seems hard to dispute that it’s the result of ethnocentrism, racism, and sexism—and so we should be concerned if the PGR is reinforcing it.

Mitchell Aboulafia
5 years ago

Bruya points out in footnote 4 that “This critique of the PGR was written when the 2011 edition was the most current edition. The 2014–2015 edition came out when this article was under review.”
As many people know, much has happened since the 2011 edition. Many critiques of the 2014–2015 version of the PGR, which has its own unique problems, e.g., the loss of evaluators in the specializations, are treated under links in the relevant section of “A User’s Guide To Philosophy Without Rankings”: philosophyrankings.com/2015/08/19/the-philosophical-gourmet-report/ .

A non non
5 years ago

Are you still taking bets? I just got a check for $7 from RadioShack, and after looking at David Wallace’s analysis, I’m looking to make it $14.

AnonGradStudent
5 years ago

I am a current grad student, and I certainly did consult the PGR when I was applying to graduate schools. I ended up at a school that is a very good fit for me, with a strong placement record that is much better than its PGR rank would suggest. My worry with the PGR is that it solidifies opinions about schools–which are the “good” ones–on the basis of factors that may or may not be related to what a student is concerned with. It solidifies in the public mentality what sub-fields are best, who the experts are, etc., on the basis of Leiter-selected experts. There is good evidence we ALL are biased, and so to have the ratings shaped this much by one set of biases is problematic. This, to my mind, is a pernicious effect on the discipline as a whole, and it leads to the undervaluing of fields other than those that have a dominant effect on the ratings.

Additionally, I doubt even the most active researcher has the time to keep up with fields she does not publish or teach in, and thus having overall ratings effectively amounts to schools being rated well simply for having famous people. For instance, I don’t work in early modern philosophy, but I know some of the big names, and I expect those would impress me more than names I don’t know, even if I am equally unfamiliar with their work. I might be unique, but I doubt it.

Anon Grad Student
5 years ago

The idea that the best way to find out who the experts in a profession are is by polling a representative sample of members of the profession seems very bizarre to me. Sure, if you start off from a blank slate—with absolutely no knowledge of who the experts are—it’s probably the best way to proceed, but neither Leiter, nor anyone else actually attempting the same project, starts from this position: we all know of quite a few people who are certainly experts, and we have reason to give more weight to their opinions about who the other experts are than we do to the opinions of the median member of the profession. This naturally leads to snowball sampling.

Of course, it means that some bias will be introduced, if there are biases that are more common among the starting group than among the profession as a whole. And there will be—but there will also, presumably, be biases that the starting group are *less* susceptible to than the profession as a whole. And we have no good reason to think that the net impact of these biases on accuracy will be worse in our starting group and in the groups generated from it than in the profession as a whole, whereas we do have good reason to think that the positive impact of expertise on accuracy will be better in the starting group and its daughter groups than in the profession as a whole.

Gina
5 years ago

Aside from hiring decisions, the only other group I can see that would consult the Gourmet would be graduate students. I can’t see how most universities (at least in Australia) see a high Gourmet ranking as all that relevant to their financial interests. I think what we need to move towards is some kind of ranking system that focuses on the quality of graduate student programs, i.e., how much the university invests resources in grad programs, something as simple as the number of staff at the university, harassment issues, and so on. Graduate students, aside from the professionals marginalized by the ranking, are the group most hurt by the persistent use of Gourmet rankings. We also face a century where both analytic and continental philosophy appear to be running out of steam (some people have posted reports about the analytic turn in Europe, for example; analytic philosophy appears to be moving towards hyper-specialization, e.g., the philosophy of a particular scientific discipline), so this kind of bias will also stifle the well-spring of creativity and innovation that the next few generations of grad students represent.