The latest issue of Metaphilosophy (October 2015) contains “Appearance and Reality in The Philosophical Gourmet Report: Why the Discrepancy Matters to the Profession of Philosophy” by Brian Bruya (Eastern Michigan). It is a “data-driven critique” of the Philosophical Gourmet Report (PGR) that argues that “the actual value of the PGR, in its current form, is not nearly as high as it is assumed to be and that the PGR is, in fact, detrimental to the profession.” (I had put this in the Heap of Links, but there were some requests from readers for discussion.)
Below is a summary of the criticisms (from pages 678-680). For the arguments in support of these criticisms, see the full paper.
First, there is a selection bias from the beginning in [PGR editor Brian] Leiter’s method of selecting evaluators. Leiter uses no acceptable sampling procedure that could lead to generalizable conclusions beyond the opinions of the evaluators of his survey. In other words, one cannot conclude from the results of the PGR that Notre Dame has the program with the eighteenth-highest reputation of all philosophy programs in the United States, as the PGR purports one can. Instead, one can only conclude that it has the eighteenth-highest reputation among the select group of evaluators that Leiter and his handpicked group of advisers have deemed worthy, a group that makes up a mere one-half of 1 percent of all working philosophers, while systematically excluding all others (except those two hundred unnamed philosophers who were invited but did not participate).
Second, while Leiter seems to suggest that this one-half of 1 percent of philosophers represents the cream of the crop of all philosophers and so is most worthy to undertake such evaluations, in the way he executes his survey these experts are mostly working outside their own areas of expertise, and so the rationale of exclusivity, such as it is, crumbles.
Third, the exclusivity is not innocent. There is an unstated assumption (or set of assumptions) driving the selection of evaluators that systematically excludes certain portions of the community of working philosophers. What are those assumptions, and why are they so central to the PGR’s methodology?
Fourth, and a possible answer to the question of what the underlying assumptions are, the selection biases are manifested in the results in the form of undercounting the area of history, resulting in lower scores for programs that have strengths in specialties that Leiter categorizes under the area of history.
Finally, and a further answer to the question of assumptions, the selection bias is also manifested in the results in the form of undercounting the area of “other,” which plays a negligible role in the overall ranking and for this reason discourages any program from hiring in any specialty of other, or in any specialty not encompassed by the PGR’s list of specialties.
A stark conclusion can be drawn from these five flaws. The PGR is structured to marginalize and/or exclude experts working in specialties that the PGR places under the areas of value, history, and other—82 percent of all specialties according to the APA’s accounting. This practice of marginalization and exclusion begins to affect the profession as soon as any university takes the PGR seriously enough to make personnel decisions in order to affect a program’s ranking. If any school sets out to raise its philosophy program’s ranking in the PGR, it will purposely marginalize specialties in the areas of value and history and outright exclude specialties in the area of other and specialties that do not even make the PGR slate. The more programs do this, the more Ph.D. programs reflect the biases built into the PGR, and as graduates from these programs take jobs at non-Ph.D.-granting colleges and universities, the more the field of philosophy overall begins to resemble the biases implicit in the PGR’s methodology. In this way, the PGR becomes a self-fulfilling prophecy, projecting its own biases about the right way to do philosophy onto the rest of the field, thereby molding the field in its own image.
What about an answer to the second question above: Why are the assumptions that are built into the PGR’s selection process of evaluators so central to the PGR’s methodology? When we notice that the APA specialties that the PGR would, or does, categorize under other are often associated with feminism and non-Western ethnicities and cultures, one cannot help but wonder whether the PGR’s hidden biases are based in sexism, racism, ethnocentrism, and xenophobia.
Defenders of the PGR often recommend that its rankings be taken with a grain of salt, but because of its status in a profession that actually does rely on this flawed instrument in making such important decisions as hiring, the PGR is having an unwarranted and negative effect on the profession. The harm can be seen most saliently in the way that non-Western philosophy is treated. Despite the growth in multiculturalism across all levels of education and despite calls for diversity and globalization in all corners of academia (Bruya 2015), any philosophy Ph.D. program that considers hiring in any branch of non-Western philosophy, and that strives to achieve or maintain a high rank in the PGR, need only look at the above biases in the PGR to be convinced that it would be an infinitely bad idea to make such a hire. That post could be used instead to hire in an area that would have an impact on a program’s rank. For instance, even if a program hired the most distinguished scholar working in Indian philosophy, the likelihood of the general slate of PGR evaluators recognizing this person’s name for the overall ranking would be little to none. Thus, this person, prominent in her or his own field, would do nothing to raise the program’s rankings overall and would instead occupy a slot that could be filled by someone who could raise the program’s rank. This is why the PGR is having a deleterious impact on the profession through its deep-seated methods of exclusivity and why it is worth being examined in detail in this article.
Professor Bruya makes some suggestions for improving the PGR. They include:
- use a random sample for the evaluator pool
- use mathematically aggregated specialty scores to calculate overall ranking
- allow evaluators to evaluate only one specialty
- revise the list of specialties
- offer a special score to indicate comprehensive balance within programs
The whole article is worth a read. Don’t miss the disclaimer, at the end of the essay, that Professor Bruya recommends be added to the PGR until it is revised.
It will be interesting to see whether, and if so how, the PGR’s methodology will change under the new editorship of Berit Brogaard (Miami).
(image: detail of Frank Stella, “Nunca Pasa Nada”)
Note: Please keep comments focused on the topic, and not on people’s personalities. Additionally, I’d urge readers to look at the article by Professor Bruya, to read his arguments for, and elaborations of, his critiques and suggestions.
UPDATE (12/15/15): Brian Leiter (Chicago), creator and former editor of the PGR, has responded to Bruya at his blog. He begins his response by claiming that Bruya’s critique of the PGR is motivated by self-interest (“he plainly feels he and his friends in the profession are undervalued because of the PGR”). It is not clear what evidence Leiter has for this claim. Nor does Leiter report on whether his response to Bruya is similarly motivated by self-interest.
Leiter concludes with two predictions. First, he claims that there will be no changes to the PGR as a result of Bruya’s critique (or, one would assume, various similar critiques, for example, here). Second, he claims that Metaphilosophy will withdraw Bruya’s article. I’d take those bets. (Unless, of course, we have been misled about the changed leadership of the PGR.)
UPDATE 2 (12/15/15): Bruya responds to Leiter here.