Wednesday’s post on the future of the Philosophical Gourmet Report has a lot of thoughtful comments on it, with some interesting ideas for improving the PGR and alternatives to it. Thanks to those who commented. In this post, I’d like to leave behind discussion of Brian Leiter and focus on the evaluation of the programs. Below the fold are my own thoughts on the matter. Your comments, criticisms, and ideas are, as usual, quite welcome.
(1) Even without something like the PGR, prospective graduate students today have easy access to an enormous amount of information, and can look up all of the relevant publications and citations of all of the relevant faculty at all of the relevant institutions; they can see whose work is discussed on various substantive philosophy blogs; they can email professors quickly and easily all over the world; and then they can also, of course, ask their professors for further assistance. That is really a big change from how things were when the PGR was created. Those who fear a return to the ignorance-fueled halo-effect elitism of old are, I believe, misjudging the situation.
(2) That said, while I think students would be fine, there is further information that they would benefit from that they probably cannot gather on their own: a sense of what “the profession as a whole” thinks (about whose work is especially valuable, and what kinds of philosophical work are important). Now I think there are interesting questions about how we understand “the profession as a whole,” and whether it is possible in any meaningful way to identify what “the profession as a whole” thinks. Nonetheless, the reason something like this is helpful is that it is from “the profession as a whole” that most graduates will be seeking employment, and so it would be helpful for them to have a sense of what “the profession as a whole” thinks about the places they are considering getting their PhDs from.
(3) I think it would be quite beneficial for us to abandon the language of “ranking” and replace it with “evaluation” or something like that. This is for a few reasons. First, as many have long pointed out, the fine-grainedness involved in identifying the, say, 12th, 13th, and 14th best graduate programs is at odds with any serious understanding of what philosophy is. It is also at odds with a plausible epistemology for identifying the judgments of “the profession as a whole.” Second, rankings introduce a new and potentially unhelpful set of incentives for decisions in departments, geared to pleasing the rankers so as to move up a slot or two; additionally, the language of ranking is needlessly competitive and antagonistic. Third, and most importantly, rankings are not needed to provide the relevant information to prospective students. The kind of information they need is not precise rankings, but general information about what “the profession as a whole” thinks of the departments they’re considering. I would bet that rough classifications into groups (great, good, fair, bad), so we can say “X University is good in Area Y,” would be more than sufficient for this purpose.
(4) The American Philosophical Association should not rank departments or otherwise be involved in the evaluation of departments’ reputations as to philosophical quality (barring unusual circumstances). As I said in a comment on the other thread, it is not clear whether such evaluations fit with the official mission of the APA. But more importantly, though the evaluation of departments might be helpful, such evaluation is fraught with potential side effects that a representative organization like the APA should avoid. As I said in the earlier comment, “an organization that is purportedly there to serve the interests of all members of the profession will find itself instead alienating a good number of those members if it gets into the rankings game.” This is not to say that the APA ought not to collect and make available information relevant to evaluations of programs, as it does. It might even expand such work. UPDATE: Here’s a link to the APA’s own statement about rankings.
(5) The ideal evaluations or set of evaluations should reflect that there are various types of philosophy worth studying. It has long been noticed that the PGR is biased towards LEMming-heavy departments. But there is something strange about a department’s predominant areas of study having an influence on the assessment of the department’s quality. Favoring that kind of influence would be like thinking that one history department is better than another because the first has many people who focus on, say, Africa, while the second has many people who focus on, say, South America. Additionally, I share the concerns of others that the PGR has overstated the extent to which there is agreement over who is doing good work (a concern that would apply to a single APA-run evaluation, too). I think that what would be most useful, and most reflective of actual opinion, are multiple aggregators of information or evaluations put out by different individuals, groups, or organizations who are transparent in their interests and methods. I also think that a customizable evaluation, as Noelle McAfee suggests here, would be useful. Lastly, we should be on guard against the concentration of power, and a plurality of rankings would help with that too. [This post has been updated to correct several typos.]