Evaluating Philosophy Graduate Programs


Wednesday’s post on the future of the Philosophical Gourmet Report has a lot of thoughtful comments on it, with some interesting ideas for improving the PGR and alternatives to it. Thanks to those who commented. In this post, I’d like to leave behind discussion of Brian Leiter and focus on the evaluation of programs. Below the fold are my own thoughts on the matter. Your comments, criticisms, and ideas are, as usual, quite welcome.


(1) Even without something like the PGR, prospective graduate students today have easy access to an enormous amount of information, and can look up all of the relevant publications and citations of all of the relevant faculty at all of the relevant institutions; they can see whose work is discussed on various substantive philosophy blogs; they can email professors quickly and easily all over the world; and then they can also, of course, ask their professors for further assistance. That is really a big change from how things were when the PGR was created. Those who fear a return to the ignorance-fueled halo-effect elitism of old are, I believe, misjudging the situation.

(2) That said, while I think students would be fine, there is further information that they would benefit from that they probably cannot gather on their own: a sense of what “the profession as a whole” thinks (about whose work is especially valuable, and what kinds of philosophical work are important). Now I think there are interesting questions about how we understand “the profession as a whole,” and whether it is possible in any meaningful way to identify what “the profession as a whole” thinks. Nonetheless, the reason something like this is helpful is that it is from “the profession as a whole” that most graduates will be seeking employment, and so it would be helpful for them to have a sense of what “the profession as a whole” thinks about the places they are considering getting their PhDs from.

(3) I think it would be quite beneficial for us to abandon the language of “ranking” and replace it with “evaluation” or something like that. This is for a few reasons. First, as many have long pointed out, the fine-grainedness involved in identifying the, say, 12th, 13th, and 14th best graduate programs is at odds with any serious understanding of what philosophy is. It is also at odds with a plausible epistemology for identifying the judgments of “the profession as a whole.” Second, rankings introduce a new and potentially unhelpful set of incentives for decisions in departments, geared to pleasing the rankers so as to move up a slot or two; additionally, the language of ranking is needlessly competitive and antagonistic. Third, and most importantly, rankings are not needed to provide the relevant information to prospective students. The kind of information they need is not precise rankings, but general information about what “the profession as a whole” thinks of the departments they’re considering. I would bet that rough classifications into groups (great, good, fair, bad), so we can say “X University is good in Area Y,” would be more than sufficient for this purpose.

(4) The American Philosophical Association should not rank departments or otherwise be involved in the evaluation of departments’ reputations as to philosophical quality (barring unusual circumstances). As I said in a comment on the other thread, it is not clear whether such evaluations fit with the official mission of the APA. But more importantly, though the evaluation of departments might be helpful, such evaluation is fraught with potential side effects that a representative organization like the APA should avoid. As I said in the earlier comment, “an organization that is purportedly there to serve the interests of all members of the profession will find itself instead alienating a good number of those members if it gets into the rankings game.” This is not to say that the APA ought not to collect and make available information relevant to evaluations of programs, as it does. It might even expand such work. UPDATE: Here’s a link to the APA’s own statement about rankings.

(5) The ideal evaluations or set of evaluations should reflect that there are various types of philosophy worth studying. It has long been noticed that the PGR is biased towards LEMming-heavy departments. But there is something strange about a department’s predominant areas of study having an influence on the assessment of the department’s quality. Favoring that kind of influence would be like thinking that one history department is better than another because the first has many people who focus on, say, Africa, while the latter has many people who focus on, say, South America. Additionally, I share the concerns of others that the PGR has overstated the extent to which there is agreement over who is doing good work (a concern that would apply to a single APA-run evaluation, too). I think that what would be most useful, and most reflective of actual opinion, are multiple aggregators of information or evaluations put out by different individuals, groups, or organizations who are transparent in their interests and methods. I also think that a customizable evaluation, as Noelle McAfee suggests here, would be useful. Lastly, we should be on guard against the concentration of power, and a plurality of rankings would help with that too.

[This post has been updated to correct several typos.]
guest
30 Comments
Matt
7 years ago

“But there is something strange about a department’s predominant areas of study having an influence on the assessment of the department’s quality. Favoring that kind of influence would be like thinking that one history department is better than another because the first has many people who focus on, say, Africa, while the latter has many people who focus on, say, South America.”

I’m not sure I’d agree with the first part, and think the analogy doesn’t obviously hold, either. There would only be “something strange” about this if we thought that all areas of philosophy were equally important or “central” in some sense, but I think that most people, on reflection, probably don’t think that. I’m going to assume that few, if any, departments can employ people with high skills in all areas. (I say that as someone who mostly writes in areas thought to be less central than many others.) Obviously, different people can have different ideas about what is central or important, but it doesn’t seem at all strange to think that, if we are looking at the over-all strength of a department, or its ability to provide training in philosophy, insofar as that is related to faculty quality(*), that a department with great strengths in “core” areas would tend to be better than one with its strengths in more peripheral ones.

To take what I’d hope to be a mostly non-controversial example, if the only people in one department who do history of philosophy all focus on, say, the British Idealists (a group I think is under-valued!), and another has equally good people who cover Kant, modern philosophy, and ancient philosophy, it would seem strange to me not to think that the latter place was better for studying history of philosophy(**). As this sort of thing generalizes, it seems plausible to say that the over-all quality of the department would go up, too.

The analogy with history departments doesn’t seem very close or right to me (though maybe people who know more about the history profession could make a good case for it) because it’s not clear to me how the study of one geographical area could be more central to the study of history than another. A better analogy might be something like whether it’s more important for a history department to have people who specialize in, say, economic history or history of technology on the one hand, or history of popular culture or labor history (just to pick examples semi-randomly) on the other. I don’t have informed opinions on this, but expect that historians do, and that they may well have good reasons for it. But if that’s so, then the analogy doesn’t support the claim.

(*) I think it’s clear that this relationship is pretty loose, but not unimportant.

(**) Obviously, student interest is hugely important, and can suggest very good reasons for going to a “weaker” or “more narrow” department, but as Tom Hurka rightly pointed out in some other threads, because interests often change greatly during grad school, an over-all ranking is still pretty useful.

Justin
7 years ago

Matt, I agree that breadth can be relevant to quality, so I should have made that clearer. Thanks. However, I suppose I don’t agree about the centrality claim. If I want to study ontology, there will be places that have scholars who will be able to help prepare me to do that very well. If I want to study existentialism, there will be places that have scholars who will be able to help prepare me to do that very well. It is not clear to me that one of these areas is more central to philosophy than the other. Maybe another way to put this is to say that centrality is probably best understood here functionally, in reference to serving the interests of the students and scholars, and insofar as people will have different interests, different areas will be more or less central. (None of this is to say that, given a set of interests, there aren’t better or worse ways to study them, or better or worse places at which to study them, of course.)

Matt
7 years ago

insofar as people will have different interests, different areas will be more or less central.

This is, of course, right. But I think two other points are still missing. First, Tom Hurka’s point that because many students’ interests change, a department with broader coverage will tend to be better than one with more narrow coverage. Secondly, and more directly on the question here, it does seem to me that it will be very hard to be a good philosopher _in almost any area_ if, say, you don’t know some epistemology, some metaphysics, some logic, and some history of philosophy. (I think the last is perhaps the most controversial, but I’d certainly argue for it.) Similarly, not knowing much “general” ethics (that is, not applied, not meta, etc.) will make it harder to be a good legal philosopher than not knowing much legal philosophy will make it hard to be a good ethicist. This seems like a plausible way to see that ethics is more central than legal philosophy.

Of course, people’s views can change on this. I think that at one time lots of people would have put philosophy of language in the group I set out above, but that seems less common to me now (as a description of contemporary philosophy), and also less plausible (reporting my own view).

If this is right, then a department that is very weak in these “core” areas will have a harder time giving top training than one that has more strength in them. (Of course, there may still be good reason to pick the program that is weaker in “the core”.) Maybe it’s not right. I’m not certain about it, but it seems to be the case when I look at people’s work, for example.

LMMS
7 years ago

I thoroughly agree on moving away from ranking in favor of evaluations. I only hope that the department can be evaluated as a whole, not merely in terms of the philosophical strengths of faculty. By this I mean that it would be immensely helpful to prospective graduate students to know about the environment of the department. Is the department diverse and supportive of women and other philosophical minorities? This is an important question that is often difficult to get answered. Faculty typically inflate accounts, claiming their department is highly inclusive, while graduate students, who might be honest if given the opportunity of anonymity, fear the repercussions of speaking out when anonymity is not provided. It would be wonderful if the PGR considered this, so that the information could be centrally located.

slacfac
7 years ago

“Nonetheless, the reason something like this is helpful is that it is from “the profession as a whole” that most graduates will be seeking employment, and so it would be helpful for them to have a sense of what “the profession as a whole” thinks about the places they are considering getting their PhDs from.”

This strikes me as obviously false. Job seekers are not looking to work for “the profession as a whole” but for individual departments full of their own prejudices, biases and opinions. For example, I did my graduate work at a department that, after a series of bad experiences, will probably never interview another job candidate from Princeton. If you want to work at that particular school, knowing that Princeton is well regarded by “the profession as a whole” will not help you. Similar examples where one’s choice of graduate school, advisor, topic of study, etc. will also immediately eliminate you from working at all sorts of other places can easily be found. I don’t see Rutgers, for example, hiring a Heidegger scholar any time soon. The department where I currently work would never hire someone whose primary area of research wasn’t historically informed. All of this information, while extremely relevant to job seekers (and thus prospective graduate students as future job seekers), is entirely independent of what “the profession as a whole” thinks.

Beyond this, I would also argue that whatever information a ranking system like the PGR can provide to prospective graduate students is redundant. If you are not getting your undergraduate degree (or MA) from a school where your current professors can advise you about how “the profession as a whole” views various departments, then you are already out of the running for admittance to the kinds of places “the profession as a whole” regards as the best. The information provided by the rankings is not enough to change the fact that having done one’s previous work in a place far enough outside the mainstream of philosophy that no one can adequately advise you about graduate programs is in and of itself enough to disqualify you from most programs thought of as top by “the profession as a whole.” This is not to say that students at those schools are not worthy of admittance to well-regarded programs or that they could not thrive in such an environment, but to say that circumstances are already conspiring against them such that admittance to such a program is extremely unlikely to happen.

Brock
Reply to  slacfac
7 years ago

” If you are not getting your undergraduate degree (or MA) from a school where your current professors can advise you about how “the profession as a whole” views various departments, then you are already out of the running for admittance to the kinds of places “the profession as a whole” regards as the best.”

I’d like to see some evidence for this claim. Do admission committees at the top graduate schools really take undergraduate pedigree strongly into account, in the way hiring committees take graduate pedigree into account? I was under the impression that writing samples were the biggest factor, followed by GRE scores.

If it’s true, then I’d say that’s a far bigger scandal than this current brouhaha over the PGR.

Justin Coates
Reply to  Brock
7 years ago

This isn’t direct evidence for slacfac’s claim, though it is suggestive that undergraduate pedigree does introduce some (further) bias into the process (and as someone who works at a state institution that is in many ways similar to some CSU campuses, this worries me on my students’ behalf):

http://schwitzsplinters.blogspot.com/2011/10/sorry-cal-state-students-no-princeton.html

anon grad
Reply to  Brock
7 years ago

Eric Schwitzgebel has done a nice service in compiling stats on the BAs of those in top programs. Overwhelmingly, they tend not to come from non-elite places.

slacfac
Reply to  Brock
7 years ago

@Brock:
It would be wonderful if someone could provide evidence for such a claim. Unfortunately, proving that pedigree is a major factor in graduate admissions, like proving any claim about what factors into graduate school admissions, relies upon members of admissions committees honestly reporting, and thus consciously being aware of, what factors influence their admissions decisions. Given the well-developed literature on implicit biases and related issues, discovering what factors actually influence admissions decisions seems like a near if not entirely impossible task.* Anecdotally (read: worthlessly), I can say that several faculty members at a variety of graduate institutions have reported taking graduate applications more seriously if they are accompanied by recommendation letters from faculty members that the admissions committee members know personally or with whose work they are familiar. This strikes me as plausible enough that I imagine it is a widespread practice, but again, that’s not hard evidence.

The other factor that speaks against writing samples being the primary deciding factor is the sheer number of applications that top programs receive. I find it hard to believe that any admissions committee member, or even most committees considered collectively, is actually reading through multiple hundreds of writing samples. If not all writing samples are being read, then some other feature is being used to narrow down the pile before the samples are taken into account. I would imagine that what this factor is (or combination of factors are) varies from department to department, but if I am correct it would mean that for certain students even a stellar writing sample won’t help them because no one at x department is going to read it. It’s plausible to think that grades in philosophy classes, overall GPA, GRE scores, extracurricular commitments, and so on are used to narrow down the pile instead of or in addition to pedigree and recommendation letters. However, it seems unlikely to me that pedigree and recommendation letters play no role in the admissions process at early stages.

The short version: neither I nor anyone else can provide you with evidence of this claim, but it strikes me (and perhaps others) as plausible for a variety of reasons.

*Unless of course you can (1) point me to a program whose policy stipulates making admissions decisions based solely on review of anonymized writing samples. I doubt any program uses this method (it’s just too much work to read that many samples), but would love to hear if I’m wrong. OR (2) point me to admissions committee members who are willing to go on record saying that they actively consider pedigree in admissions decisions. This is a claim I doubt many people would be willing to make publicly.

slacfac
Reply to  Brock
7 years ago

Eric Schwitzgebel’s research is something I had forgotten about. I think it’s extremely valuable, and I think it gives a good indication that pedigree bias is a factor in admissions decisions. I wouldn’t call it “hard evidence” because there are, of course, other ways of interpreting the data (i.e., elite schools just do a better job of preparing students for graduate study), but it’s probably the closest one could come to something like hard evidence. Many thanks to those who mentioned it. Something I should bookmark.

DS
Reply to  Brock
7 years ago

“Do admission committees at the top graduate schools really take undergraduate pedigree strongly into account, in the way hiring committees take graduate pedigree into account?”

It seems like there is some evidence for this:

http://schwitzsplinters.blogspot.com/2011/10/sorry-cal-state-students-no-princeton.html

Dale Miller
Reply to  slacfac
7 years ago

You’re of course right that any given search committee will have its own likes and dislikes and that having a Ph.D. from a generally well-regarded program won’t necessarily cut any ice with them. However, people don’t choose graduate programs with the plan of getting a position at one particular university. It’s prudent for them to weigh whether choosing one program over another increases or decreases the probability of their getting SOME job, and having some sense of how well regarded the two programs are by “the profession as a whole” is clearly relevant to that assessment.

Anonymous
7 years ago

Brock: Eric Schwitzgebel has done research on this, and yes, undergraduate pedigree (unfortunately) plays a very large role in graduate admissions.

Brock
Reply to  Anonymous
7 years ago

Looking at Prof. Schwitzgebel’s blog post, it appears that undergraduate pedigree does play a big role, but it looks like it’s based on the school’s reputation, not the department’s. Students from a highly-selective school with an out-of-touch philosophy department (e.g. me in 1992) will have a shot at admission to a prestigious department. To these students, a departmental ranking like the PGR will not be “redundant” as slacfac claims.

Michael Cholbi
7 years ago

I’d just like to add that there’s already a survey underway regarding a dimension of graduate education that has hardly been touched on in these recent debates: how grad programs prepare their students as teachers. I’d encourage all those who are grad students, early career philosophers, or faculty teaching at grad programs to take it:
https://jfe.qualtrics.com/preview/SV_a8LYG1hM1iVXGER?Preview=Survey&_=1

Tim O'Keefe
Reply to  Michael Cholbi
7 years ago

Michael, I just clicked over on the survey, and it said:

“Inclusion/Exclusion Criteria: The subjects must be 18 and either a (i) Philosophy Graduate Student or early career philosopher (a person whose PhD was granted no earlier than August 2011) or (ii) be a faculty member in a Philosophy Department that confers graduate degrees (MA or PhD).”

But when I clicked through and said that I was neither a “Philosophy graduate student” nor “an early career Philosopher, a person whose PhD was conferred no earlier than August 2011,” the survey bounced me out, saying that I was “not one of the populations we are targeting.”

Tim O'Keefe
Reply to  Michael Cholbi
7 years ago

By the way, we have a bunch of grad students here at GSU who are either preparing to teach their own classes or are currently doing so. Would it be appropriate for me (and other DGSs) to e-mail our grad listserv giving your link?

Michael Cholbi
Reply to  Tim O'Keefe
7 years ago

Tim,

The correct link for surveys for grad faculty is here:
https://jfe.qualtrics.com/preview/SV_d5THtWMGvYcXbwN?Preview=Survey&_=1

Please distribute both surveys widely. Thanks.

Tim O'Keefe
Reply to  Tim O'Keefe
7 years ago

Thanks, Michael.

cms
7 years ago

I wonder whether one can improve on measuring “the profession as a whole” beyond what the Leiter report has done, that is, beyond providing a rough statistical analysis of what institutional core players think. Many areas of philosophy are highly interdisciplinary, which I consider one of the core functions of philosophy. Other fields are more or less self-contained, which is not bad either. In some strong departments, philosophy does not stand alone; others focus on a few sub-disciplines. Following Justin’s proposal too closely, we would create the statistical fiction of an average philosophy department, without interpreting this in the more specific sense of, say, providing a broad education or assembling two more specific strengths. Already the Leiter report could suggest to freshly minted PhDs how highly their applications would be ranked before they get a closer look from specialized committee members and from non-specialized faculty who care about the “profession as a whole,” respectively. I think that one can take a closer look both at applicant files and by means of statistics, but this is cumbersome. Thus I read the proposals to enlist the APA to this end rather as an expression of the fact that one needs funds and expertise to do this properly.

Carolyn
7 years ago

Readers might want to take a look at Marcus Arvan’s post at the Philosophers’ Cocoon on a planned survey of graduate students, here: http://philosopherscocoon.typepad.com/blog/2014/09/development-of-a-grad-student-report.html

Laura Grams
7 years ago

Thanks for the helpful discussion above. I’m completely convinced the APA should have no control over an evaluation system, because I think that would inevitably undermine its responsibility to serve all members of the profession. The APA includes many who, like me, attended a grad school some people think of as an expletive, or teach in schools with only an MA program or no grad program at all. All should feel equally represented in their professional organization, and all are serving an important role.

I too am fairly sanguine about the ability of today’s students to locate program information online. Yet their task will be less time consuming and confusing if aggregate evaluations are available. I find the greatest benefit in rankings within areas of specialty, since (1) most students have some idea what they’d like to study, and (2) the evaluators can give more accurate and informed ratings in the sub-fields they know and follow closely. A generalized rating is nevertheless important, especially for those who aren’t sure what to study or will change specialties during grad school. To avoid the problem of speculative judgments from raters who are most familiar with their own fields, the general rankings should include a stronger emphasis on factors like faculty or even student publications, placement data, and so on. Prof. McAfee’s idea of a customizable search makes sense here. Offer as much searchable data as we reasonably can, and let the students decide what factors are important. Programs also will be able to use this information in their self-evaluations and institutional reviews.

A few more suggestions about an evaluation system:
— I don’t think it should include ratings like “bad”. I’m likely biased by recent events, but I worry about the negative consequences of such judgments and I’m not aware of any compensatory value provided by that sort of rating. Isn’t it enough to point out which programs are good or top-rated?

— I like the way the PhilJobs and PhilPapers sites organize areas into general categories like “History / Traditions”, “Metaphysics and Epistemology”, “Science, Logic, and Mathematics”, and “Value Theory”. This isn’t the only way to classify, but is one means of offering generalized rankings without having a one-size-fits-all master list. Detailed specialty ratings could be presented in each sub-field below a few broad categories, and then an aggregate list could be collected for everything in that one broad category, like History. The student who wants a good education without an AOS in mind can consult aggregate ratings for those 3 or 4 broad categories and get a sense of which programs are excellent overall.

— It’s a great idea to include a survey of grad students about matters like mentoring, teaching, climate, and student support. However, it should be kept separate from the program rankings in these subject areas, as I see no meaningful and accurate way to combine the two matters. Students can consult one set of ratings based on faculty perceptions of quality in areas of the discipline, and then consider student evaluations of their experience. If a student survey is done, I also think it should be updated every year and aggregate data made available for scrutiny, as conditions “on the ground” change frequently and the small sample sizes at individual schools are a concern.

— Another reason that students need a sense of what the “profession as a whole” thinks about areas that are deemed “central” (which is not the same as “better”) is that most faculty teach beyond their own area of specialty. Faculty at some schools may be called upon to teach a great variety of courses, so it is important that grad students emerge prepared to teach most courses generally offered as intro and mid-levels for undergrads. Doubtless we’ll all have a different list of what’s “central” for undergrads, but at least a rough consensus on those areas could be found as a guide.

Finally, I can’t let “slacfac”’s comment go without a chuckle, because in my own department we’ve had a recent history of repeated positive experiences hiring Princeton grads! The point being, mileage may vary. Coming up with a ratings system that respects this, while providing as much objective information as possible to prospective students, is a worthy goal. I don’t mean to imply that the PGR did not accomplish a lot of this already, but since it seems likely that other ratings systems are going to be developed, these are the things I’d appreciate.

slacfac
Reply to  Laura Grams
7 years ago

Just to clarify, I meant no offense against Princeton. I know some lovely people from Princeton and currently have colleagues from Princeton with whom I love working. The point was just that the opinion of the “profession as a whole” can have little bearing on the way a particular search committee decides whom to hire. Search committees aren’t composed of “the profession as a whole” but of particular people with their own quirks and biases.

Laura Grams
Reply to  slacfac
7 years ago

Totally agreed. Each group will have its idiosyncrasies; I assume this supports your general point.

Do you think there’s still an important role for one overall rankings list, or should ratings be limited to specific areas? The more I think about it, the more I like the prospect of many specialized ratings conducted by experts in those fields, along with general rankings in a handful of broader categories (e.g. Ethics, History, Logic/Math/Science) which would serve as the most general measures. Students who weren’t sure what to study would notice which programs appeared highly-rated on multiple lists. I know people would disagree about how to define the broader categories, but PhilPapers has a decent system and I’m not sure an ideal solution is possible. In short, this gives students a bit more to latch onto than dozens of specialty lists, while avoiding some problems associated with a single general rankings list.

Tom Hurka
7 years ago

A comment on the proposal to replace rankings with broader groupings, e.g. great, good, fair, bad, or top quartile, second quartile, etc.

As I said in another thread, this was seriously discussed by the PGR board as an option and rejected for the following reason. It places a huge amount of importance on the divides between the groupings, e.g. between great and good or between the first and second quartiles.

Imagine a ranking of 20 departments from 1 to 20. This can be misleading in many ways, e.g. suggesting that there’s a significant difference between #5 and #6 when in fact they’re virtually indistinguishable or the ranking is wrong and #6 is better.

But now imagine that you instead group these departments in the way suggested, i.e., 1-5, 6-10, etc. This is potentially much more misleading. It suggests that there’s a *huge* gap between the #5 department and the #6, when there may not be any. And it suggests that there’s no gap, or none worth mentioning, between #1 and #5 or between #6 and #10. The board concluded that, while rankings have their problems, the potential for being seriously misleading was greater with groupings. They require the drawing of what will often be arbitrary boundaries, with the decision about which side of the boundary a given department falls on having very large implications.

Laura Grams
Reply to  Tom Hurka
7 years ago

Really good point.

Tom Hurka
7 years ago

A little follow-up: the main point is that no way of doing evaluations is totally problem-free. They all have advantages and disadvantages, and these have to be weighed carefully against each other.

David Wallace
7 years ago

I’m in favour of keeping PGR in roughly its current form (setting aside, as per the OP, who runs it):

(1) I don’t agree that it ought to be controlled by some generally accountable body, because (i) I don’t think any such body exists (as a UK philosopher, I am not represented by the APA); (ii) I am ambivalent about the degree to which extant organisations have the democratic credentials to really speak for philosophers even within a country; (iii) a ranking process needs a sort of elitism that is at odds with the representative function of a subject-wide representative body; (iv) most importantly, I think any given report should be treated seriously – or not – based on its methodology and reputation, not on what official status it has. (I’m all in favour of other people setting up other rankings or reports if they want to, and letting those acquire reputation and be assessed on methodology; I think an empirically sound central list of placement statistics is a good idea, for instance.)

(2) I don’t see a principled case against ranking: as I’ve said elsewhere, assessing and ranking is something we do all the time in very many categories (undergrad and grad admissions, grading of student work, peer review of articles, shortlisting and selecting for jobs, grant assessment, tenure assessment,…)

(3) I think a global ranking (over and above speciality rankings) is sufficiently sought-after by potential graduates (and, to a lesser extent, by job-seekers) that simply deciding not to produce one will just mean that people seek some proxy elsewhere. (I suspect that proxy would just be the preserved-in-amber 2011 rankings, but that’s by the bye.)

(4) Tom Hurka makes the case very powerfully that a total ordering is to be preferred to tiers. (I’d add only that a total ordering combined with the actual marks, as in the PGR, makes it plain to the reader whether a given gap is large or small; yes, some readers will ignore this, but no such thing is immune to being misinterpreted.)

(5) Perhaps this shows my small-c conservatism, but setting up a ranking system and running it in a consistent fashion for long enough that it acquires a reputation is a huge task. My experience of volunteer projects (from student welfare to computer games) is that they nearly always turn out to be much harder to set up and sustain than it can seem at the outset, and so a large fraction peter out. The very fact that we have a working ranking system that has remained functional for two decades or so ought to make us very cautious about tearing it down and trying to build some shiny new thing from scratch, rather than making incremental improvements; we easily might end up with nothing. (Obviously, this argument will have no traction for someone who thinks the PGR is useless or worse than useless.)

James
7 years ago

This has been said before (by me, at least) in previous debates on this topic, but it’s worth repeating. From a British perspective, the PGR’s overall ranking is extremely useful when it comes to choosing a Master’s course, but worthless at best when it comes to choosing a Ph.D. This is because, going into a Master’s course here, the student who has just finished their undergraduate degree probably does not know much about philosophy, and does not know what interests they will develop over the course of their study: so going to a broad department that is more likely to have some scholars in whatever area the student finds themselves grabbed by is good (as has been noted above), and – this I think is even more important – going to a department which by its high prestige has attracted a whole bunch of students who are motivated to work hard to earn the place they have received is good. This, in my experience, creates a great community of bootstrapping each other. (There’s got to be a less painful-sounding phrase for that.)

But by the time you get to the Ph.D., and are doing your own research, what’s much more important is having a good supervisor and (to a lesser extent) being in a department that’s strong in the areas you care about. The PGR speciality rankings are good for the latter, granted (and Justin Weinberg’s idea of evaluation is sort of found here already, if I understand him); but if your area is ‘non-core’ (I agree with Justin that this designation says more about philosophical fashion than anything else, even if it’s not totally vacuous), the PGR overall rankings can be more distracting than anything.

All of this is, of course, different in the U.S., where you go to the same department for the taught and research parts of your degree. I would like to see the PGR continue, but I would be very grateful if it was made clear that, with regard to British departments, the PGR overall rankings are a guide for prospective Master’s students only, and if there was some note along the lines that prospective research students should attend only to the speciality rankings, or even ignore the rankings altogether and get to know the work and reputations of potential supervisors.

Jay Garfield
7 years ago

The imminent departure of Brian Leiter from the helm of the Philosophical Gourmet Report gives us an opportunity to rethink the role and structure of such a report in our profession. I offer these remarks as a contribution to that rethinking.

On the whole, while the PGR has provided useful information to many, and while a central source of information about graduate programs is a potential benefit to us, I honestly believe that it has done more harm than good to our profession. Here’s why:

By collapsing a wealth of information into single ranking numbers in a league table, the report reduces, rather than increases, the information available to its consumers, and this in two respects. First, the collapse elides much detail that is important. While that information is available elsewhere, the overriding effect of the number that synthesizes it is inescapable. Second, the mechanism by which information is reduced to that single ranking number is occult: it reflects the explicit and implicit biases of those who do the ranking regarding what is important. And sometimes, I believe, those biases are pernicious. Does anybody, for instance, seriously believe that extraordinary strength in Chinese philosophy would lift a department in the rankings over one with pretty good strength in Anglo-American metaphysics and epistemology? And can we defend that relative weighting? More important, why should we allow others to decide for consumers what kind of weights to use?

A consequence of this is what I see as the real harm done to us by the PGR so far: it significantly narrows our collective self-conception of what is important and “central” to our discipline (often in pernicious ways) in virtue of moving people to value what is done in “top Leiter-ranked departments” over what is done in others, increasing the devaluation and occlusion of, for instance, work in phenomenology, feminist theory, and all of non-European philosophy; it encourages our profession to become narrower as departments working to “move up in the Leiter rankings” become more like the departments already at the top, skewing curriculum and job prospects to this model in the process; and it encourages hiring committees to focus not only on candidates who will move them up in this narrowing process, but also on candidates from departments near the top of those rankings, disadvantaging, for instance, candidates who studied at the best department in their specialty, which, because that specialty is not as highly valued by those constructing the rankings, might not be highly ranked. After all, if you want a good Chinese dinner, you don’t, having discovered that Daniel is ranked at the top of the New York Michelin guide, go to Daniel; you look for the best Chinese restaurant around. And it is a good thing that a city has a variety of cuisines represented, and that restaurants are reviewed on multiple, not single, dimensions. That is gourmet reviewing at its best, but it is not what we do.

All of this is happening at a time when our profession should be working to broaden, rather than to narrow, its scope, and to create more diversity among departments, not less, as all of the other disciplines in the humanities and allied social sciences are doing. We should be encouraging the proliferation of departments that address divergent issues and traditions, not discouraging them. This would make our profession better, and would produce more interesting philosophical work, and more opportunities for our students.

I hence propose that we replace the current one-dimensional ranking system with a two-dimensional database that might look something like this: rows for departments, and lots of columns. We might have columns for many different areas of philosophical interest, including things like Anglo-American analytic M&E, orthodox Indian philosophy, ethics (not just Western!), classical Chinese philosophy, American pragmatism, ecophilosophy, logic, philosophy of science, Native American philosophy, etc. Lots of columns. But ALSO columns for gender balance in faculty and in graduate students, ethnic diversity, placement record in R1 departments, placement record in non-R1 departments, etc. Each column would be assessed by an expert committee, and each cell would have a number, say, from 1 to 7, with 7 indicating being world-leading in that area, and 1 indicating no representation at all.

A student looking for a department with strength in pragmatism and modern Japanese philosophy could sort the database looking for departments with high scores in those areas, and could compare them by seeing where else they were strong or weak. Perhaps gender balance means a lot to him, or perhaps, given his long-standing desire to teach in a liberal arts college, the fact that one had a great placement record in that domain would tip the choice. A student who wanted to study philosophy of science and Chinese philosophy could look for departments where both are strong. She might then prefer the one that also had strength in classical Greek philosophy, and so on. A student just looking for as much strength across the board as possible might look for the departments with the most 7’s and 6’s, and would then note that where they were strong differed. One who just wants a job at an R1 could sort by that column. More information would lead to better decisions.

Moreover, search committees, instead of asking whether a student with a dissertation on Buddhist epistemology came from a “top 10 department,” would ask whether the department where she studied was strong in epistemology, in Buddhist philosophy, and so on. More information would lead to better decisions.

Finally, departments thinking about their long-term development would not ask how to “rise in the rankings” but might look to see where there are niches in the profession they might occupy. Perhaps very few departments are now strong in modern Chinese philosophy; there might be room to build and to occupy that niche. Perhaps there is a need for more departments doing both phenomenology and cognitive science; one might build there. This would lead to more diversity in our profession. Perhaps in a few decades, a department would note that few people are doing Anglo-American analytic metaphysics and epistemology, and try to build strength there.

We don’t need league rankings. We need information. We don’t need a narrow profession. We need a broad one. We don’t need to establish a common set of weightings of importance for areas of specialty; we need to allow our members and students to develop their own preferences. Replacing the Leiter table with a database like this would move us in that direction. Think about it.