Enough Ranking (guest post)


“The project of ranking one place against another completely distorts the sorts of aspirations we should have for the field.”

In the following guest post*, Robert Pasnau, professor of philosophy at the University of Colorado, Boulder, discusses the Philosophical Gourmet Report, an influential and controversial ranking of graduate programs in philosophy based on surveys that ask about faculty reputation.


[Katie Paterson, Zeller & Moye, “Hollow”]

Enough Ranking
by Robert Pasnau

For several decades now, the field of Philosophy has ranked its English-speaking, PhD-conferring programs. It’s an unusual thing. No such prominent efforts exist, for instance, in Economics, English, History, or Political Science. For a long time, this so-called Gourmet Report seemed to me a good thing. And I’m still prepared to grant that the methods used, albeit crude, do roughly track faculty reputation. Even so, it has become clear to me that the effort does not provide the intended benefits, and actually does our field harm. We should stop participating in these rankings, and we should stop encouraging our students to consult them.

Has the world changed or have I changed? In part it’s the world. As the field steadily grows, there are more and more extremely talented people everywhere. If one had the money, one could build a great program from the ranks of dazzling, underemployed junior scholars. The problem is that almost no one has the money, and so, for all but a few departments, faculties are more or less fixed, for the foreseeable future, by budgetary constraints. Of course, junior hires will continue to be made as senior folk retire, but the era of grand program building seems, for now, to be in the past. And as the money thins out, we can expect the talent to become increasingly spread out among a great many departments.

The usually cited justification for this exercise is that it’s useful for prospective graduate students. I don’t doubt that attending an elite program offers significant career advantages, but I don’t think we need these rankings anymore to tell us which programs are beneficial in that way. There’s now all sorts of readily available information on the internet about faculties and their research profiles, and about graduate programs and their placement records. (Placement data for Philosophy, aggregated and analyzed, is available here. It should be said that the practice of publicizing placement records, which we should all applaud, also has its origins in the heroic energies of Brian Leiter, who created the Gourmet Report.) Moreover, once we leave the very top of the Gourmet Report’s list, the rankings are worse than useless—they’re downright counterproductive. Because of the diffusion of talent in the field, schools up and down the list feature barely distinguishable mixtures of great and not so great scholars, teachers, and mentors. Suppose school A has a faculty that’s 10% stronger by scholarly reputation than school B. That makes for a significant difference in the rankings, but it’s a preposterous reason to attend A as a graduate student, for various reasons. First, not only is the difference slight, but it’s likely to be completely washed out by a student’s choice of research area. Who cares if A has a world-renowned logician, if you’re not studying logic? Moreover, as anyone who’s been through graduate school can attest, sheer scholarly reputation is just one small part of what makes for an excellent teacher or advisor. We can all think of people who single-handedly raise a department’s reputational ranking by several notches and yet, through negligence or worse, make for disastrous graduate-student mentors.

In addition to its overall rankings, The Gourmet Report ranks departments by specialization, and this is often cited as the survey’s most useful feature. No doubt it’s helpful to have a list of which programs concentrate in which areas. But here too the rankings are positively counterproductive, because again these rankings are based exclusively on research reputation, and that’s not mainly what should matter to prospective graduate students. Obviously, one wants teachers and mentors who are talented philosophers and engaged with their areas of research. But these days that’s a bar that’s easily crossed. There are brilliant, energetic philosophers everywhere, up and down and off the Gourmet rankings. You can likely find someone well qualified to supervise your dissertation at the local community college. And once that threshold is crossed, what one really wants from a mentor is all the other obvious things: someone who’s generous with their time, who’s encouraging and maybe even inspiring, who’s got a knack for offering productive feedback, who can offer sage practical advice. The rankings do not even attempt to track any of this. And yet this is what should matter most to a prospective student.

As things are, these rankings, despite tracking small and merely reputational differences between departments, have a massive influence on where graduate students enroll. The reason for this is obvious: the Gourmet Report seems to offer an objective and decisive basis for deciding among programs. It would, however, literally be better for a student not even to look at these rankings. Instead the way you should choose a graduate program is to figure out where there are people doing the kind of work you want to do. Apply to those programs. Wait to see where you’re admitted (or, more likely, waitlisted). Once you’re admitted, the real work begins. See if there’s a significant difference in placement records. Visit schools, if you can. Think about what it would be like for you (and your family?) to live there; don’t just say this doesn’t matter, because grad school is hard in all sorts of ways, and you need to do whatever you can to take care of yourself. Get detailed information about the funding available. Talk to the people you’re interested in working with, but don’t put too much weight on that. (Being a great mentor has nothing to do with being a great conversationalist.) More importantly, talk to the grad students at that program, and, most importantly of all, talk to the students who have worked in the area you’re interested in working in. Don’t settle for vaguely positive enthusiasm; find out what it’s really like to work with the professors that are drawing you there. Students who follow this approach, rather than letting the Gourmet Report guide them, will be much better off, and the field will be better off as a whole, as good students connect with the faculty, across dozens of departments, who will best allow them to flourish.

The trouble with the rankings, however, goes beyond a failure to serve their intended purpose. As things are, departments can hardly help but care about these rankings, if nothing else because of the difference they make to grad-student recruitment. But because the rankings are just a reputational survey of faculty strength, the only way to influence them is to make certain sorts of hires: the sort that will register with the 200-some philosophers who are willing to take the time to do the ranking. And given how rare it is, these days, for most departments to be able to make a tenure-track hire, a department is liable to feel considerable pressure to conform with the sorts of values that can be predicted to influence the rankers. At this point, my former self would have thought Good, that’s just the pressure toward scholarly excellence that departments should feel. Here is where perhaps I’ve changed, because it seems to me that departments should care about a lot of things other than promoting the mainstream of research excellence in the field. We should, among other things, hire people who will be committed, passionate teachers and colleagues; we should hire people who will diversify the faculty along any number of dimensions; we should hire people whose research will take philosophy in directions that we think will vitalize the field. But because none of these things register in a crude reputational survey, departments are under significant pressure to ignore them, in favor of the assumed preferences of the arbiters who take the time to fill out these rankings every few years.

Not everyone will be moved by this last criticism, so let me conclude on a less divisive note. Among the countless bad outcomes of the COVID pandemic, many of us experienced an occasional ray of light when we were able to gather, over a remote video connection, to collaborate in ways we never would have thought to propose during happier, easier times. Many have had the thought that we really ought to keep doing that sort of thing. I want to take that commonplace and extend it: we really ought to stop conceiving of the field as divided up among competing programs, and start conceiving of the field as an international, collaborative activity. We now know how to do this. For any topic under the sun, we can bring together, on fairly short notice, the leading people in the field, together with various bright young voices, and we can do philosophy together. In a world such as this, it’s an unproductive, wrongheaded exercise to think of departments as pitted against one another. The goal should be to allow someone who studies and works in one part of the world to be able to study and work with scholars across the globe. The project of ranking one place against another completely distorts the sorts of aspirations we should have for the field. Rankings divide. We should devote our energies to bringing people together.

45 Comments
Filippo Contesi
2 years ago

“[A]n international, collaborative activity”, which includes non-Anglophone countries? https://contesi.wordpress.com/bp/

Jeff
2 years ago

There is much to agree with here, especially the point that an outstanding department can be built from underemployed philosophers. At the same time, what critiques of academic rankings almost always miss is the way that these rankings provide access to social capital, especially for people in under-represented and under-resourced groups. As a faculty member, I have major qualms with US News rankings. As a high school student, these rankings were the only way I found out about an entire class of schools: elite small liberal arts colleges. When deciding between places I had no idea about–Skidmore? Williams? Denison?–having the rankings in hand was useful.

I don’t think I am alone here. Rankings are imperfect and harmful, but they also help students without good mentoring or access to social capital understand something about how an influential group of people see the world.

It is too easy to deceive ourselves that the mentoring we offer at our program is somehow superior to that being offered at places with more social and actual capital. Students need to have access to several data points when making an important life decision like choosing a college or a graduate program. A student should never select a place simply because it is ranked highly. At the same time, I’ve seen too many students offered pretty aggressive sales pitches that make it less likely that they will do more independent research to discover the potential limitations of the school offering the hard sell.

JTD
Reply to  Jeff
2 years ago

Presumably, “students without good mentoring or access to social capital” need rankings because they want to pick a graduate school that will maximize their chance of getting an academic job post-PhD. But then doesn’t Pasnau make good points about why the PGR rankings fail to capture most of the factors relevant to this, and thus can mislead? Indeed, doesn’t the graduate placement data provide a much better ranking if what you want to know is which institution will give you the best chance of getting a job? Insofar as the perceived research strength of the faculty at your PhD-granting institution matters for your job chances, presumably it will be reflected in the job-placement rate that your institution has. Potentially, many “students without good mentoring or access to social capital” are currently making suboptimal decisions because they are basing their decisions primarily on PGR rankings, whereas if they instead used job-placement statistics they would get much better evidence of how the various relevant factors, including the perceived research strength of the faculty at your PhD-granting institution, interact to produce a better or worse chance of a post-PhD academic position.

David Wallace
Reply to  JTD
2 years ago

“Insofar as the perceived research strength of the faculty at your PhD-granting institution matters for your job chances, presumably it will be reflected in the job-placement rate that your institution has.”

The problem is that it’s backward-looking, i.e. 8-10 years out of date. (It’s also too coarse-grained to help with specialty rankings.)

JTD
Reply to  David Wallace
2 years ago

I take the point about the usefulness of specialty rankings. I hope this problem with the job-placement data can be addressed in the future and we can see placement rates for students in each specialty.

On the other point, let’s suppose (as seems plausible) that a higher perceived research-strength of your PhD-granting institution is one factor that will help you get a job. Do we even have good evidence about why this is so? Is it because (1) hiring committees will find a candidate more attractive if they graduated from a department with a higher perceived research-strength; or (2) departments with higher perceived research-strength tend to better develop your research excellence during your time as a PhD student there? (It’s probably a bit of both, but roughly how much of each?) With regard to (1), the PGR is actually 8+ years out of date, because what matters is the perceived research-strength of your PhD institution in 8+ years’ time, when you are on the job market, not its perceived research-strength when you are applying to it. With regard to (2) there also seems to be a time delay: the crucial years for the development of your research excellence come several years into your program, when you start the crucial part of the work on your dissertation.

Given this, your point seems to be that although all of these rankings/data are “out of date” for the needs of the student, the placement data is more out of date than the PGR data. I take the point. But the crucial question is: Of the several important factors concerning your PhD granting institution that will impact your chances of getting a job, which is more useful:

(A) A ranking (PGR) based on one of these factors only that is at least a few years, and possibly 8+ years out of date?

(B) A ranking based on the interaction of all of these factors (job placement rates) that is 14+ years out of date?

It seems to me that (B) is more useful. But clearly the truth of the matter depends on what exactly the other factors are that affect your job chances, how much impact each factor has, and how much variation there is over periods of 2 to 14 years in how different departments rank on these factors.

David Wallace
Reply to  JTD
2 years ago

I don’t want to denigrate the placement data. I think they’re useful too, for the reasons you give – albeit I think it’s necessary to drill down into where people get jobs and not just look at the numbers. I’m not even sure I’d say they’re less useful than the PGR. My answer to your ‘which is more useful, A or B?’ is ‘A and B’!

You’re right, of course, that people move on and that a department’s perceived strength can change during the course of a student’s PhD. That’s mostly only going to matter if it happens fairly early on, though – if, say, someone did their philosophy of physics PhD in Oxford 2013-2017, worked with me formally 2013-2016, and carried on seeing me periodically for the next year, it’s not really going to matter to them (either for intellectual development or on the hiring market) that I left in 2016.

(I don’t think you could really generate specialty rankings on the basis of placement data, unfortunately, except maybe in a few exceptional departments – the absolute numbers are too small so there’d be too much noise.)

JTD
Reply to  David Wallace
2 years ago

My answer to your ‘which is more useful, A or B?’ is ‘A and B’!

I should have phrased the question as: “If you are an undergraduate trying to decide which grad school programs to prioritize, which ranking should you give more weight to, A or B?”

You and Jeff do a good job pointing out how challenging it can be for an undergraduate trying to make this decision. At the moment, if they were to search online for information about which programs to prioritize they would find a lot of stuff pointing them towards the PGR and not much stuff pointing them towards the placement data. That is bad if they should be giving much more weight to the latter.

Regarding the perceived research-strength of an institution changing during a PhD, I don’t think your example generalizes. Often what matters is the general perceived research-strength of the department, rather than that of a specific individual you formally worked with or a narrow specialty of the department. So, suppose that the general perceived research-strength of my department drops significantly a few years before I go on the market. I suspect that present bias means this will affect how hiring committees assess me more than my department’s standing five years earlier, when I was in the middle of my program.

David Wallace
Reply to  JTD
2 years ago

I’m not sure I could break it down into ‘more weight’/’less weight’. I think students should look at both, although I think it’s an illusion to think that placement-based rankings are straightforwardly objective – how you compare placements matters a lot, as do issues like how long placement takes, post-docs, etc. (My preferred ranking, based on network centrality, gives quite different results than APDA’s straightforward percentage-placed ranking.)

I guess ideally you should be applying to places with good PGR (esp. specialty) and good placement. If you’re considering somewhere with good PGR and bad placement, that ought to be something you worry about – maybe it’s because of lots of faculty turnover, but maybe it’s indicative of departmental malaise that doesn’t show up on PGR. If you’re considering somewhere with good placement but bad PGR rank, again ask why – maybe it’s because the department is really good in a way that PGR has failed to capture, maybe it’s changed a lot recently.

(On my example: I can only go on anecdotal experience from hiring committees, but I don’t think people are just looking up the current PGR score when judging applicants’ departments. Insofar as you know enough about a department to notice it on someone’s application, you probably know enough to be aware of how it’s changed recently – and I think who actually supervised you / was on your committee plays a bigger role than you might think. But of course any one person’s experience of hiring is a narrow segment of a much bigger picture.)

David Wallace
2 years ago

1) I think Professor Pasnau significantly underestimates how difficult it is for an undergraduate student to ascertain which departments are very strong in their field. In philosophy of physics, say, the 2017 PGR identifies Oxford, Michigan, NYU, UCI, USC, and UCSD as the strongest schools, with Cambridge, Columbia, LSE, Arizona, Minnesota, Pittsburgh, and UWO in the next tranche. In the majority of cases, these are not well-known schools. A student could just trawl through the websites of the top 100-odd research universities in North America and count faculty with a philosophy of physics AOS, or they could try to work out what the good journals are in philosophy of physics and go through the last few years of them, looking for who’s publishing and where they’re based – but both of these are really significant research tasks, even before you start trying to make allowance for variations of strength between faculty members. A student with good research skills would have to give (I’d guess) a good couple of weeks of time, in the middle of full-time study, to approximate the PGR speciality rankings; even having done so, the list they produce would be at most no better than the PGR list, which they could have checked in ten minutes.

(Students at universities whose faculty are fairly central in their subfield’s research network don’t benefit that much from the PGR lists, because they can get expert advice. But that’s a small minority – and a small minority that, I’d guess, contains disproportionately few students from historically underrepresented groups.)

2) Relatedly: It’s halcyonic to suppose that students won’t use rankings. Absent the PGR, they’re going to use things like U.S. News & World Report rankings, traditional prestige indicators (e.g. Ivy League), and the anecdotal advice of their undergraduate faculty. The issue is whether the PGR is more evidentially reliable than these indicators.

3) As far as I can tell, Professor Pasnau is assuming (i) that provided your adviser clears a certain research-expertise threshold, it doesn’t matter how good their research is after that, and (ii) that threshold is fairly low (“Obviously, one wants teachers and mentors who are talented philosophers and engaged with their areas of research. But these days that’s a bar that’s easily crossed. There are brilliant, energetic philosophers everywhere, up and down and off the Gourmet rankings. You can likely find someone well qualified to supervise your dissertation at the local community college.”). I pretty strongly disagree with this. If I try to list faculty in North America well qualified to supervise a dissertation on, say, effective field theory, I struggle to get far into double figures. And while in less technical fields it’s less flat-out impossible to advise on a subject, there’s still a huge difference between ‘basically competent in the core issues’ and ‘actively familiar with the current state of the field’. (I know the core issues in free will adequately; I could probably teach an undergraduate survey course on it, with a bit of work; I’m ridiculously less well placed to advise a student on it than, say, Kadri Vihvelin.)

4) I agree with Professor Pasnau that research is best done in a collaborative and global spirit, and thrives when you bring together small groups of experts and junior people in focussed discussions (indeed, I’d say that small, well-designed in-person workshops do that much better than Zoom, and were doing so long before the pandemic). But I don’t think that really affects the issue of where you go for graduate study. I get emails all the time from grad students in philosophy of physics asking questions or sharing papers. I’ll always at least reply; I’ll sometimes write a more detailed response; if I have time I might have a quick look at the paper; I might even have a quick Zoom conversation. But I’m not going to give remotely as much time to that student as I would to a Pitt student in philosophy of physics – there just aren’t enough hours in the day. So if you want to work with me in a sustained way (leaving aside whether that’s a good idea!) you need to come to Pitt.

5) It’s definitely helpful to have placement data. But that data is (i) itself open to interpretation and challenge (how do you handle post-docs? time to placement? quality of the institution you’re placed at?), and (ii) backward-looking (placement can take several years, so the placement data you’re looking at when choosing your graduate school is nearly a decade out of date). A lot of the rationale for PGR is that current faculty quality is a better proxy for future placement than past placement, and there is some evidence to support that (https://jonathanweisberg.org/post/page-rank-1/).

6) Of course there are other things you should look at than PGR ranking when choosing a school – but no advocate of PGR has denied this. The issue is whether it’s *relevant* to look at faculty strength, not whether you should *only* look at it. (Also, a lot of the measures Professor Pasnau advocates – talking to students, visiting the place, etc. – aren’t logistically viable at scale; they only work when you’ve already narrowed down your options a lot.)

7) Like it or not, academia (at least in the UK and North America) has a very hierarchical hiring structure, with graduates of some programs doing way better than others at getting positions in research-intensive universities with strong faculty. We can debate the reasons (commonality in what people are looking for at grad admissions and hiring? benefit to your research from strong faculty and strong peers? prestige bias?) but it’s not really in contention that it works that way, and not just in philosophy. That being the case, it’s good to make it as transparent as possible, both because in general it helps students make realistic plans and have a realistic assessment of their career prospects and because otherwise it will be more transparent to students at stronger UG schools where the faculty is more informed, which has equity/diversity implications. (People sometimes say that the first piece of advice you give to an undergrad asking about grad school is ‘don’t.’ My advice is more like ‘if you can get into a very strong school, or a reasonably strong school which is very strong in your speciality, your job prospects are pretty reasonable; if not, they’re much less reliable; take that into account.’)

(I’d add that I think it’s good, on balance, that it works that way: it’s much better for applicants to know up-front (at least to a degree) what their job prospects are likely to be, rather than spend 5-7 years in grad school before knowing.)

8) A conciliatory disclaimer: the PGR isn’t, and hasn’t claimed to be, as useful for a student who does not want a research-oriented career. If your career goal is to teach at a community college in your local state, then (a) good for you, that’s a totally legitimate career plan and not intrinsically any less valuable than a more research-led career; (b) you’ll want to take a somewhat different approach. 

Edward Teach
Reply to  David Wallace
2 years ago

As a recent ex-grad student outside North America who has found a job, I also found the PGR very helpful for getting a picture of which institutions are worth paying more attention to: e.g., which conferences are likely to have higher-quality papers, which faculty are more likely to be writing papers that carry more weight in the literature or are helping advance the field, who could be a potential external dissertation examiner. It also seems like the universities here that were higher ranked got more research grants and had more opportunities for, e.g., research assistance or contribution to a supervisor’s paper. The placement data is very incomplete for other countries, and I feel like I’d be in a much worse position job-market-wise if I’d never managed to find the report while googling grad schools.

grad2468
2 years ago
Leroy Brown
Reply to  grad2468
2 years ago

The fact is that PGR defenders just want to justify the overwhelming whiteness of philosophy. The fact is I’m going to Clark Atlanta’s Humanities Program. White analytics still think race is a metaphysical problem and are using the tools of metaphysics to solve “the problem,” but they are not reading the philosophical attempts to grapple with liberation, struggle, and freedom in the Black intellectual tradition, whose thinkers are indeed philosophers on these issues. Y’all exist in an ahistoric bubble where you do not want to interact with history, African American studies, literature, cultural studies, American studies, but only want a list of departments that do pure philosophy. People like Rufus Perry, Benjamin Elijah Mays, John Edward Wesley Bowen, David Walker, Alain Locke, W.E.B. Du Bois are not read, and producing novel solutions to conceptual problems does not address the existential realities faced by my people. So go to your ivory towers. Go worry about what Brian Looper is worrying about in the latest issue of Mind while whites attack the voting rights of my people in the South. Go worry about essences and modality, or what someone else said about Kant, in the safety of hallowed halls, or about whether or not someone is going to work with Jesse Prinz on emotional cognition and philosophy of mind.

At least some of the Continentals know where struggle is to be found and want to address it, even if CP is also overwhelmingly white. The fact is there is no philosophy program actively doing archival work to situate a new understanding of what African American philosophy is beyond Ferguson and McClendon. Oh wait, that’s doing history of philosophy, not solving problems…I forgot. (That’s tongue in cheek, in case anyone missed it.)

JTD
Reply to  Leroy Brown
2 years ago

White analytics still think race is a metaphysical problem and are using the tools of metaphysics to solve “the problem,” but they are not reading the philosophical attempts to grapple with liberation, struggle, and freedom in the Black intellectual tradition, whose thinkers are indeed philosophers on these issues.

This is not a charitable account of what contemporary “analytic” philosophers working on the metaphysics of race are doing. I don’t think anyone working in this area thinks that all, or even most, of the various problems related to race can be solved simply by clarifying the metaphysics of race. I think they would all agree that there are various ethical and political problems related to race that cannot be solved by clarifying the metaphysics of race.

I also don’t understand your use of the word “white”. Contemporary analytic philosophers working on the metaphysics of race are less demographically “white” than pretty much any other area of philosophy. Do you mean to claim that all of the white philosophers working on this topic are making the mistake you have in mind but none of the black philosophers working on it are making the mistake? That is a remarkable claim that just does not fit with the distribution of various stances and positions taken in the metaphysics of race by scholars from each of these categories. Insofar as there is a problem with contemporary analytic work on the metaphysics of race, it seems to be widespread among scholars who work in this programme regardless of their racial background.

Perhaps your reference to “analytic” work on the metaphysics of race was a distraction from your main point. Maybe your concern is really that contemporary analytic philosophers, especially those working on relevant topics in moral and political philosophy, are not engaging adequately with philosophical work on the “liberation, struggle, and freedom of the Black intellectual tradition”. I certainly agree that this tradition was unjustly ignored in the past. I’m glad to see that many moral and political philosophers today in the analytic tradition are engaging with it. But without seeing more specific arguments about which particular debates taking place now are failing to adequately engage with this tradition, I have trouble assessing the claim that the current level of engagement is insufficient.

ehz
2 years ago

It’s a bit strange that this post calls for no more rankings, but also recommends using the APDA placement data, which is just another way of ranking programs.

Thomas Mulligan
2 years ago

The PGR is billed as a tool to help undergraduates (and others) decide where to pursue graduate study in philosophy. Like all tools, it has its flaws. But it is something, and it is better than nothing, and I am grateful to Leiter and the other philosophers who have sustained it over the years.

The problem with PGR is that it is misused: it is used to evaluate philosophers, a job it was not designed to do and, unsurprisingly, fails at. The advantages (in, e.g., hiring) which accrue to graduates of PGR-top-ranked programs have been well documented in recent years and subjected to moral criticism. When PGR is used in this evaluative way it makes our profession less meritocratic, and therefore less just.

This sort of thing happens all the time. Take two high schoolers, equally meritorious (same “human capital”, in economics lingo), and send one to UMass and the other to Harvard. They will receive roughly the same quality of education and thus leave with roughly the same human capital. Yet they will not go on to earn roughly the same amount of money: The Harvard graduate will earn much more than the UMass graduate and have much different prospects. This is a result of nepotism, the “halo effect”, and the many ways in which we irrationally and immorally judge people not on the basis of their merits, but because of the groups to which they happen to belong. There is a return to the name, Harvard, above and beyond the return to the human capital that Harvard inculcates.

Now, an argument is sometimes given that we cannot help but rely on these proxies to evaluate people. This is a statistical discrimination argument and, like many such arguments, it is not compelling. For one thing, in the context of professional philosophy there is no need to resort to weak proxies like “pedigree” when much stronger ones–say, the publication record–are available. At the very least, “pedigree” is overweighted.

Second, the argument goes thus: Because the mean pedigreed graduate is more meritorious than the mean unpedigreed graduate (plausible), a selector has reason to give extra consideration to an individual applicant who graduated from a pedigreed program. But it is also plausible that there is more variance in merit among unpedigreed graduates (you tend to get different kinds of people in those places; top programs are more homogenous). And it turns out that this difference in variance gives selectors reason to prefer the unpedigreed applicant! That is, differences in group means and variances have opposite implications for hiring behavior. So even if this model were appropriate, until you quantified the distribution of talent you would not have reason to prefer the pedigreed applicant, or the unpedigreed applicant, on grounds of merit.
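Mulligan’s means-versus-variances point can be checked with a quick simulation. The distributions below are made-up illustrative numbers, not data (the labels “pedigreed”/“unpedigreed” and the specific means, variances, and hiring bar are all assumptions): when a selector hires only candidates above a high bar, the lower-mean but higher-variance group can be more likely to clear it.

```python
import numpy as np

# Hypothetical merit distributions (illustrative numbers only):
# pedigreed graduates: higher mean, lower variance;
# unpedigreed graduates: lower mean, higher variance.
rng = np.random.default_rng(0)
n = 1_000_000
pedigreed = rng.normal(loc=1.0, scale=0.5, size=n)
unpedigreed = rng.normal(loc=0.8, scale=1.0, size=n)

# A selector who hires only candidates above a high bar cares about
# tail probability, not the group mean.
bar = 2.0
p_ped = (pedigreed > bar).mean()
p_unped = (unpedigreed > bar).mean()
print(f"P(pedigreed > bar)   = {p_ped:.3f}")
print(f"P(unpedigreed > bar) = {p_unped:.3f}")
# With these parameters the unpedigreed group is several times more
# likely to clear the bar, despite its lower mean.
```

This is only a sketch of the variance argument: with a lower bar, or a smaller variance gap, the ordering flips, which is exactly why the comment says you would need to quantify the distribution of talent before the model licenses either preference.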

David Wallace
Reply to  Thomas Mulligan
2 years ago

One can’t infer from ‘PGR-top-ranked program graduates do better on the hiring market’ to ‘the PGR is being used wrongly’. There are (at least) four reasons people from departments perceived as ‘strong’ might place well:

1) the selection criteria used at grad admission correlate with the selection criteria used in hiring
2) grad students do better philosophical work (on average) in an environment of stronger faculty and / or peers
3) hiring committees pay more attention to people from strong programs as a rational proxy for (1)-(2)
4) hiring committees care about pedigree for its own sake.

(4) is obviously bad and (3) is at least dubious. (It’s at most justifiable as a defeasible heuristic in the initial, oh-my-god-we-have-six-hundred-dossiers, phase of hiring; even that could be challenged.) But if (1) and/or (2) are correct (and fwiw I’m confident they’re correct) then we’d expect disproportionate hiring even if (3) and (4) play no role at all in hiring.

Curtis Franks
Reply to  David Wallace
2 years ago

Yes, that inference is bad for the reason given, but people who have been on hiring committees might know first-hand that 3 and 4 are real, and presumably Thomas Mulligan has that in mind. Consider that there is an indirect version of 4: college deans care about pedigree for its own sake; members of hiring committees know this and therefore prioritize applicants with a certain pedigree because it increases the chance that a job offer will materialize from the deans’ office in cases where the advertised position is contingent on funding approval.

Given this, I would be wary of advising a student to ignore rankings until I knew that hiring institutions had started ignoring them, too.

Jeremy Pober
Reply to  David Wallace
2 years ago

There’s another possible reason, one you hinted at in an earlier post in this thread:

5) Hiring committees put a lot of weight on having letters from famous or Extremely Well-Known (EWK) philosophers, and the PGR roughly tracks the percentage of philosophers on a faculty who are famous/EWK.

Having not been on a hiring committee (it’s actually my first year on the market–wish me luck!!!) I can’t say if this reason is one motivating committees or their members. But it’s consistent with what I do know, including the advice that I was given when choosing a grad program. I’d be curious if you (or other people with hiring committee experience) can verify or falsify the use of 5).

If true, this would be especially salient because evaluating applicants based on letters is (if I have heard correctly) something that happens past the initial stage of “wow, I have to shed hundreds of applicants in a single afternoon.”

Further, if 5) is true, it helps illuminate one of Pasnau’s more salient points that has largely been dropped in the follow-up comments. I’m referring to his claim that, while there may well be a big difference between the top 5 or 10 programs and the rest (and/or between most programs and the last 5-10 ranked, and/or between ranked vs unranked), there’s a huge glut in the middle of programs that are extremely similar in quality (or at least reputation) that ordinal rankings make seem disparate. My sense (limited, admittedly) is that the glut starts in the early teens and ends in the late 30’s for the US program rankings.

This could plausibly be because those programs all have a basically similar percentage of famous/EWK faculty. Those faculty may be in different sub-fields at different programs, but for a prospective grad student who has not yet decided on a subfield, the chances of having a famous/EWK advisor and/or additional committee member are roughly equal at (say) Arizona and Georgetown. Yet they are ranked more than 20 spots apart, a larger numerical difference than between Arizona and NYU, or between Georgetown and unranked programs. This gives a skewed impression of their relative quality (although it may be somewhat ameliorated by looking at the raw scores).

Jon Light
2 years ago

I wasn’t a philosophy major as an undergraduate. And went to a SLAC that really didn’t have the resources (i.e., research faculty) to understand the terrain of Ph.D. education. For these sorts of cases, the rankings are *massively* helpful.

If I were a philosophy major at UC Boulder, by contrast, it’d be a completely different ballgame. For those sorts of cases, the rankings aren’t as useful, because the students are already at a world-class research program, with really knowledgeable faculty.

And even in the UC Boulder case, it’s not like the rankings are meaningless; they’re starting points to think further about stuff. There’s got to be space between “fetishizing” the rankings and “abolishing” them.

My primary objection to the Gourmet Report is just that it only measures faculty reputations (i.e., fame), not quality of education, student experience, job placement, or anything else. By contrast, the Jennings et al. approach goes too far in the opposite direction, undervaluing faculty reputation against all sorts of other variables; it ends up being too pluralistic (imo, respectfully and all).

I wonder what it’d look like if we hybridized those approaches, and maybe had a ranking system that took faculty reputations seriously, without taking them to be all that mattered.

Jen Morton
2 years ago

Would some of the concerns here be mitigated if the people involved with the survey weren’t asked to rate the research quality of the faculty doing Y at program X but rather to rate whether X is a good place to pursue graduate work in Y? Often, this will correlate with faculty research reputation, but in some cases, it might not. For example, I can think right away of a few programs that are supposed to be good at Y but which I would not recommend to students because I know that the faculty who do Y at these programs are not good mentors or because I know the program has some serious climate issues. This would factor into my rating of program X as a place to study Y. Incidentally, this is also the kind of social/cultural capital that students at somewhat elite places have access to–gossip about departmental climate and which faculty members make for good mentors.

David Wallace
Reply to  Jen Morton
2 years ago

I think the current PGR instructions actually suggest that you should include this. (I’m not sure whether that’s a good idea, though.)

Matt L
Reply to  Jen Morton
2 years ago

…if the people involved with the survey weren’t asked to rate the research quality of the faculty doing Y at program X but rather to rate whether X is a good place to pursue graduate work in Y?

I think that the difficulty here is that while (for example) I have a pretty good, if imperfect, idea who is an excellent political or legal philosopher, I have a much less good idea if these people are good advisors, or if the “climate” in the departments is good, or if the universities provide adequate funding or support for students, etc. So, I could contribute to a ranking on the first criteria, but not the second, and I suspect that’s so for most people. This is, I assume, why Brian Leiter repeatedly suggests that prospective students talk candidly with current students in departments, so as to get an impression on these matters.

David Wallace
Reply to  Matt L
2 years ago

This is very much my concern.

Tom Hurka
Reply to  Jen Morton
2 years ago

When I was on the PGR Advisory Board some years ago a proposal like Jen Morton’s came up, i.e. that assessments of a program also consider quality of graduate mentoring, climate, etc. The proposal was voted down (I myself was in favour) on the ground that assessors can’t have sufficiently reliable information about these matters and would too often be relying on gossip. Reasonable doubts have been raised about how well most assessors can judge a whole department’s research strength. There would be even greater doubts, I would think, about their trying to judge, from a distance, these other factors.

Alexandra Bradner
2 years ago

I love every bit of this—just wanted to say.

Kenny Easwaran
2 years ago

It may be true that English, History, and Political Science don’t have any comparable rankings. But Economics very much does:

https://ideas.repec.org/top/

I believe they even have a ranking of *individuals*, not just institutions.

Michel
Reply to  Kenny Easwaran
2 years ago

And, FWIW, polisci is totally beholden to CHYMPS fetishization (California Berkeley, Harvard, Yale, Michigan, Princeton, Stanford).

Christopher Gauker
Reply to  Kenny Easwaran
2 years ago

This economics page is a revelation. Many rankings, all based on publication and citation data. When this comes to philosophy, as it surely will, the PGR will finally be retired.

Christopher Gauker
2 years ago

The PGR fails to filter out shared biases. Departments that belong to universities that are good overall, or that have been strong historically, will do better than other departments whose faculty are equally prominent in their fields. I don’t have any data to show that these shared biases are actually distorting the results. But if you just look at the list of faculty in a department and ask yourself, “How many people in this list have I read something by?”, you should realize that you do not know enough to produce a meaningful overall rating. The answer Leiter always gave to this objection (in the many debates over this in the past) was that the aggregation of results from many observers with only partial knowledge can produce a reliable ranking. And the answer to that is: not so, if the observers really know very little and there are shared biases. The specialty ratings are not subject to this objection, because the raters are better able to give meaningful ratings in their own fields of expertise. A few of you may recognize that I have said this before, and I am sorry to be so boring.

International
2 years ago

As a grad student in the US, I see the issues with PGR-style rankings. But when I was an undergrad at an international institution where analytic philosophy wasn’t (still isn’t) the mainstream, the PGR was my most reliable guide to grad school applications. I don’t know how else I could ever come to know that CUNY, UNC, Arizona, Wisconsin etc. have very strong philosophy departments. (No offense to these universities – it’s just that they are not very well known in my country.)

People in my country who don’t know about the PGR, and those who don’t trust it (for similar reasons offered in this guest post), rely on the QS ranking for philosophy. According to QS, the philosophy department in my undergrad institution ranks higher than MIT, Michigan, CUNY, UNC-Chapel Hill, Brown, and Cornell. And believe me when I say it shouldn’t.

pike
2 years ago

There was a criticism of the PGR years ago in which a case against it was made from the point of view of political economy. I don’t recall the author’s name, but the argument was essentially that the function of the PGR (intended or not) was to let elite graduates break into less prestigious departmental ecosystems in order to secure jobs, as the ratio of elite jobs to elite graduates shrank. Whereas at one time elite circles would hire each other, as the field became more crowded, elite graduates needed to secure employment by turning elsewhere, invading smaller departments that had previously hired from less prestigious state schools (for good reason). The PGR provided purportedly objective data for disrupting these ecosystems, which tended, unsurprisingly, to serve the elite. The analysis always struck me as the most compelling criticism of the PGR, since it is based on material, political realities (unlike most).

Kelby
Reply to  pike
2 years ago

Goodness, I wonder what an avowed Marxist — say, someone who routinely critiques institutions and discourses as mere subterfuge to promote class self-interest — would make of such a criticism?

(I’d never heard this criticism, but it does seem far and away the most compelling one)

David Wallace
Reply to  pike
2 years ago

In what sense is it benefitting ‘elite’ graduates, though? The top five US schools in the 2017 PGR were NYU, Rutgers, Princeton, Michigan, and Pitt. Only Princeton is in the traditional elite, and three are state schools.

(I might be missing something: I’m unsure just how this is supposed to work.)

Kelby
2 years ago

A million thanks to Leiter and all involved. It strikes me that the PGR has a different sort of value now than when it began, given that now we have the internet. Back when the PGR started, we didn’t, at least not in a meaningful sense. Of course the Report started on the internet! But there was much, much less info and even less was findable; when I was applying to grad school in 2002, there was no way I could myself have discovered a fraction of the relevant info that I got through the PGR.

By contrast, a would-be grad student’s epistemic problem now is exactly the opposite: viz. too much info online (vastly too much). So the Report’s function now is more curatorial and whatever-the-adjective-of-‘filter’-is. Evidently people disagree about the value of that. But we shouldn’t forget the old days of ignorance and obscurity, when the PGR did a tremendous public service in just getting the info out there.

Devin
2 years ago

I’m going to echo what many others here have said: while I am no fan of the PGR now, and all rankings are going to be less-than-ideal, the idea that we can do away with rankings ignores the positions that many prospective students are in with regards to their knowledge of contemporary philosophical work and the time and energy they have available for researching programs. Without the PGR I would have had no idea where to even *start* in figuring out where there might be people doing the kind of work I wanted to do (which, incidentally, is not the work I’ve ended up doing).

Sam Duncan
2 years ago

So this is mainly in response to Mr. Wallace’s long defense of the PGR but also in response to some other defenses of it here.
1) Re Mr. Wallace’s 3: this is wildly uncharitable. I’d like to see Mr. Pasnau weigh in on this himself, but what he says makes sense if we attribute a much more plausible claim to him: once your adviser clears a certain bar of competence, additional expertise in the field quickly hits a point of diminishing returns, so that other factors become much more important than additional competence in the field. That’s hardly absurd. In fact, it strikes me as common sense. And perhaps I’m being overly sensitive and uncharitable about this myself, but it seems that Mr. Wallace is pooh-poohing the very idea that one could find someone competent to supervise a dissertation at most community colleges. If so, or if anyone does think that, then that’s just the most ridiculous sort of bias and snobbery. One of my colleagues at my CC wrote the SEP entry on “domination.” Would anyone deny that he’d be competent to supervise a dissertation on that, or indeed on most areas of political philosophy? Or that, say, Gregg Caruso would be competent to supervise a dissertation on free will? Or Richard Brown one on most topics in philosophy of mind?
2) Mr. Wallace’s defense of the PGR seems to come down to the claim that there’s *some evidence* that it’s a better indicator of placement than is past placement. But the study he cites by him and Jonathan Weisberg isn’t even really about that claim. Rather, what it really purports to show is that PGR rank is a better indication of placement at a university with a PhD program in philosophy than is past placement. But the vast majority of jobs in philosophy are not at schools with PhD programs. So even if true this defense of the PGR proves very little. (For what it’s worth, I actually do think that the PGR probably correlates very strongly with one’s chances of getting a job at a school with a PhD program, but given how small that market is, that data is of very limited value for most graduate students.) The mistake here is a pretty common one that a lot of elite academics make, which is to assume that the average job in academia looks like their job. This simply isn’t true. And this leads a lot of philosophers to give advice about how to get a job in philosophy that really amounts to “how to get a job like mine in philosophy”, which is very much counterproductive for most graduate students. Especially since 90+% of graduate students in philosophy simply can’t get a job in philosophy like Mr. Wallace’s. (I also get the strong impression that Jason Brennan’s recent book on how to succeed in philosophy, which I’ve seen a lot of graduate students quoting like scripture, makes the same mistake. Correct me if I’m wrong about it making that assumption. I really hope I am, because if I’m not then I do pity those poor graduate students, with the rude awakenings most of them are in for.)
3) Mr. Wallace and a number of other commentators here mention as a virtue of the PGR that it makes things easier for graduate students. I’m not sure that that’s a virtue. It seems to me that the decision about whether to go to graduate school and, if so, where, ought to be a very hard one. Potential graduate students ought to have to think about this very hard and for a long time. They also ought to ask themselves what exactly they want out of graduate school. What kind of career in philosophy do they want? What could they settle for? Is learning about philosophy at a high level itself valuable enough to justify graduate school even if they never get an academic job? What will their lives look like if they fail? Making a decision based on the PGR bypasses all this. To put it in an idiom some people like, making a decision on the basis of the PGR seems like approaching a decision problem by focusing on only the odds of one outcome–the chances of getting an elite job–while ignoring all the other costs and benefits and their probabilities. The APDA is at the very least on the right track here in measuring other relevant things, like graduate school experience and placement full stop rather than a crude proxy for one specific type of placement. I’d like to see it go further and add things like more data on non-academic employment.
4) It simply won’t do to say that academic hiring is very hierarchical and so it’s good to have information about that hierarchy from the PGR. That overlooks the ways the PGR contributes to and reinforces the current pecking order. This sort of defense of the PGR is like defending sites like “Great Schools” against charges of racism by saying “Well, school quality does track race, so let’s let people know about it.” The thing is that Great Schools sets up a dynamic where it’s practically impossible for “bad schools” to ever get better, since well-off people will never send their kids there due to the rankings, and “great” schools do well because those same parents fight to send their children to those schools. This really brings me to what I think is the biggest weakness in Mr. Pasnau’s otherwise excellent post: he doesn’t see how deeply ranking is embedded in the U.S. educational system or how pernicious it is. The PGR is not unique. It’s really just a sort of poor man’s U.S. News ranking, and no doubt if U.S. News ranked philosophy programs we’d never have heard of it. But I digress. At any rate, like Great Schools, the U.S. News rankings themselves have an incredible impact on reinforcing the hierarchy between different schools and the systems of privilege that come with it. For instance, the U.S. News rankings give elite schools like UVA, UMichigan, UNC, or Berkeley a huge amount of weight relative to other schools in their systems, which they can and do use to their advantage. I remember that in Virginia it seemed like people were going to march on Richmond with pitchforks and torches when UVA fell below Berkeley in the U.S. News rankings due to budget cuts. No one pays similar attention to the community college’s budget cuts leading to us being forced to raise tuition or even lay off talented faculty. Granted, the PGR doesn’t do the damage the U.S. News rankings do, but it springs from similar cultural and economic forces and does its own damage.
5) But I strongly suspect the fact the PGR gives the privileged in philosophy more power and renders those lower in the caste system even less powerful is for most of its defenders a feature and not a bug as they say. At any rate, any half way clever undergraduate could give a devastating Marxist analysis of the whole thing. It’s quite damning that philosophers can’t or won’t. In the same vein I’m sure any Freudian worth their salt would have a field day with some philosophers’ obsession with having a venue to measure their prestige and compare it with the prestige of other philosophers. Or their insistence that it’s vitally important that there’s a venue where all potential graduate students can find out about the size of their prestige. If you claim to have any sympathy with the hermeneutics of suspicion or ideology critique but don’t see the glaring problems with the PGR, then honestly I’m at a loss for words.

JDRox
Reply to  Sam Duncan
2 years ago

1) As far as I can tell, Wallace is right and Pasnau and Duncan are clearly wrong–it would be a minor miracle if the local community college had someone there who was well-qualified to supervise you *on the topic you want to write on*. (The claim that your local community college contains someone who could do a good job supervising a philosophy dissertation on something or other is much more plausible.) At my surprisingly large “local community college”, there is one person who could do a good job supervising a dissertation on medical ethics, and that’s it. (There are two other faculty members with no publications, and one with a few from more than a decade ago on a historical figure’s views about aesthetics.) According to some quick internet research, the closest community college to you, Sam, is Pellissippi State. There’s not an easily-findable list of philosophers there, but perhaps you could tell us what topics one could find a good supervisor for there. I predict that it’ll be a small minority of topics, but I could be proven wrong!
3) This seems like an odd argument. Like, I guess I think “it should be hard” to remove someone’s heart, but still, some medical device that made removing hearts (for transplantation or whatever) much easier would still be a good thing. The PGR certainly doesn’t make picking a grad school easy, just *easier*. And that seems like a good thing.

Derek Bowman
Reply to  JDRox
2 years ago

Dude. Sam already told us what one of his colleagues *at the community college he works at* could supervise a dissertation on.

JDRox
Reply to  Derek Bowman
2 years ago

Hi Derek. I can’t tell exactly what you’re agitated about. On the substantive point, having a colleague that could supervise a dissertation on domination and some related parts of political philosophy is extraordinarily weak evidence that there’s someone at Tidewater that could do a decent job supervising a dissertation on most philosophical topics. About the fact that I didn’t realize Sam taught at Tidewater with McCammon and mistook him for a different Sam Duncan, well, mea culpa, I should have figured that out by following the SEP link. But I don’t think that was entailed by what Sam said: I, at least, feel comfortable talking about my “colleagues” at nearby schools, and that’s how I interpreted Sam’s remark.

David Wallace
Reply to  Sam Duncan
2 years ago

In brief:

1) I don’t think it fundamentally changes my argument if you change the ‘threshold model’ I discuss to a ‘sharply diminishing returns’ model. The basic question is: are there a relatively small number of people who are far better placed than others to supervise a dissertation in their area of expertise, or are qualified supervisors so widespread that you can be supervised pretty effectively more or less anywhere for more or less any project? I argued that the first is true. (That’s perfectly compatible with some of the best supervisors being at community colleges: my objection was to the claim that “You can likely find someone well qualified to supervise your dissertation at the local community college“, not to “It’s possible that someone well qualified to supervise your dissertation is at some community college in North America”.)
2) I think I was explicit (item (8) in my list) that the PGR is not as well suited for people looking for certain teaching-intensive jobs. If you agree that it’s a good tool for people looking for jobs in universities with PhD programs, then the difference between us is relatively minor, especially since I agree that other indicators, including placement data, are relevant for graduates.
3) Yes, deciding to go to grad school is a hard decision, but there’s no point making it gratuitously hard – especially since the information encoded in the PGR is much more available to students at high-status institutions.
4) I’m fairly sanguine about the overall structure of hiring in US HE (it would be a whole other conversation to say why!) But even if I weren’t, I think it’s naive to suppose that the PGR contributes materially to it. Other, externally-constructed, rankings would be used if PGR weren’t available; indeed, they are used in other disciplines, which in my experience are not obviously less hierarchical in their hiring practice.
(There is a defensible view that says that unjust educational systems should be opposed even if the students in your care are collateral damage in that opposition. I respect that view but I don’t share it.)
5) I am reasonably sure I have never in my life claimed to have any sympathy with the hermeneutics of suspicion or ideology critique.

Devin
Reply to  Sam Duncan
2 years ago

On (3), the issue as I see it isn’t that without something like the PGR figuring out where to go to grad school is hard: it’s that being able to make an informed decision is so much harder for some students (students in small and/or idiosyncratic programs, international students, etc.) than others, for reasons that have nothing to do with their ability to succeed in the field. (I would go so far as to say that for many such students it’s effectively impossible, or at least that the only practically available starting points would be highly misleading.)
(To be clear, I have no desire to defend the PGR as such)

Devin
Reply to  Devin
2 years ago

I agree that basing one’s decisions on the PGR is clearly a bad strategy, for many reasons, but it’s not my sense that that’s what anyone in the comments is advocating for.

Prospective grad student
2 years ago

I’m a prospective grad student. Here’s what would help me:

  • Specialty rankings based on research area, narrowly individuated. This will help me decide where to apply.
  • Posting students’ dissertations on department websites, provided students are willing. I appreciate that MIT and USC do this for example. It lets me see what kind of research the students are doing and helps me decide whether I would fit in.
  • Transparent placement data on department websites. This means not just posting the first job graduates get, as some departments do, but also what their career looks like after that. This will help me decide where to go once I (hopefully) have offers. I would urge departments to make keeping these up-to-date a priority.
Daniel Weltman
Reply to  Prospective grad student
2 years ago

Many public universities (and some private ones too, I think) post everyone’s dissertation online, or everyone who doesn’t embargo it (and I don’t think philosophers typically embargo their dissertations). So even if it’s not on the department website, you can often find it somewhere else. Here’s mine, for instance! (It has a mistake on page vi, so I would stop reading before you get that far, but it serves to illustrate the point.)

I do think one needs to be cautious about overgeneralizing from this sort of information. What you want to know is whether the place you go and the advisor(s) you pick will be accepting of the kind of philosophy you want to do. Merely looking at other dissertations people have written doesn’t tell you that, because some places have jerks who will not like you even if you are similar to other people that they like, and some places have nice people who will like you even if you are not similar to other people that they like. You can easily cross off excellent places from your “to apply” list by having overly stringent criteria for what a perfect graduate experience looks like when you’re not really in a position to have the relevant data.

Your best bet along these lines, I think, is to write a writing sample that reflects the kind of research you want to do, and include in your statement of purpose any other relevant information about the kind of research you want to do. Schools where you would not fit in will likely reject you, thus saving you the trouble of figuring out if they are a good fit, and schools where you would likely fit in will not reject you (or will reject you for other reasons, but whatever). Once you get accepted at various places, it’s probably needless extra work to look at previous dissertations, since at that point you can just talk directly to current and former students and to potential advisors to see if you’d fit in.

Conrad
2 years ago

I too agree that the perfect should be the enemy of the good.