A Reputational Survey of Philosophy Programs Plotted Against Program Placement Data


To what extent does getting one’s PhD in philosophy from a program that does well in a reputational survey increase one’s chances of finding a permanent academic position?

That is the question that Spencer Hey (Research Scientist at Brigham and Women’s Hospital and Co-Director of Research Ethics at the Center for Bioethics at Harvard Medical School) recently took up, and he has now presented the results of his inquiry in an interactive graph.

He plotted along one axis the rankings PhD programs received in the latest Philosophical Gourmet Report (PGR), a controversial reputational survey of the faculty at the programs, and along the other axis the programs’ rates of placing their PhDs in permanent jobs, drawn from Academic Placement Data and Analysis (APDA).

Here’s a snapshot of the graph and its key:

from Aero Data Lab (Spencer Hey)

On the interactive version of the graph, mousing over the nodes will reveal the names of the plotted programs as well as the relevant data.

Professor Hey observes:

Roughly speaking, for every 1 point increase in a program’s mean PGR score, there is a 10% increase in its placement rate for recent graduates. However, that trend is really only a small part of the story here…

For example, the 60-40% placement range is populated by programs from across the PGR-score spectrum. This shows that getting into a top-scoring program is by no means a slam dunk for a future job in academia. It also shows that many lower-scoring programs do just as well as higher-scoring programs at placing their graduates—and some even better. UC Riverside, Irvine, and University of Virginia really stand out as “overperforming” based on their PGR score. Notably, NYU, which has been the top-ranked PGR program for several years, is very middle-of-the-pack in terms of placement.

In general, I think the programs falling into the upper left and lower right quadrants raise the most interesting questions. What are some of these lower-scoring programs doing (or what areas do they specialize in) that helps them to place their graduates so well? And conversely: What aren’t some of these top-scoring programs doing? Obviously, getting your graduates jobs in academia isn’t the only measure of a program, but the PGR survey is ostensibly supposed to be tracking the ability of the program to train successful academic philosophers. So it seems to me that some of the “underperformers” here should raise an eyebrow—and students applying to graduate school would do well to probe the APDA data more closely (and ask their advisors lots of questions) before placing too much stock in a program’s PGR rank.

More here.

For an earlier look at the relation between PGR rank and placement, see this post.
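The headline trend Hey describes (roughly one PGR point per ten percentage points of placement) is just an ordinary least-squares slope. Here is a minimal sketch of that kind of fit, using made-up (PGR score, placement rate) pairs rather than the actual APDA/PGR data:

```python
# Toy illustration of the "1 PGR point ~ 10 percentage points" trend.
# The numbers below are invented for illustration, not the real data.

def fit_line(xs, ys):
    """Ordinary least-squares slope and intercept for y = a*x + b."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    a = cov / var
    b = mean_y - a * mean_x
    return a, b

# Hypothetical (mean PGR score, permanent-placement rate in %) pairs:
pgr = [2.0, 2.5, 3.0, 3.5, 4.0, 4.5]
placement = [35, 42, 44, 52, 55, 61]

slope, intercept = fit_line(pgr, placement)
print(round(slope, 1))  # prints 10.1: each PGR point ~10 percentage points
```

Even on data built to fit the trend, note how much vertical scatter a slope like this can hide, which is Hey’s larger point.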


Related: “The PGR’s Technical Problems”; “The Specialty Rankings”; “What Do PGR Evaluators Need To Know?”; “Broader Effects of the PGR”; “Leiter to Step Down from PGR / The New Consensus”

30 Comments
Siddharth Muthukrishnan
2 years ago

This study doesn’t capture the quality of the university at which the graduates are placed. Sure, Virginia and Irvine have a higher placement percentage than NYU, but it’s important also to look at the universities at which NYU places its graduates. Many of them are top research universities: e.g., Stanford, Princeton, UCLA, Penn. This is not true for Virginia and Irvine. (Irvine LPS does place at top universities, but then again, Irvine LPS is rated highly in the PGR for its specialty, so LPS’s placement data doesn’t quite work to undermine the PGR.)

Not all permanent positions are created equal. Graduate students might be rational in choosing NYU over (say) Virginia even if the probability of them getting a permanent job out of NYU is lower, because the expected value of going to NYU might be higher—i.e., because the sort of job you might get if you go to NYU might be more valuable.

David Wallace
Reply to  Siddharth Muthukrishnan
2 years ago

A year ago I tried using Google’s PageRank algorithm to extract a measure of departmental quality (on one axis) from placement graph centrality as measured by APDA data. Details in the comments on this Leiter Reports thread: https://leiterreports.typepad.com/blog/2017/10/placement-in-phd-granting-program.html

It correlates pretty well (0.75) with PGR, for what it’s worth.
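For readers curious what this approach looks like, here is a toy sketch of the idea Wallace describes (not his actual code): treat each placement as a directed edge from the hiring department to the degree-granting department, so that hiring a program’s PhD counts as an endorsement of that program, and run PageRank by power iteration. The departments and placements below are invented.

```python
# Toy PageRank over a placement graph. Edges run from the hiring
# department to the PhD-granting department, so being hired from
# acts as an "endorsement" of the granting program.

def pagerank(edges, damping=0.85, iters=100):
    """Power-iteration PageRank over a directed edge list [(src, dst), ...]."""
    nodes = sorted({n for e in edges for n in e})
    out = {n: [d for s, d in edges if s == n] for n in nodes}
    rank = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iters):
        new = {n: (1 - damping) / len(nodes) for n in nodes}
        for n in nodes:
            targets = out[n] or nodes  # dangling node: spread mass evenly
            share = damping * rank[n] / len(targets)
            for t in targets:
                new[t] += share
        rank = new
    return rank

# Hypothetical placements: (hiring dept, PhD-granting dept)
placements = [("B", "A"), ("C", "A"), ("C", "B"), ("A", "B"), ("D", "A")]
ranks = pagerank(placements)
best = max(ranks, key=ranks.get)  # "A": the program hired from the most
```

The appeal of the design is that a placement at a department that itself places well counts for more than a placement at a department nobody hires from, which is exactly the recursion PageRank captures.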

David Wallace
Reply to  Siddharth Muthukrishnan
2 years ago

Oh, and NYU is at the top of that ranking; the top 10 departments, in order, are NYU, Columbia, Princeton, Yale, Leuven, UC Berkeley, Oxford, Rutgers, Pittsburgh (HPS), Toronto.

Avalonian
Reply to  Siddharth Muthukrishnan
2 years ago

Since there might be prospective grads reading this, there is a reply to Siddharth that really ought to be on this thread. Bluntly put, there’s a heck of a lot hiding in that last “might”.

Yes, a TT job at Stanford is easier than one at some small teaching university: maybe you start at $25k more, you teach one less course per semester, you get a research budget, TAs, more leave, etc. But these differences are comparatively minor when compared to the difference between the job at a small university and no job at all. The no-job-at-all scenario, by comparison, is not only much more common than the job-at-Stanford scenario, it is extremely bad, socially, psychologically, financially. If you gave me time, right now, I could come up with a list of 20 very smart recent graduates of Ivy league and other shiny programs who would walk over a bed of hot coals for one of those “lesser” jobs this year. But they won’t get them, in part because those ‘lesser’ schools think they are a flight risk, and in part because the schools they are graduating from have terrible placement systems.

Checking the PGR is a necessary condition, but not even close to a sufficient one, for being reliably informed about where you should go to grad school.

Sam Duncan
Reply to  Avalonian
2 years ago

I’d add that it’s not clear that research-focused jobs are better than teaching-focused ones. Leave Stanford aside; most jobs at R1s aren’t that much better paid than teaching-focused jobs to start with. And when you consider the publication requirements of most R1 jobs, I’m not sure the workloads are lighter. I work at a CC and I sure as heck don’t work the amount that most academics claim to, even with the grading. (I’m not sure I buy that 60-hours-a-week figure, but if it’s accurate… hoo boy is my job good.) I guess one could argue that writing a lot of papers that will mostly go entirely unread is somehow more meaningful than teaching or otherwise more objectively valuable. But that doesn’t seem terribly plausible to me.

Kenny Easwaran
Reply to  Avalonian
2 years ago

I think it’s wrong to interpret “no academic placement” as “no job at all”. “No job at all” would be an awful outcome. But “no academic placement” could for some people mean “working as a local bartender” and for other people could mean “working as an ontologist at a software company”. I don’t know exactly how professionally satisfying these particular career paths are (or any of the other alternatives to academic work), but surely some of them are just as academically, intellectually, and financially satisfying as academic work, while others aren’t.

Jonathan Weisberg
Reply to  Siddharth Muthukrishnan
2 years ago

The analysis using PageRank is very cool, thanks David!
I wonder if there’s a small glitch though, which the inclusion of Leuven tips us off to. If you look at their placements in the data set you used, 21 of them (two thirds) are at Leuven.
In fact Leuven is one of two striking outliers in this regard. The other being Oxford, which has 24 placements at Oxford—the largest number of placements at one place in the data set.
I think this ends up affecting Leuven much more significantly than Oxford, though. When I removed all “self-placements” (which you don’t have to do, of course; some self-placements are perfectly legitimate for the purposes of this exercise), I got the following as the top 10 PageRanked programs: NYU, Columbia, Princeton, Yale, UC Berkeley, Pitt (HPS), Rutgers, Harvard, Toronto, and Oxford.
That’s a very similar ranking, the main difference being that Leuven falls out of the top 10, down to 32. (Harvard also comes in to the top 10, and Oxford places a few spots lower.)
One caveat: this was my first encounter with the PageRank algorithm and I may not have implemented it correctly. My implementation does come very close to reproducing your ranking, David, when “self-placements” are kept in. But I didn’t get exactly the same result, and it’s possible that indicates a flaw in my code.
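The self-placement filter Weisberg describes amounts to dropping self-loops from the edge list before computing any centrality. A hypothetical sketch (program names and counts invented for illustration):

```python
# Drop "self-placements" (a program hiring its own PhDs) before
# running a centrality measure. Data here is invented.

placements = [
    ("Leuven", "Leuven"), ("Leuven", "Leuven"), ("Oxford", "Oxford"),
    ("NYU", "Princeton"), ("Oxford", "NYU"),
]

def drop_self_placements(edges):
    """Keep only placements where hiring and granting programs differ."""
    return [(src, dst) for src, dst in edges if src != dst]

filtered = drop_self_placements(placements)
# Self-loops let a node endorse itself, inflating its rank in measures
# like PageRank; that is the mechanism behind Leuven's drop from the
# top 10 to 32nd once self-placements were removed.
```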

David Wallace
Reply to  Jonathan Weisberg
2 years ago

Or my code, tbh – I did this ages ago and haven’t rechecked it for this exercise.

Those Leuven self-placements are odd, and I don’t understand the university well enough to comment on it. Oxford genuinely does hire a fair number of its own alumni, especially when post-docs are figured in.

Jonathan Weisberg
Reply to  David Wallace
2 years ago

Same here: I have a much better sense of what self-placements mean for Oxford than for Leuven.

I think it’d be interesting to see how PageRank at one time predicts PageRank 5-10 years later. It might be as good a predictor as PGR score. Indeed, it’d be interesting to see a model that uses both—I wonder how redundant they are.

If I can get the data for this, I may be able to run these numbers, now that I have most of the code already in place.

David Wallace
Reply to  Jonathan Weisberg
2 years ago

The other thing to have a look at (if you’re interested) is how well more recent versions of the PGR correlate with the PageRank data. I intentionally looked at the 2008-9 PGR, which is the one that would have been seen by students who got jobs in the period covered by the APDA data; as it happens, if those students wanted a high-PageRank department, looking at that PGR would have been a good method. You’d predict that the correlations would be poorer if you looked at the 2017 PGR, because that reflects faculty changes that are too recent to have helped the APDA cohort.

Jonathan Weisberg
Reply to  David Wallace
2 years ago

Yes, thanks, I’ll try to look at that too.

Iowlygrad
Reply to  David Wallace
2 years ago

Would be interesting to see trends in different departments/areas of interest as well. Leiter will sometimes make some predictions on his blog, although maybe this will make the market too much like a horse race.

non-leiterific grad student
2 years ago

If Leiter is right about the predictive value of the PGR—that PGR scores predict placement *years down the road*—then we should be looking at 2012-16 placement against 2011 (or perhaps 2009) PGR scores rather than 2017 PGR scores. If memory serves me right, the folks behind the APDA discussed this relationship, though a snazzy graph would help.

EHZ
Reply to  non-leiterific grad student
2 years ago

Also, for “years down the road” you might want to look at placement after more than just 1-5 years since graduation.

Caligula's Goat
Reply to  EHZ
2 years ago

I sincerely hope not, though I fear that you might be right about this. What I’ve learned about philosophy (now 6+ years post-PhD) is that we need as much information as possible about programs and their placement.

Leiter’s survey is, for me and my purposes, mostly garbage unless it can be connected with things like placement, but put that issue aside. If EHZ is right (and this is not intended as an attack on you, EHZ), then I think it’s absolutely pivotal that graduate students understand that if they want to go to graduate school to get an entry-level tenure-stream job at a research-oriented institution that people like Leiter would like, they are essentially locking themselves away from financial security (and housing stability) for 10-15 years (grad school + these postdocs).

It would also be good for those of us who just want to do research and teaching somewhere with reasonable pay and job security. If it turns out that some programs are more prestigious but higher risk (for what *some* might view as a higher reward) while other programs are safer bets for employment but might land you a tenure-track job that Leiter’s buds might pooh-pooh, then I think that’s worth knowing too. Let a thousand flowers bloom and whatnot, so long as they know what kind of fields they’re blooming in.

Anna
2 years ago

If I recall correctly, this might be misleading because the top programs often place their students in prestigious post-docs for 2-5 years after graduation, and they THEN get a permanent academic job. If the placement data is only 3 years out, it wouldn’t account for this.

As for what the unranked programs that place well are doing… that should be obvious: they are training great teachers, with lots of teaching experience, who then get teaching jobs. This is not to say many of those great teachers are not also solid researchers, but that is likely not what explains their success.

Caligula's Goat
Reply to  Anna
2 years ago

I can’t think of any tenure-stream job at any place that isn’t a community college where research isn’t *at least* as heavily weighted as teaching quality. I’ve been on so many job search committees that I can honestly say that the people we reject are far better researchers and teachers than I ever was when I began applying (and I’m only 6 years removed from my PhD). Data point: I work at a well-regarded but still very teaching-focused SLAC. On paper, research matters here at least as much as teaching, and in practice it matters more (nobody gets tenure for good teaching unless they’re also a good researcher; great research and mediocre teaching can get tenure).

Daniel Kaufman
Reply to  Caligula's Goat
2 years ago

I can’t think of any tenure-stream job at any place that isn’t a community college where research isn’t *at least* as heavily weighted as teaching quality.
= = =
It isn’t at ours, and we are the second largest public university in the state.

Caligula's Goat
Reply to  Daniel Kaufman
2 years ago

Dan, can one get tenure at your institution without at least 3-4 peer-reviewed publications? I’m in California and am familiar with the tenure standards at both the UC and CSU systems, along with a smattering of private universities around. CSU standards (which sound like they might be similar, institutionally, to your own) might be lower than UC but still require 3-4 publications to be competitive for tenure application. I’m willing to admit I’m wrong about this as a general claim, though I think it bolsters my main point above (that we need as much information as possible about programs and placement so that graduate students can make the best decisions possible about where to go, given the kinds of jobs they want to be best prepared for).

Daniel Kaufman
Reply to  Caligula's Goat
2 years ago

We certainly do require research. But it is a less important variable in tenure/promotion decisions — as well as in hiring decisions — than teaching.

Anna
Reply to  Caligula's Goat
2 years ago

I’m surprised anyone would be unaware that many, many teaching jobs take teaching more seriously than research. And I am not talking community colleges. All you have to do is look at the people who get hired (yes, men and women) against the people on the market, and this should be obvious. This isn’t to say they don’t care about research, but teaching clearly comes first.

Andrei
2 years ago

Is there similar data available for other fields? I’m thinking Cognitive Science.

Daniel
2 years ago

Very cool! Let me just note that this graph slightly underrates MIT’s placement, in part because one dissertation listed on the website isn’t a PhD thesis.

Here is a correct, anonymous list of MIT graduates who defended in the relevant period (2012-16):

9 Permanent (TT or equivalent) jobs: Mt Holyoke, Yale, MIT, Sydney, Cambridge, UCSD, UMass Amherst, UCL, Vassar
6 Temporary: Mt Holyoke, Oxford (All Souls), Cambridge JRF, Northeastern, Hebrew University, URI
4 Out of Academia: software, teaching math, Yale Law, finance

That’s 9/19 = 47% with permanent jobs, rather than 8/20 = 40%.

Tomi
Reply to  Daniel
2 years ago

Speaking of the MIT placement record that Daniel mentioned: some of the temporary positions there are fantastic. Case in point, an All Souls postdoc (or at least, the one in question) lasts for five years, and comes with a huge degree of freedom and plenty of perks. It’s an incredible job to get, significantly better than most permanent positions in my opinion, but on the graph above it’s just another “temporary position”, and so doesn’t count for anything. Similarly, a JRF at Cambridge is arguably a lot better than many permanent positions, because your prospects are pretty good going forward – at the risk of infringing on the anonymity of the list a little, the person who got that JRF now has a permanent job at Oxford. And there are plenty of other postdoctoral positions which seem to me better than a lot of permanent jobs. Top PGR departments tend to do pretty well at getting these.

A related point is that the graph systematically sells UK institutions short – it’s significantly less normal to go straight into a permanent job in the UK, at least in part because PhDs are quite a bit shorter. But that doesn’t mean that placement in UK departments is worse, all else (PGR) equal, as the graph suggests. As a prospective grad student, I’d be much more interested in where PhDs end up after 5-10 years than whether they immediately get a permanent position. And I certainly wouldn’t regard a program that places around 40 percent of its PhDs in permanent positions and almost all of the rest in good postdoc positions as having a bad placement record.

RJM
2 years ago

It does strike me that a lot of heat and debate could be avoided if we stopped framing all these things as “rankings” and just made use of the copious amounts of data they provide us with. The data, and the presentation of it, can be very useful even if the “ranking” one might derive from it needs to be treated with a pinch of salt. (I would say that pretty much any ranking of these sorts of things should be treated with a large dose of salt, but sadly many of my colleagues seem to disagree; philosophers are no less petty than any other group.) If we do it this way, we don’t need to engage in endless debates about how to “measure” placement quality, as if there were any way of doing it that isn’t at least in part subjective, and we can instead spend our time correcting mistakes and omissions in the data itself.

Jon Light
2 years ago

I’m worried this thread is semi-conflating “best program” with “best placement”, which strike me as separable. We don’t care about PhD programs *only* because of their propensity to place, but also because of how well they train people—which might not track placement (e.g., because hiring areas might be idiosyncratic, skewed toward applied ethics, comparatively overrepresented at small, Christian colleges, etc.). In other words, it’s a meaningful question to ask where I’d want to go if I just wanted to pursue philosophy because I loved it, not because I’m trying to hack hiring prospects. To be sure, hiring matters to most of us—and should matter to those undertaking a PhD—but it’s still only part of the whole enterprise.

Sam Duncan
Reply to  Jon Light
2 years ago

Well, but one also needs to be careful to distinguish between best program in the sense of providing the best education and best program in the sense of having the most star power on the faculty. The Leiter rankings measure the latter and claim that it correlates strongly with the former. There are all sorts of reasons to think that’s not true. For one thing, it’s quite possible that one’s reputation does not track the value of one’s work. We can think of a lot of cases where it took some time for a philosopher’s work to be recognized as valuable; Hume and Kant both provide good examples. And on the other side of the coin, work that was once taken to be valuable is often later recognized as pretty close to worthless. Not too long ago any reputational survey would no doubt have told us that much of the really great work was being done in ordinary language philosophy, and to say the least, most of that work has not aged well at all. More importantly, one can write good work while being an incredibly poor teacher and mentor. I remember a professor from my MA program, whose work I quite respect, who was pretty notable in this respect. His method of teaching was to read typed notes for about 15-20 minutes at a stretch, look up nervously and ask, “Any questions?”, and then go back to reading as quickly as possible (and this was in a graduate class). I can also think of several other figures whose work I respect who would be terrible mentors in the sense that they expect all advisees to be disciples and brook little to no dissent from their own party line. I don’t think working under them would be very good for really developing as a philosopher or doing good philosophy.

Russ Shafer-Landau
2 years ago

I’m Placement Director at UW-Madison and was puzzled by the discrepancy between our results in this survey and my impression of our record over the years. So this prompted me to give a more careful look at our record. I think a ten-year snapshot might be more representative than any given four-year stretch. I also think it important to omit from these figures the number of those PhDs who decided for one reason or another not to pursue a career in philosophy. Over the past decade, we have graduated 62 PhDs. 14 of them did not try their hand at the philosophy job market. Of the 48 who did, 33 have landed permanent (tenure-track or equivalent) positions. That’s 69%. Further, of the 15 who have not (yet) landed such positions, 5 (10%) currently hold excellent post-docs (at Pittsburgh, ANU, the NIH, Australian Catholic U, and Penn State). 3 (6%) have left the profession after not succeeding on the market. And the 7 who remain either have visiting assistant professorships or long-term but non-permanent jobs teaching philosophy at colleges or universities.

Spencer Hey
2 years ago

Thanks again to Justin for sharing this work, and to everyone here for taking the time to think about this and generate an interesting discussion. In case it is helpful, I thought I might just say a few more things about how I think about this analysis:

(1) I share many of the concerns expressed about biases/limitations in the data, either in the “predictor” (PGR score) or the “outcome” (APDA placement). Part of the point (in my view) of graphing data is to try and surface (or discover) those biases or limitations. And so I completely agree with the assertion that (a) this graph should be taken with a grain of salt; and (b) having additional or different datasets would be helpful.

(2) However, you don’t always have all the datasets you might like to have. So I used data that was readily available. And I think the hypothesis here is still interesting and a reasonable place to start: “Is a program’s most recent PGR score associated with the permanent placement rate for its recent graduates?” I think there are some intuitive (albeit incredibly complex) social mechanisms that might explain such an association (if it exists), so to my mind there is face validity to the test. This also reflects how many students (in my experience) think about the PGR. But I appreciate that there are many other hypotheses we should consider; and other data that we would need to test them.

(3) What I hope this work may contribute to (similar to what I think the APDA is up to) is highlighting the importance and benefits of aggregating and sharing more information about professional placement, or whatever other measures/outcomes we think are important for students to think about when they are considering graduate school. I am also happy to update the graph, or offer a suite of visualizations, as we get more/better data, subject, of course, to limitations on my time. But if others have and are willing to share the data, I would love to be a part of helping to explore, present, and communicate it.

Georgi
2 years ago

This graph plots only PhD programmes in the Gourmet Report, so it doesn’t include the University of Tennessee, which has a high placement score:

http://dailynous.com/2016/04/15/philosophy-placement-data-and-analysis-an-update/

Perhaps Baylor, Villanova, and Tennessee are all outliers: relatively high academic placement rates without a Gourmet ranking.