Philosophy Graduate Programs: Does “Reputation” Track Placement Rates? (guest post)


The following is a guest post* by Carolyn Dicey Jennings (UC Merced), Pablo Contreras Kallens (UC Merced), and Justin Vlasits (Tübingen), in which they look at the extent to which data collected about graduate programs in philosophy by the Academic Placement Data and Analysis project (APDA) correlate with the reputational rankings of the Philosophical Gourmet Report (PGR). It was initially published at the Placement Data site.

The Philosophical Gourmet Report and Placement
by Carolyn Dicey Jennings, Pablo Contreras Kallens, and Justin Vlasits

The Academic Placement Data and Analysis project (henceforth, APDA) has yielded the most complete information on placement for PhD graduates in philosophy to date, with a focus on graduates between 2012 and 2016. All the data are publicly available on the main page of the site, http://placementdata.com/, and many graphics have been posted to a companion site, http://philosophydata.org. Prospective graduate students will likely wonder how this information compares to earlier metrics, such as the Philosophical Gourmet Report (henceforth, PGR), which in the past has been used by students to compare PhD programs in philosophy. In this post we look at the 2006-2008 PGR’s overall ratings for graduate programs in philosophy, and compare these ratings to APDA’s placement rates for these programs. We find both weak and strong correlations between 2006-2008 PGR ratings and placement rates for 2012-2016 graduates. In particular, the 2006-2008 PGR ratings correlate strongly with placement into programs rated by the 2011 PGR, but only weakly with placement into permanent positions overall.1 This post will discuss both the strengths and the limitations of the PGR rankings as a guide to placement.

The PGR has for many years collected ratings of graduate programs from a select group of evaluators. These evaluators are asked to “evaluate the following programs in terms of faculty quality” on a scale ranging from 0, “Inadequate for a PhD program,” to 5, “Distinguished” (see the complete instructions here). The mean and median ratings are reported for each program, and programs are ranked by mean rating (seemingly rounded to tenths, with equal rank for equal rounded values). In the 2006-2008 report the worldwide top 10 programs are ranked as follows:

Rank  School                                  Mean
1     New York University                     4.8
2     Oxford University                       4.7
2     Rutgers University, New Brunswick       4.7
4     Princeton University                    4.4
4     University of Michigan, Ann Arbor       4.4
6     University of Pittsburgh                4.3
7     Stanford University                     4.1
8     Harvard University                      4.0
8     Massachusetts Institute of Technology   4.0
8     University of California, Los Angeles   4.0

Note that the ranking is in fact according to university or “school,” rather than program. University of Pittsburgh, for example, has two philosophy PhD programs, but these are merged for the purpose of the PGR. Thus, when we compare PGR and APDA we use the same university rating for each philosophy program at that university.

Someone who graduated between 2012 and 2016 is likely to have used this report to choose a graduate program in philosophy. They would have read the following first few sentences under “What the Rankings Mean” (also present in later reports):

The rankings are primarily measures of faculty quality and reputation. Faculty quality and reputation correlates quite well with job placement, but students are well-advised to make inquiries with individual departments for complete information on this score. (Keep in mind, of course, that recent job placement tells you more about past faculty quality, not current.)

As the first systematic review of placement rates, APDA is now in a position to evaluate these claims for the benefit of future graduate students in philosophy.

In its 2017 report, APDA included 135 graduate programs in philosophy, 92 of which were included in the PGR. (Seven programs rated by the PGR were not included in APDA’s report, due to insufficient publicly-available placement information.) The 2006-2008 PGR says the following about non-rated programs on its main page:

All programs with a mean score of 2.2 or higher are ranked, since based on this and past year results, we have reason to think that no program not included in the survey would have ranked ahead of these programs. Other programs evaluated this year are listed unranked afterwards; there may well have been programs not surveyed this year that would have fared as well.

The PGR thus indicates that non-rated programs would have a lower rating than the ranked programs, whose mean ratings start at 2.2, but that non-rated programs could have ratings as high as those of the unranked-but-listed programs, which range from 1.6 to 2.1. For this reason, we assigned all programs included in APDA but not rated by the PGR a mean rating of 1 (“Marginal”), which is midway between 0 and 2.

Using the APDA database and the PGR rankings for 2006-2008 and 2011, we generated the following information for each program, using the most recent placement for each graduate (a sketch of how such rates can be computed follows the list):

  1. the percentage of 2012-2016 graduates from that program placed into permanent academic positions (henceforth, % permanent), where “permanent” is defined as tenure-track or equivalent (e.g. a permanent lectureship); 1182 out of 3164 graduates overall (37%)
  2. the percentage of 2012-2016 graduates placed into permanent academic positions at one of 195 known PhD-granting programs (henceforth, % PhD); 339 out of 3164 graduates overall (11%)
  3. the percentage of 2012-2016 graduates placed into permanent academic positions at 2011 PGR rated programs (henceforth, % PGR); 223 out of 3164 graduates overall (7%)
  4. the percentage of 2012-2016 graduates placed into permanent academic positions at 2011 top-rated PGR programs (henceforth, % Top PGR), where “top-rated” is defined as having a mean rating greater than 3 (“Good”), the overall average PGR rating; 99 out of 3164 graduates overall (3%)
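The sketch below illustrates how rates like these can be computed from a table of graduate records. It uses Python with pandas; the column names and data are hypothetical, not APDA’s actual schema.

```python
import pandas as pd

# Hypothetical graduate records; column names and values are illustrative,
# not APDA's actual schema. Each placement category below is a subset of
# the one before it.
grads = pd.DataFrame({
    "program": ["A", "A", "B", "B", "B"],
    "permanent": [True, False, True, True, False],     # tenure-track or equivalent
    "phd_program": [True, False, False, True, False],  # permanent, at a PhD-granting program
    "pgr_rated": [False, False, False, True, False],   # permanent, at a 2011 PGR-rated program
    "top_pgr": [False, False, False, True, False],     # permanent, at a top-rated (mean > 3) program
})

# Per-program placement rates: the mean of a boolean column is the
# fraction of that program's graduates falling in the category.
rates = grads.groupby("program").agg(
    pct_permanent=("permanent", "mean"),
    pct_phd=("phd_program", "mean"),
    pct_pgr=("pgr_rated", "mean"),
    pct_top_pgr=("top_pgr", "mean"),
)
print(rates)
```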

The 2011 PGR was used to rate hiring programs so as to match, as closely as possible, the prestige of the hiring program at the time of hiring (note, though, that the 2006-2008 and 2011 PGR ratings are very strongly correlated: .92). The overall number and percentage of graduates drop significantly as we move from permanent academic positions in general to permanent academic positions at top-rated PGR programs: only 3% of all graduates end up in positions of the latter type.

The overall correlations between the 2006-2008 PGR ratings and these values are as follows (a sketch of the computation follows the list):

  1. A weak correlation with % permanent: .31
  2. A strong correlation with % PhD: .67
  3. A strong correlation with % PGR: .66
  4. A moderate correlation with % Top PGR: .57
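The sketch below illustrates the computation: a Pearson correlation between program-level series, with made-up ratings and rates, and with unrated programs imputed at 1 as described above.

```python
import numpy as np

# Made-up program-level data: 2006-2008 PGR mean ratings (with programs
# not rated by the PGR imputed at 1, "Marginal") and permanent placement rates.
pgr_rating = np.array([4.8, 4.7, 3.2, 2.4, 1.0, 1.0])
pct_permanent = np.array([0.55, 0.48, 0.40, 0.35, 0.42, 0.30])

# Pearson correlation coefficient between the two series.
r = np.corrcoef(pgr_rating, pct_permanent)[0, 1]
print(f"r = {r:.2f}")
```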

Thus, the 2006-2008 PGR ratings seem to have the strongest correlations with the narrower placement measures. It seems likely that programs with higher PGR ratings place more students into permanent positions at PhD programs because both measures successfully track prestige. But it also seems likely that the PGR is itself a driver of prestige, such that the publication of these rankings made it more likely that graduates from highly rated programs would find permanent academic positions at PhD programs. In any case, the correlations themselves do not tell us how the PGR and these placement rates are causally related.

We might compare the above to the program ratings from the 2016 and 2017 APDA surveys. These are the mean ratings from past PhD graduates in response to the question: “How likely would you be to recommend the program from which you obtained your PhD to prospective philosophy students?”, from “Definitely would not recommend” (1) to “Definitely would recommend” (5). The correlations between the APDA program ratings and these values are as follows:

  1. A weak correlation with % permanent: .37
  2. A weak correlation with % PhD: .34
  3. A weak correlation with % PGR: .36
  4. A weak correlation with % Top PGR: .33

From this we can see that the program ratings by past graduates have a somewhat stronger correlation with permanent placement rates than the 2006-2008 PGR ratings do, but weaker correlations with the narrower placement rates. Given that the APDA ratings were provided in 2016-2017, and so cannot be treated as predictors of placement, these correlations may instead indicate how much the different types of placement matter to graduates when they rate their graduate programs.

We might likewise compare the PGR correlations to correlations between placement rates themselves, year to year. To do this, we chose two three-year graduation ranges: 2006-2008 and 2012-2014. We chose the first range to match the 2006-2008 PGR, while noting that APDA’s data are very incomplete for this period and so would not normally be reported (APDA has around half as many graduates for these years as for 2011 and later, and the sample is biased toward those in permanent academic positions). Since earlier graduates have had much more time to find permanent academic employment, which would limit our ability to distinguish between programs, we excluded permanent placements that occurred more than three years after graduation. (For this reason, we looked here at first permanent placement rather than most recent placement, as above.) For the second range we therefore chose the most recent three-year period for which at least three years have passed: 2012-2014. The correlation between the 2006-2008 and 2012-2014 placement rates is weak: .27. Yet it is somewhat stronger than that between the 2006-2008 PGR ratings and the 2012-2014 placement rates: .21.

Given the additional noise in the 2006-2008 dataset, due to its being very incomplete, we suspect that the actual correlation between past and present placement rates is higher than what was found above. We decided to look at correlations for two recent three-year periods, 2011-2013 and 2014-2016, again allowing three years for permanent placement. In this case, the correlations were stronger, but again favored past placement rates, with a moderate correlation between 2011-2013 and 2014-2016 placement rates, .41. This can be compared to a weak correlation between 2006-2008 PGR and 2014-2016 placement rates, .24.

Finally, we compared permanent placement rates with earlier values derived by Carolyn Dicey Jennings in a NewAPPS post, prior to the start of the APDA project. In that post, Jennings generated placement rates using estimates of the number of graduates from each program, as her data were not yet complete. The correlation between these permanent placement rates, which covered graduates between 2012 and 2014, and APDA’s permanent placement rates for graduates between 2014 and 2016 is weak, yet higher than that of the PGR: .37 vs. .24. (The correlation between these NewAPPS placement rates and APDA’s placement rates for the more overlapping timeframe of 2011-2013 is strong, as expected: .60.)

Given the above, past permanent placement rates appear so far to be the best predictor of future permanent placement rates. That is, PGR ratings do not correlate with permanent placement rates as well as past placement information does. Whereas the correlations between the 2006-2008 PGR ratings and placement rates for different time ranges were about the same, with a slight decrease for the range closer to the PGR years (.21 for 2012-2014 vs. .24 for 2014-2016), the correlations between placement rates for different time ranges increased for closer ranges. This could be due to noise in the early ranges, artificially lowering the correlation between those and later ranges, but it could also be due to changes in placement rates over time. Such changes would limit the utility of past placement rates for predicting future placement rates. Yet even the earlier, less complete data correlate at least as well with recent placement as the PGR does (.27 vs. .21). The PGR ratings do have moderate to strong correlations with the narrower categories of permanent placement into PhD-granting programs, PGR-rated programs, and top-rated programs. But note that the proportion of total graduates who find such placements is fairly small (11%, 7%, and 3%, respectively).

To go a step beyond correlation, and to assess how well the PGR lines up with different models of placement preference, we constructed three separate sorted lists that we compared with the 2006-2008 PGR ranking. Each sorted list makes use of the placement rates listed above (% permanent, % PhD, % PGR, and % Top PGR) as well as an assumed order of preference; a sketch of the scoring computation appears after the list. We borrowed the preference-rank translations listed here. Specifically, we constructed three models:

  • The Academic Model: a prospective student strongly prefers permanent academic placement, but placement into each of the narrower categories is seen as a further bonus. For this model, % permanent is multiplied by 75%, % PhD by 17%, % PGR by 6%, and % Top PGR by 2%. (Since each of these categories is a subset of the previous one, the outcome is that each is treated as a further bonus of decreasing importance.) The programs are then sorted according to the highest sum of these values. See the sorted list here.
  • The Research Model: a prospective student strongly prefers placement in a PhD-granting program, but placement into PGR-rated PhD programs would be seen as a bonus, and placement into a top PGR-rated program as a further bonus. For this model, % PhD is multiplied by 75%, % PGR by 17%, % Top PGR by 6%, and % Other Permanent (% permanent minus % PhD) by 2%. The programs are then sorted according to the highest sum of these values. See the sorted list here.
  • The Prestige Model: a prospective student strongly prefers placement in a top PGR-rated program, followed by placement in any PGR-rated program, placement in a PhD-granting program, and then any other permanent placement. For this model, % Top PGR is multiplied by 75%, % Other PGR (% PGR minus % Top PGR) by 17%, % Other PhD (% PhD minus % PGR) by 6%, and % Other Permanent (% permanent minus % PhD) by 2%. The programs are then sorted according to the highest sum of these values. See the sorted list here.
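As mentioned above, here is a sketch of the scoring computation, using the Prestige Model weights as an example; the placement rates are made up.

```python
# Made-up placement rates per program, given as
# (% permanent, % PhD, % PGR, % Top PGR); each is a subset of the previous.
programs = {
    "A": (0.55, 0.30, 0.20, 0.10),
    "B": (0.60, 0.15, 0.05, 0.00),
    "C": (0.35, 0.25, 0.20, 0.15),
}

def prestige_score(perm, phd, pgr, top):
    """Prestige Model: weight the mutually exclusive placement bands."""
    return (0.75 * top              # top PGR-rated program
            + 0.17 * (pgr - top)    # other PGR-rated program
            + 0.06 * (phd - pgr)    # other PhD-granting program
            + 0.02 * (perm - phd))  # any other permanent placement

# Sort programs by expected utility, highest first.
for name, rates in sorted(programs.items(), key=lambda kv: -prestige_score(*kv[1])):
    print(f"{name}: {prestige_score(*rates):.3f}")
```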

The correlations between the PGR and these models are moderate to strong: .64 between the 2006-2008 PGR ratings and expected utilities in the Prestige Model, .68 between the 2006-2008 PGR ratings and expected utilities in the Research Model, and .40 between the 2006-2008 PGR ratings and expected utilities in the Academic Model. Yet, many programs that do well in these models were left out of the 2006-2008 PGR:

  • Twenty-three programs in the top 92 on the Academic Model were left out of the 2006-2008 PGR, listed below with their ranks in parentheses (recall that the 2006-2008 PGR included 99 programs, 92 of which were considered here): University of Cincinnati (6), Baylor University (7), University of Oregon (12), University of Tennessee (15), Pennsylvania State University (21), Villanova University (26), DePaul University (28), Catholic University of America (33), Vanderbilt University (36), University of New Mexico (41), University of Nebraska (45), Fordham University (48), Stony Brook University (54), Duquesne University (60), University at Binghamton (63), University of Georgia (67), University of Oklahoma (75), University of Kansas (76), Tulane University (80), Wayne State University (81), Bowling Green State University (84), Marquette University (87), and University at Buffalo (91).
  • Twenty programs in the top 92 on the Research Model were left out of the 2006-2008 PGR: Pennsylvania State University (31), University at Binghamton (36), University of Nebraska (45), Tulane University (51), University of Oregon (55), Institut Jean Nicod (57), Bowling Green State University (58), Fordham University (61), Baylor University (62), Villanova University (63), University of Kentucky (69), Katholieke Universiteit Leuven (71), DePaul University (76), New School for Social Research (78), Catholic University of America (81), Duquesne University (83), University at Buffalo (84), University of Utah (86), Stony Brook University (88), and Boston College (91).
  • Twenty programs in the top 92 on the Prestige Model were left out of the 2006-2008 PGR: Duquesne University (38), University of Nebraska (44), University of Oregon (45), Baylor University (46), University at Binghamton (47), Pennsylvania State University (55), University of Cincinnati (61), Villanova University (65), DePaul University (70), University of Tennessee (71), Catholic University of America (72), Fordham University (74), Vanderbilt University (78), New School for Social Research (80), University of New Mexico (81), Tulane University (82), Stony Brook University (84), Bowling Green State University (87), Boston College (89), and University of Georgia (92).

We note that many of these programs are “pluralist”—that is, they include continental approaches to philosophy. The PGR has been criticized in the past for failing to adequately represent these areas of philosophy, most famously by Richard Heck:

Partly as a result of the factors just mentioned, the overall rankings in the Report are biased towards certain areas of philosophy at the expense of others. The most famous such bias is that against continental philosophy. I don’t much care for that style of philosophy myself, but it isn’t transparently obvious why Leiter’s oft-expressed and very intense distaste for much of what goes on in certain “continental” departments should be permitted to surface so strongly in the rankings.

While we did not perform a systematic review of these programs, we did look at the correlation between PGR ratings and mention of keywords by past graduates describing those programs in the 2016-2017 APDA surveys. We found that both the 2006-2008 PGR and the 2011 PGR had a moderate positive correlation with the keyword “analytic” (.55 and .58, respectively), but a weak to very weak negative correlation with the keyword “continental” (-.20 and -.14, respectively). It is possible, given the above, that a bias against certain types of philosophy kept the PGR from having a stronger correlation with permanent placement. Further, it is possible that those programs left out of the PGR would perform even better in a model that did not use the PGR as a metric of prestige. We leave further exploration of these possibilities to another post. For now, we simply note this as a further potential limitation for students who wish to use the PGR as a predictor of placement.


1. Correlation coefficient ranges are here described as follows: .00-.19 “very weak”, .20-.39 “weak”, .40-.59 “moderate”, .60-.79 “strong”, .80-1.0 “very strong”. See Evans, J. D. (1996). Straightforward statistics for the behavioral sciences. Pacific Grove, CA: Brooks/Cole Publishing.
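In code, this labeling scheme amounts to the following (a minimal sketch):

```python
def evans_label(r):
    """Verbal label for a correlation coefficient, per Evans (1996)."""
    a = abs(r)
    if a < 0.20:
        return "very weak"
    if a < 0.40:
        return "weak"
    if a < 0.60:
        return "moderate"
    if a < 0.80:
        return "strong"
    return "very strong"

print(evans_label(0.67))  # "strong"
```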

Art: Josef Albers, “Formulation: Articulation”

Comments
Tim O'Keefe
6 years ago

This is very interesting information; thanks.

What I’d like to see the placement information paired with is graduation data. (Not that I’m expecting CDJ to do that work too!) If program A gets 60% of its graduates permanent employment, and program B 40%, then program A initially seems to be doing a lot better. But the picture becomes a lot muddier if only 30% of the people entering program A end up receiving their degree, while 70% of those entering program B do so.

Craig
Reply to  Tim O'Keefe
6 years ago

I’m not a stats geek, but I found Brian Leiter’s response, http://leiterreports.typepad.com/blog/2017/10/placement-in-phd-granting-program.html, to be at least plausible. Have I overlooked a response to the NYU-v-Virginia argument somewhere? If Leiter is on to something, then some of the claims about high-PGR rankings and permanent placement seem like they should at least come with asterisks…

Craig
Reply to  Craig
6 years ago

(Sorry, that was intended to be a reply to the original post.)

Carolyn Dicey Jennings
Reply to  Craig
6 years ago

Hi Craig,
Here is my response to David Chalmers from the previous thread:
“Thanks, Dave!

While it helps to make your point, I don’t prefer the language of “success” and “failure” in this context. I think we have to be careful not to think of and talk about graduates that way, especially in such a tough market, and so we try not to use that kind of language in our reports. We are simply trying to best capture differences that prospective graduate students (and others) care about. These decisions are hard, but here are some reasons to support the current division:
–graduates for the most part prefer a permanent academic position over a postdoctoral position, especially in the long run (see our section on placement fit);
–treating postdocs as different from VAPs would prioritize research positions, since it is more standard for those on a teaching-focused track to go to VAPs and then to teaching-focused jobs (and some positions are called postdocs despite being essentially VAPs);
–when we look at a fairly long time range with a buffer (in this case, 5 years with a one year buffer–we left out 2017), we are looking at graduates who have had a chance to get both a postdoctoral and a permanent position–on average, 4 years (assuming that graduates go on the market for the first time in the year of graduation, 2012 graduates have had 2012, 2013, 2014, 2015, 2016, and 2017 to achieve a permanent position–6 years; 2016 graduates have had 2016 and 2017–2 years); and (least importantly, perhaps)
–I am not as confident in the accuracy of our postdoc data, since we focused in our checks on being sure that the difference between permanent and temporary was accurate, but not on finer-grained differences within those categories.

What I would like to do in the future is to calculate the odds of achieving a permanent job given different types of temporary jobs. Then we would know the extent to which those in postdocs are more likely than VAPs, say, to get a permanent job in the next year. At the moment, I don’t have those numbers. But it would be interesting to know. One thing I have noticed with this project is how surprised people are by the data–so surprised that they sometimes reject the project out of hand, treating APDA like InfoWars (ok, we may not be the NYTimes, but I am no Alex Jones). So it may not be true that postdocs are more likely to get permanent jobs, even if that seems reasonable. I would want to find out based on the data we have.”

Carolyn Dicey Jennings
Reply to  Carolyn Dicey Jennings
6 years ago

As background for those who are not aware of this, it might be worthwhile checking out the many different publications on the so-called postdoctoral crisis (discussed mainly with reference to the sciences, since the humanities typically do not have postdocs):
http://www.sciencemag.org/careers/2017/01/price-doing-postdoc
https://www.theguardian.com/science/head-quarters/2017/aug/10/the-human-cost-of-the-pressures-of-postdoctoral-research
http://www.npr.org/sections/health-shots/2014/09/16/343539024/too-few-university-jobs-for-americas-young-scientists

David Wallace
Reply to  Carolyn Dicey Jennings
6 years ago

But postdocs play a radically different role in the ecology of the humanities. The median science postdoc is a research resource on a more senior person’s lab budget, with quite little independence in their research. The median “postdoc” in philosophy in the UK and US is closer to the junior-research-fellowship model: they have almost complete research freedom and their institution is supporting them as an end in itself. So I would be very surprised indeed if lessons from the sciences were widely applicable here.

Carolyn Dicey Jennings
Reply to  Carolyn Dicey Jennings
6 years ago

While there are certainly some differences, these aspects would likely be the same (which is why I chose these articles):

from the NPR article: “For the overwhelming majority of Ph.D. holders who do not become tenured professors, spending time as a postdoc comes at a hefty price. Compared with peers who started working outside academia immediately after earning their degrees, ex-postdocs make lower wages well into their careers, according to a study published today in Nature Biotechnology. On average, they give up about one-fifth of their earning potential in the first 15 years after finishing their doctorates—which, for those who end up in industry, amounts to $239,970.”

from the Guardian article–“The postdoctoral period is one of the most difficult in the academic career ladder…It can end up being an extremely isolating experience, especially if it requires a move to a different city or country. Over the course of five years, Dolan held positions in Cambridge, Dublin, Southampton, Amsterdam and Crete, most of which meant living away from his partner…“There’s this backdrop of short-term contracts, which can often be poorly paid in absolute terms,” he says. “Then this constant moving is a key thing, particularly for those suffering from mental health issues, because basically every couple of years your entire support network can disappear overnight.””

from the NPR article–“”By definition, a postdoc is temporary, mentored training where you are supposed to acquire professional [experience] in order to pursue a career of your own choosing,” Hubbard-Lucey says, “the key word being temporary.” But increasingly these low-paying temporary jobs can stretch on for years. “Many people go on to do many postdocs,” she says.”

Carolyn Dicey Jennings
Reply to  Carolyn Dicey Jennings
6 years ago

(the first quote is actually from the Science article)

David Wallace
Reply to  Carolyn Dicey Jennings
6 years ago

In each case, these are features of *science* postdocs which I am disputing hold for (US/UK) philosophy postdocs. Specifically:

1) the overwhelming majority of postdoc holders do not go on to TT jobs
2) the norm is to hold serial one-year postdocs in different places.

I would be quite surprised if (1) and (2) (especially (1)) hold in philosophy.

Craig
Reply to  Carolyn Dicey Jennings
6 years ago

Thanks! I had missed that in the prior discussion!

Carolyn Dicey Jennings
Reply to  Tim O'Keefe
6 years ago

This is a good suggestion, and we have been considering how to do this for about the past year. We might be able to add it if we get future funding.

Norm
6 years ago

UC Santa Cruz got a 3.4 mean PGR rating in 2006-8?

Carolyn Dicey Jennings
Reply to  Norm
6 years ago

No–thank you. I fixed this.

David Wallace
6 years ago

To précis my long comment on the previous APDA thread: if you define a strong department as one which is good at placing its students in strong departments, and finesse the apparent circularity using the linear-algebra techniques in Google’s PageRank algorithm, you find that there is a really good correlation – about 0.75 – between the strong departments according to the 2012-16 APDA and the high-ranked departments according to the 2008 PGR.
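For the curious, here is a minimal sketch of the kind of computation I mean, with made-up placement counts; it illustrates the recursive idea rather than my actual analysis:

```python
import numpy as np

# Made-up placement counts: placed[i, j] = number of program j's graduates
# hired into permanent jobs at program i.
programs = ["A", "B", "C", "D"]
placed = np.array([
    [0.0, 2.0, 1.0, 0.0],
    [3.0, 0.0, 1.0, 1.0],
    [1.0, 1.0, 0.0, 2.0],
    [0.0, 1.0, 1.0, 0.0],
])

# Share of program j's placements going to each hiring program i.
shares = placed / placed.sum(axis=0, keepdims=True)

# "A strong department is one that places its students in strong departments":
# each program's strength is the share-weighted average of the strengths of
# the programs that hire its graduates. Damped power iteration (as in
# PageRank) turns this circular definition into a convergent recursion.
d, n = 0.85, len(programs)
strength = np.full(n, 1.0 / n)
for _ in range(100):
    strength = (1 - d) / n + d * shares.T @ strength

for name, s in sorted(zip(programs, strength), key=lambda p: -p[1]):
    print(f"{name}: {s:.3f}")
```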

I’m also surprised, especially given that this post considers a number of different strategies and preferences that applicants might have, that there is no mention here of the issue of postdocs that was stressed by David Chalmers, myself, and others on the previous thread.

Tim O'Keefe
Reply to  David Wallace
6 years ago

Regarding post-docs, I agree that that’s a problem (counting them as “failed to secure permanent academic employment” lowers the placement rate of departments like NYU that put lots of their people into post-docs).

At the same time, counting those people as getting permanent employment obviously seems wrong too, because they haven’t. While no solution is perfect, the following proposal seems reasonable to me, assuming that the data on post-docs can be gathered:

Disregard the post-doc people for the sake of calculating the various “secured permanent position” percentages. Let’s say that NYU has the following stats (I’m making these numbers up based on what is above):

26 graduates total
13 secured permanent academic employment (TT, etc.)
8 are in VAP positions, out of academia, etc.
5 are in post-docs

Right now NYU would count as having a 50% placement rate (13/26), whereas under this proposal, it would be 62% (13/21). As long as the documents made it clear what was going on–maybe adding the parenthetical “(excluding post-doctoral positions)” to the column headings–this change would probably make things more accurately reflect a person’s chances of ultimately securing permanent academic employment, which is the main point of having this information collected and disseminated.
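In code, the proposal amounts to the following (a sketch using the made-up numbers above):

```python
def rate_excluding_postdocs(permanent, postdocs, total):
    """Permanent-placement rate with post-docs removed from the denominator."""
    return permanent / (total - postdocs)

# Made-up NYU-style numbers from above: 26 graduates, 13 permanent, 5 in post-docs.
print(f"{13 / 26:.0%}")                             # 50%, counting post-docs as unplaced
print(f"{rate_excluding_postdocs(13, 5, 26):.0%}")  # 62%, disregarding post-docs
```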

David Wallace
Reply to  Tim O'Keefe
6 years ago

This has the virtue of simplicity, but I think it would still give a misleading impression of your chance of getting permanent employment. It would give an accurate prediction only if students from X university who get postdocs are no more or less likely to get a permanent position than the average for students from X university. I don’t see any particular reason to expect that.

The consensus on the other thread was that the ideal would be to look at people’s placement after some fixed length of time (I like 4 years, but you could perfectly well just do 2,4,6 year lists). That data is difficult to get, I understand (it’s easier to find where people are now than when they got there). There is room for disagreement as to the sensible second-best alternative. Mine, fwiw, would just be to base one’s analysis only on students who graduated at least 4 years ago. Of course that pays a price in how up-to-date the results are.

PLF
Reply to  David Wallace
6 years ago

Y’all may have missed these updated tables from 10 days ago, “Philosophy PhD Graduates 2012-2016 per Graduating Program and Placement Type (APDA 10/4/17),” which might address the post-doc and placement into PhD program questions:

http://faculty.ucmerced.edu/cjennings3/phildata/index.html

Here, Prof. Jennings sorts placements as such:

– Known PhD Program
– Other Program
– Post-Doc
– Other Temporary Placement
– Nonacademic
– Unknown

PLF
Reply to  PLF
6 years ago

Sorry! The updates were from 20 days ago (not 10, & around the time the original questions about post-doc and PhD program placement were first raised).

Carolyn Dicey Jennings
Reply to  David Wallace
6 years ago

Worth noting, David, that your calculations were based on our data for all years (not 2012-2016) and data that includes postdocs, treating them as on a par with permanent positions. Further, it does not consider proportions of graduates, only raw numbers.

David Wallace
Reply to  Carolyn Dicey Jennings
6 years ago

I’ll have a look at the proportionality point.

The other points are quite right, but that’s the only data I have access to, because that’s the nearest to raw data that’s publicly available. As I’ve said before, I’d love to get access to the raw data; until then, I’m in the awkward situation of disagreeing strongly with your statistical methodology in a number of places (while being impressed at the value of the underlying dataset you’ve collected) but having only imperfect resources to work out an alternative.

Carolyn Dicey Jennings
Reply to  David Wallace
6 years ago

Raw data are available in a number of places, including the placementdata.com website. See, for example, the downloadable data at the first three links at philosophydata.org, the first two of which, as PLF helpfully pointed out, include the postdoctoral information you wanted (which is why I posted them).

David Wallace
Reply to  David Wallace
6 years ago

I’m either looking in the wrong place or we’re at cross purposes.

The links I see at philosophydata.org aren’t to raw data; they’re to various summaries of the data, notably the fraction of students placed by each category for each university (which is interesting but doesn’t address the time-to-placement issue). Conversely there is raw data at placementdata.com, but not really in a usable form, short of typing each entry in manually. As I recall, you told me that you’re intending to make the raw data available in a usable form in due course but pressures of time mean it may take a little while.

Carolyn Dicey Jennings
Reply to  David Wallace
6 years ago

I see what you are asking for. What I intended to say last time, but perhaps wasn’t clear, is that we won’t ever release the truly raw data, since it is stored in many separate tables and also contains private information. Every CSV file requires that we make decisions about how to combine those tables. We have been releasing these spreadsheets and CSV files for different data pulls on different dates from the very outset of the project. We will continue to do that. We will likely release broader-use CSV files when we finally complete the papers we have been working on for publication, and that is what I was referring to when I said that we have not yet had enough time. If you and others then want to do your own analyses, you can feel free to do so.

David Wallace
Reply to  Carolyn Dicey Jennings
6 years ago

Got it.

Just to be clear, then, the chunk of data that I would really like to get, and that I think would be helpful for third-party analysis more generally, is just the data on the homepage at placementdata.com – but in a CSV format or similar. Since that’s already publicly available I assume there aren’t data-protection issues with making it available in a different format.

Another David
Reply to  David Wallace
6 years ago

Hi David Wallace, you’ve made several comments regarding PageRank (some to the effect that Google faced a “similar problem” in the 1990s, some to the effect that PageRank is evidence the PGR tracks what it claims for the times you mention). Let me know if that’s fair, since I may be unfairly combining different comments of yours into a strawman.

If that’s fair, I think the problem with bringing Google and PageRank into this is that Google was aiming to be the resource itself; it wasn’t aiming to provide separate evidence for a few websites ranking everyone else. Imagine if CNN and its cronies made a search engine, and Google then said “we’ll work hard on these algorithms so we can have evidence these sites are actually tracking what they say for 2008.” In that bizarre hypothetical, I think Google would just become the search engine itself, without the obvious problems of CNN and cronies ranking everyone.

My question for you is: even if PageRank plus whatever statistical methods you favor (not the ones you “disagree strongly” with) proved the PGR tracked what it said every single year…even in that extreme case where every single year the PGR was justified by mathematical methods in this way (!)…then why not say those methods are better than the PGR and should replace the PGR? I hope it makes sense what I’m typing.

David Wallace
Reply to  Another David
6 years ago

The PGR answer, which sounds broadly sensible to me, is that placement data is backward-looking. If you’re choosing your university in 2006, you don’t have access to the 2012-16 placement data. (It is an interesting empirical question – the OP touches on it – whether some placement-based ranking based on the 2004-8 placement data would predict a given placement-based ranking in 2012-2016 better than the 2008 PGR.)

As for Google, I’m really just using the anecdote about their task in the late 1990s to give people some informal understanding of how the PageRank algorithm works. The algorithm itself is just a linear-algebra-based way to turn “a strong department is one that places candidates at strong departments” from unhelpfully circular to helpfully recursive.

As it happens, though, I understand (from informal conversations) that indeed, the very success of Google meant that sites increasingly stopped linking directly to one another, so that PageRank isn’t anything like as useful in Google’s search algorithms now as it used to be.

David Wallace
6 years ago

What’s the rationale for the 75%/17%/6%/2% weightings?

David Wallace
Reply to  David Wallace
6 years ago

OK, got it: preference-rank translation.

Alex Levine
6 years ago

The APDA study is wonderful in many ways, but with respect to the issue at hand, the placement of recent graduates into Ph.D.-granting programs, its data are clearly missing a number of verifiable placements from several programs, including three from my own. We post full, up-to-date listings of graduates with current placement status: http://philosophy.usf.edu/graduate/placement.aspx . So do many other programs. Given the breadth of generalizations being made on the basis of what, even for the largest programs in the world, are after all very small numbers, a few missing data points can make a great difference.

Carolyn Dicey Jennings
Reply to  Alex Levine
6 years ago

Thank you for letting us know, Alex. You can update the data on our site anytime using your program’s specialized link. Since not all programs update their data with us, we also check all placement pages and other sources for the 135 covered programs. We did check your placement page in April and May, according to our records, but errors are of course possible even with multiple checks. And your page may have been updated since May. I will look into this soon.

Carolyn Dicey Jennings
Reply to  Carolyn Dicey Jennings
6 years ago

Hi Alex–I just checked the page myself and I think we already have all of the placements that are on the USF placement site. Can you please send me an email with what you think we are missing? [email protected] (Or if you added the missing placements in between then and now, just disregard this message.)