Questions & Suggestions for the New PGR Editors


Work for the next edition of the Philosophical Gourmet Report (PGR), a reputational ranking of doctoral programs in philosophy, is underway, with recent requests for updates to faculty lists. Since this edition of the PGR will be the first headed by its new editorial team—Berit Brogaard (Miami) and Christopher Pynes (Western Illinois)—it is a good time to seek information about the new editors’ plans and to share constructive advice about the report.

Past discussions of the PGR were often influenced by people’s opinions about the behavior of its creator and longtime editor, Brian Leiter (Chicago). Now that he is no longer an editor of the PGR, I’m hoping that we can have a productive conversation that focuses on the PGR (what it does, how it does it, who does what, its effects, how it fits with other information about philosophy programs, etc.) and not on Leiter. Such focus would make for a more useful discussion and would also save me a lot of time having to moderate comments.

Some of the past discussions are nonetheless worth drawing attention to, for instance, on the PGR’s “technical problems”, the specialty rankings, epistemological hurdles, and some of the effects of the report.

If you have suggestions for the new editors of the PGR, or questions for them, please share them in the comments. I’d be grateful if people refrained from unhelpful comments that merely say “shut down the PGR” (and I’d also be grateful if people refrained from commenting on this request).

We should appreciate the extra time that Professors Brogaard and Pynes are spending, on top of their regular academic responsibilities, to work on the PGR. I’m not sure they will have the time to participate in this discussion thread but they will be able to read your comments and questions. Thanks.

 

Note: In case you are unfamiliar with it, here is the Daily Nous comments policy.

Tim Collins
6 years ago

I have no suggestions for how to change the PGR. However, my advice to Brogaard and Pynes as editors is to ignore outside suggestions about how to change the PGR made in internet fora like the DN and listen only to the PGR advisory board.

The best way to gauge public opinion about PGR-related issues is for someone, say Justin, to set up a poll for DN readers to vote on various proposals. That way Brogaard and Pynes, if they care about public opinion, can get it, and not simply the opinions of the loudest individuals.

Christopher Gauker
6 years ago

I am sorry to have to say this again. The reviewers doing the overall rankings will usually have inadequate first-hand acquaintance with the work of the people they are evaluating. Please look through a few of the lists of faculty at the departments to be evaluated and ask yourself, “Which of these people have I read something by or heard a talk by?” The usual answer to this objection is that a lot of evaluations by people with only a little knowledge can add up to a reliable measure. The answer to that is that the reviewers share biases toward departments that have traditionally been strong and toward universities that are strong overall, and the PGR methodology does nothing to filter out those shared biases.

Merely Possible Philosopher
Reply to  Christopher Gauker
6 years ago

As a “reputational ranking,” aren’t these defects sort of baked in? If a large number of reviewers do not know the members of a department, then the department does not have much of a reputation among the reviewers. Of course, there might be issues with how reviewers are selected, and the selection might produce biases that result in a failure to produce a representative reputational ranking. But it seems that your suggestion, as valuable as it may be, is simply incompatible with the PGR qua reputational ranking. It might be a good criticism of reliance upon reputational rankings, but it doesn’t seem like a suggestion that could improve a reputational ranking. After all, if they follow your advice and then note that they have no familiarity with the members of a department, that won’t help to improve that department’s reputation with that person. Perhaps they will refrain from ranking the department, but you’ll still have a ranking that places the “traditionally strong” departments near the top.

Interdisciplinary Grad Student
Reply to  Christopher Gauker
6 years ago

Without intending to be contrarian, isn’t bias a critical factor of what is being measured here? As a grad student, I am very interested in a measurement of bias because when I sign up for a department, I am signing up to be associated with that bias for my whole career. Maybe in an ideal world, bias would be irrelevant to career prospects, and I would want an unbiased evaluation of the department in question, but in our world bias seems to me to be highly relevant, stable over time, and worthy of useful measurement.

Brandon
6 years ago

My suggestion is simply that the specialty area rankings should be the priority and focus of the PGR, as the “overall” rankings, as interesting as many might find them, aren’t really that helpful IMO. Judgements are made based on far more incomplete and biased information in that case — not out of malice, but out of gaps in knowledge, since not everyone rating departments will be familiar enough with faculty at the rated departments — and thus can give the appearance that some departments might not be a great choice for students, which could be false esp. if the department has a strength in a particular area.

Of course, this is unlikely to happen — I highly doubt the PGR would abandon its overall rankings. That said, at the very least I would like to see an enhanced focus on, and publicity of, the rankings according to specialty.

Brandon
Reply to  Brandon
6 years ago

Oh, I also wanted to say that, despite justified criticisms, I do think the PGR can be an important source of information for prospective grad students. (Notice I said *an* important source; i.e., it should not be the be-all-end-all.) So I applaud Profs Brogaard and Pynes for their efforts and look forward to the report’s release.

nicholesuomi
Reply to  Brandon
6 years ago

I’m a bit curious what the purpose of the overall rankings is. Unless systematic philosophy is coming back into style, students are probably going to have some sort of area(s) of focus. Does a high overall somehow affect the standing of the specialties? (I.e. is a high overall rank indicative of something relevant to the specialties? If a dept is in the top five overall, does one assume they’re at least competent in most of the specialties, even if they don’t place well in them?)

I’ve read justifications on the basis of predicting placement a few years out, but afaik placement tracks with faculty a bit better than with department. (Now, perhaps there are clusters, so to speak, of schools that tend to hire each other’s graduates. The placement data on DN a few days ago (a week, maybe) showed permanent placement rankings not at all matching the PGR rankings. If you adjust some factors, it looks closer to the PGR, but those factors usually amount to adding points for being hired by a highly PGR-ranked program, which supports the clusters idea.)

A possibly interesting offshoot, though unlikely for the current iteration, obviously, would be a breakdown by schools or methods. This may also more accurately capture the clusters.

David Wallace
Reply to  nicholesuomi
6 years ago

If you define departmental strength as a measure of how good a department is at placing students at strong departments (and use a little linear algebra to resolve the apparent circularity in that definition) then the APDA placement data *seems* to show a very strong correlation (+0.75) between the 2008 PGR and the departmental strength given by the 2012-2016 placement data. (I say *seems* because I don’t have the full raw data set and the dataset I’m using doesn’t distinguish postdocs from TT hires.)

I don’t think the placement data shows any evidence of hiring clusters: the graph of all hiring links is just one big blob, with the stronger departments (by the above measure) closer to the middle. (I think the claim in the report that the data shows placement networks is a mistake in the interpretation of Google’s network-drawing program, but I haven’t had a chance to write that up properly.)

nicholesuomi
Reply to  David Wallace
6 years ago

I’d be curious to see how the math works out. Presumably if strength is determined by placement into other strong programs, then since the strongest are just those getting the most into others getting the most into…, the strength list should line up more closely to the overall placement rankings.

Of course, whether to use raw numbers or percentages is a decision, and how to deal with schools that don’t have grad students at all (since plenty of people get placed there) is another. (Even then you still have weighting problems. At a certain point you have to decide, say, is 80% placement but to almost all undergrad-only schools and community colleges better or worse than 20% placement into well-funded R1’s?)

David Wallace
Reply to  nicholesuomi
6 years ago

My definition of “strong” uses Google’s PageRank algorithm; I wrote a lengthy comment on it in the previous thread so I won’t repeat it here.
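
[For readers curious how such a measure works, here is a minimal sketch in Python of a PageRank-style strength score over hiring links. The hiring list is made up, and this is not Wallace’s actual code, just an illustration of the technique he names.]

import networkx as nx

# Directed edge (A, B): department A hired a PhD graduate of department B,
# so each hire "votes" for the strength of the degree-granting program.
hires = [
    ("Dept C", "Dept A"), ("Dept B", "Dept A"),
    ("Dept C", "Dept B"), ("Dept A", "Dept B"),
    ("Dept B", "Dept C"),
]
G = nx.DiGraph(hires)

# PageRank resolves the circularity in "strong = places students into
# strong departments": the scores are the stationary distribution of a
# random walk over the hiring links.
strength = nx.pagerank(G, alpha=0.85)
for dept, score in sorted(strength.items(), key=lambda kv: -kv[1]):
    print(f"{dept}: {score:.3f}")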

Another David
Reply to  David Wallace
6 years ago

Hi David, here’s my question for you. If your definition of “strong” resolves the apparent circularity and also helps to show that the PGR tracks what it claims to track, then do you think we should make the algorithms crunching placement data the resource itself, with no PGR?

Let’s go back to your analogy to the web and to Google’s position in the 1990s. Google did use the methods you describe to solve the problem you describe, but here is where the analogy breaks down: Google made those methods into the resource itself, not into a justification for having select websites rank everyone’s importance (or whatever the PGR’s part of the analogy would be).

I hope what I’m saying makes sense: basically, your justification for the PGR tracking what it says it tracks, if you are right, would be a better resource than the PGR itself, and without some of the ancillary costs to the profession mentioned by others. What do you think?

David Wallace
Reply to  nicholesuomi
6 years ago

(Sorry for double-posting)

As for the relation between generalist and speciality ratings, I ran a regression analysis on this two years ago, and the right weightings can explain about 90% of the generalist scores. (I meant to write this up properly but got distracted by a baby!)

nicholesuomi
Reply to  David Wallace
6 years ago

Not surprising. The top many match a few specialties in particular pretty dang well.

David Wallace
Reply to  nicholesuomi
6 years ago

Pulling the numbers off my spreadsheet, I think the coefficients are:

LEMM: 42%
Science: 10%
Value: 20%
Ancient Philosophy: 6%
Kant: 8%
Other: 13%
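
[To make the arithmetic concrete: a toy illustration of the weighted-sum reconstruction Wallace describes, using his coefficients above. The specialty means for the sample department are hypothetical.]

weights = {
    "LEMM": 0.42, "Science": 0.10, "Value": 0.20,
    "Ancient Philosophy": 0.06, "Kant": 0.08, "Other": 0.13,
}

# Hypothetical 0-5 specialty means for a single department.
specialty_means = {
    "LEMM": 4.2, "Science": 3.5, "Value": 4.0,
    "Ancient Philosophy": 3.0, "Kant": 2.8, "Other": 3.6,
}

# Weighted-sum estimate of the department's generalist score.
predicted_overall = sum(w * specialty_means[area] for area, w in weights.items())
print(round(predicted_overall, 2))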

Carolyn D Jennings
Reply to  nicholesuomi
6 years ago

Hi Nichole,
I am the founder of the project you have been mentioning–Academic Placement Data and Analysis. Here is a set of correlations I ran today to answer your questions. I hope to write a blog post on this soon. For the 135 included programs, I looked at the 2006-2008 PGR mean ratings. For any unranked programs, I gave them a rating of 0. This is because those programs left off are considered by the PGR to be lower or much lower in quality than the ranked programs, and that is also how I think readers of the report interpret the fact that they have been left off. The lowest mean scores included in the PGR are in the 1 range. With that in mind, the correlation between PGR mean rating and the percentage of graduates in permanent positions, for all graduates between 2012 and 2016 is .28. In comparison, the correlation between the mean survey ratings of graduates (see APDA’s infogram) and this permanent placement rate is .37.

So our own survey ratings (again: https://infogram.com/philosophy-phd-programs-graduate-ratings-placement-profiles-and-diversity-profiles-1g4qpzlrokwq21y) are better correlated with permanent placement rate than the PGR.

What about other types of placement rates? I looked at both those placed in programs with any PGR rating and those placed in a program with a PGR rating over 2.9 (the mean of the considered programs). In that case, PGR is correlated with placement rate into PGR rated programs at .61, and above average PGR rated programs at .36 (likely lower just because of how many 0% values there are–more than 95% of graduates are not placed in these programs). In comparison, the APDA survey ratings are correlated with the former at .38 and the latter at .19.

So the PGR ratings have a higher correlation with hiring into PGR-rated programs than our own survey ratings, but note that this might be because graduates care about permanent placement much more than placement into PGR-rated programs. This seems rational, given the numbers. Only 57 of almost 1200 graduates from 2012 to 2016 were placed into above average PGR rated programs.

Interestingly, permanent placement rate is pretty well correlated with PGR placement rate, at .42.
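
[The sensitivity to how unranked programs are handled — scored 0, as here, versus dropped, as in the follow-up below — can be made concrete with a small sketch. All numbers are made up; this is not APDA’s code.]

import numpy as np

# Hypothetical data: PGR mean rating (None = unranked by the PGR) and
# permanent placement rate for six programs.
pgr = [4.5, 3.8, 2.9, None, None, None]
placement = [0.55, 0.50, 0.40, 0.60, 0.35, 0.45]

def pgr_placement_corr(unranked_as_zero):
    xs, ys = [], []
    for rating, rate in zip(pgr, placement):
        if rating is None:
            if not unranked_as_zero:
                continue  # drop unranked programs from the comparison
            rating = 0.0  # treat "left off" as lower than every ranked program
        xs.append(rating)
        ys.append(rate)
    return np.corrcoef(xs, ys)[0, 1]

print("unranked scored 0:", round(pgr_placement_corr(True), 2))
print("unranked dropped: ", round(pgr_placement_corr(False), 2))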

Carolyn D Jennings
Reply to  Carolyn D Jennings
6 years ago

In the interest of full disclosure, I want to add that if I treat the programs that are not ranked by the PGR as simply having no known value (rather than, as I think is more accurate, a value lower than that of the rated programs), then the correlation between the PGR ratings and permanent placement is better, and even slightly better than APDA’s survey ratings (.39 versus .37). The problem is that this doesn’t capture the many programs excluded by the PGR that have above-average permanent placement rates. I looked at just those programs left out of the PGR with permanent placement rates higher than the average for PGR-rated programs, and this includes the following 14 programs:

University of Cincinnati
Baylor University
University of Oregon
University of Tennessee
Villanova University
Pennsylvania State University
DePaul University
Catholic University of America
Vanderbilt University
University of New Mexico
University of Nebraska, Lincoln
Fordham University
Stony Brook University
Duquesne University

Carolyn Dicey Jennings
Reply to  Carolyn D Jennings
6 years ago

Update: please ignore the top rated PGR correlations–those numbers should both be higher. (I didn’t select the full range for that correlation.) I will update them in a blog post.

Sam Duncan
6 years ago

Well I’ll refrain from saying that it should be shut down then, but I do think we ought to be very worried about the existence of the Philosophical Gourmet no matter who may be running it. I see a lot of harm it can do and very little good. But even if I’m wrong on the latter we should be worried about the seemingly arbitrary power over others that this gives one group of people. Here we have one small group of philosophers who are not accountable to the other members of the profession in any way and cannot be checked by the rest of us in any official way exercising enormous power over the profession as a whole. Despite its impact on our career prospects and our academic field as a whole we have no say in the rankings and there’s no way most of us can hold the members of the PGR accountable if we are convinced they are misusing their power. So my questions to the editors would be: 1. What if anything are you going to do to give the average member of the philosophical community any say in the PGR report? 2. What if anything are you going to do to make the PGR and its members accountable and to prevent them from abusing their power? Are there any procedures that you are going to put in place or are we going to simply have to trust you to police yourselves? There are huge issues of power here and it’s interesting that academics who are very good at spotting motes everywhere else tend not to see the beam here.

manny
Reply to  Sam Duncan
6 years ago

I think you might have missed the point of the report. It’s not for you. It’s for prospective graduate students. It helps them enormously.

Urstoff
Reply to  manny
6 years ago

Does it? The PGR was around in an early form when I was applying to grad schools, and I didn’t find it particularly helpful.

An undergrad planning on going to grad school will be somewhat familiar with the literature of the specialty in which they are interested; what’s the benefit of the info from the PGR over and above just applying to places with faculty that you find interesting? Sure, the faculty may go elsewhere, but that’s a risk even if you go to a “top” school. PGR doesn’t measure peer quality, how conducive to dialogue the environment is, how stressful or toxic the department is. You only find out about those, if at all, by your invited visit or emailing a few grad students. At best, the PGR provides a list of schools to look into that may not have been an undergraduate’s immediate focus. Academia is enough of a status game as it is; the PGR just seems to reinforce that.

manny
Reply to  Urstoff
6 years ago

” At best, the PGR provides a list of schools to look into that may not have been an undergraduate’s immediate focus.” Err. I took this to be exactly its purpose!

“PGR doesn’t measure peer quality, how conducive to dialogue the environment is, how stressful or toxic the department is. ” Err. Yeh, obviously it doesn’t. Does it claim to?

Simon
Reply to  Urstoff
6 years ago

I applied to PhD programs in philosophy in the fall of 2009, and I will say it until I destroy my voice that the PGR was immensely helpful. Your description of ‘an undergrad planning on going to grad school’ does not describe me at that stage, and I can’t imagine I am unique. If I had applied only to schools with faculty who had published in my area of interest AND whose publications I was familiar with (even only a tiny bit), I would have applied to a short, highly idiosyncratic list of departments, most of which would have landed me, in the current abysmal academic job market, right into near-total unemployability. I only discovered what became my top-ranked department (happily, the department I ended up in) through the PGR (even though, as an aside, the prevailing philosophical drift of the faculty there is not exactly Brian Leiter’s cup of tea!). There are, to be sure, things not to like about my department that the PGR was not an adequate guide to, but that is a reason not to rely *solely* on the PGR, not a reason not to use it at all.

One testimonial does not prove much, but I think it is necessary when people do not merely criticize the PGR for this or that particular defect but seem to suggest that it serves *no* positive purpose or that its only effects are pernicious. I hope that the editors are afforded an opportunity to improve it by our offering them specific, constructive criticisms grounded in a realistic picture of what the PGR can and can’t do.

nicholesuomi
Reply to  Urstoff
6 years ago

I’m not sure who this hypothetical undergrad is, but even the well-read will be generally subject to the idiosyncrasies of their undergraduate institution. Whatever circle(s) are popular among the faculty there will color their perception of the field. Having input from a much larger group of philosophers irons these out.

now an ass't prof
Reply to  Urstoff
6 years ago

I went to a good, philosophically strong SLAC, and I had no clue where to apply to grad school. My advisers were early- and mid-career, research-active faculty–they were in touch with the profession–and they heavily used the PGR to help me decide. I’d be all for a PGR that fixes the (very real) flaws others have pointed out, but if you’re doing a nosecount, add me to the list of people who relied on it (& now recommends it to my own students).

Craig
Reply to  Urstoff
6 years ago

“An undergrad planning on going to grad school will be somewhat familiar with the literature of the specialty in which they are interested.”

This was false for me. (And I’m sure I was not alone.)

Daniel Kaufman
Reply to  Sam Duncan
6 years ago

The PGR is a pretty crucial resource for people interested in graduate school. I have spoken to enough students from disciplines that have no such resource to be very grateful for it, despite whatever criticism I might have.

I also wonder what exactly it is you expect to do. It is a privately owned website, in which a number of professionals give their well-considered views as to the best programs. That is their prerogative, and I don’t see any legitimate way for you to “demand accountability” or some other such thing. They can do as they like. And programs are free to ignore them.

Seems to me that the people you should be upset with are those who are so status obsessed that they pay far too much attention to the PGR and allow it to govern their actions far too much. As many know, I have chaired and served on many search committees, have been a department head, and the PGR never had one iota of influence on whom we hired or how we conducted departmental business. Our only use for it is to advise our students interested in graduate school to check it out.

All the negative effects you observe are the result of nothing more than adults in philosophy departments behaving in a somewhat adolescent fashion. The problem is them, not the PGR which cannot force anyone to use it or be influenced by it.

Sam Duncan
Reply to  Daniel Kaufman
6 years ago

I’m not at all convinced that it is helpful to graduate students. Before I believe that, I’d like to hear from some recent graduates who found it helpful. As it is I hear many secondhand reports of such grad students, but then again the same goes for Bigfoot, Nessie, UFOs, and gators in the NYC sewer system. But let’s suppose that we do grant that many graduate students find it helpful. Your and Manny’s response misses my point. The PGR has a huge influence on what happens in philosophy beyond the choices that graduate students make. And it’s not just philosophers or philosophy departments making these decisions; I know of at least one case where admin intervened very harshly and decisively in department politics out of a concern for their school’s Leiter rankings. And occurrences like that are like cockroaches: if you’ve seen it happen once, it’s safe to bet it’s happened a bunch more times out of sight. If those involved know that the PGR will have these effects, aren’t they at least partially responsible for them? To say otherwise and shift all the blame onto those who make decisions based on the PGR is like saying that a gun manufacturer who claims their wares are intended only for self-defense or sport has no responsibility when someone misuses them.
My opposition to the PGR is based on a more general opposition to rankings. Rankings in general have, I think, an extremely pernicious effect on academia as a whole in so many ways. For one thing, they create a system where schools are so obsessed with their rankings that they’re willing to overlook or even actively cover up misdeeds on the part of star faculty lest they lose that person and suffer a decline in ranking. Beyond that, from all the evidence I’ve seen, the decisions that schools make in chasing rankings are more likely to be detrimental to education, both graduate and undergraduate, than not. In fact, I’d say that the damage that the obsession with rankings has done to higher education is second only to the defunding of higher education. This is much bigger than the PGR, and honestly I think that the US News rankings probably do more damage than it ever has. John Warner has a very good piece on his blog about the harm that chasing rankings did to ordinary students at his institution (https://www.insidehighered.com/blogs/just-visiting/prestige-isnt-going-save-us) and Cathy Davidson’s new book “The New Education” is excellent on the bad effects rankings have on higher education in general.
So as far as the PGR goes, yes, I do think that they should just shut it down, but if they’re not going to do that they need to take some responsibility for the effects their rankings will have. And though you misrepresent what I originally posted when you say that I “demanded accountability,” I suppose I do think that we have a right to ask that of them.

nicholesuomi
Reply to  Sam Duncan
6 years ago

On the note of recent grads finding it helpful, I got a much clearer view when I was applying via the PGR. Nobody in my undergrad really did my main area of interest. Only one really did my secondary area, and it’d be absurd to expect her to have such a broad knowledge of all the relevant departments with no recall cues. (I used the PGR to make a list and she helped narrow it with her knowledge of the departments once named.) Most of the really good departments for my area were completely off my radar.

While I agree stuff like USNWR is almost entirely bad, that’s in large part because their methodology is just awful. The PGR at least is clearly measuring one thing and is upfront about what that thing is. Not a magical formula including three metrics that mostly track student wealth.

I’d be concerned about star power getting worse without any aggregate measure of departments, though. Right now a strong dept can take the hit of losing a famous faculty member because the rest, while not as big a draw, so to speak, hold it up. Without these aggregations, the main way people not entrenched in the field will know a department would be by the names they’ve heard. So having big names would be more important.

Sam Duncan
Reply to  nicholesuomi
6 years ago

I’ll admit that the PGR is a little better than the US News rankings in that it only measures one thing rather than a stew of factors that are given arbitrary weight. But I don’t know that it tracks what graduate students should probably be interested in. If I had to give advice to a graduate student about what questions s/he should ask about a program I’d say: 1. What’s the atmosphere in the department like? 2. What are my odds of getting a job after I finish? 3. Can I study the area of philosophy I’m currently interested in here? And I’d add that I think 3 ought to be a very distant third to the other two, since many people will find that their interests evolve in graduate school. The PGR speaks only to question 3. It doesn’t touch 1, and while the assumption is that what it does measure will track 2 pretty closely, it seems from the APDA survey that that assumption is just false. And to be frank, I’m not sure how much weight I’d give testimony from people currently in grad school about how helpful they find the PGR. As I said, I would like to hear from recent graduates. Personally, I only felt like I was in a position to evaluate my grad school education a few years after I was out and had some idea of what my career track would be. In my case all the PGR ever did was make me feel horribly anxious and dispirited for going to a school that wasn’t so highly ranked. Honestly, if I had gotten into one of the higher ranked schools I applied to as a 22 year old I would almost certainly have gone, which would have been a horrible mistake, since practically every one of them turns out to have much worse placement than UVA does.

Carolyn Dicey Jennings
Reply to  Sam Duncan
6 years ago

The placement and ratings information has been updated (yesterday) here: https://infogram.com/philosophy-phd-programs-graduate-ratings-placement-profiles-and-diversity-profiles-1g4qpzlrokwq21y

I happen to think public comments are useful…we plan to write program profiles with these (e.g. https://infogram.com/phd-program-highlight-baylor-university-1g502y99n190pjd)

See phildata.org or philosophydata.org for these links and others.

nicholesuomi
Reply to  Sam Duncan
6 years ago

I haven’t seen it compiled in a while, but 2 seems like an easy enough fix: collect and publish data on placement. I found such data collected a few years ago, put together in such a way that one could see immediate and five-years-out placement in various positions (TT, adjunct, postdoc, SLAC, R1, etc.). Certainly useful information.

Do you think there’s some way to measure 1? I suppose the satisfaction survey shared on DN last week would be one such measure, but I think that is so person-dependent as to be too slippery to measure short of just talking to people in the departments to get a feel for them. If department A is very competitive and department B is more relaxed, which one is better very much depends on the environment the individual flourishes in. On prospective visits I went to a few departments where the students all seemed very happy with their program, but I could tell I would have rather different feelings.

Having 2 and 3 seems to me to be enough to get a list of places to apply prepared. I imagine after the application process (and into the decision process) the PGR becomes less useful.

I do suppose there’s another benefit in gauging one’s chances. With the PGR in hand I was able to pretty accurately guess which places I would get into and which would be long shots. Given the desirability of getting in in the first year of applications, having some “safe” options is desirable.

Alex
Reply to  Sam Duncan
6 years ago

Well, others have said the same, and I’ll add my voice: I’m a current graduate student, and when I was applying to graduate schools I found the PGR to be a valuable resource.

Sam Duncan
Reply to  Alex
6 years ago

Justin,
Those are all good points. I guess I feel that most of us don’t really weigh the data we have in the right way when we’re making decisions about grad school. I certainly didn’t, but I was lucky enough that it worked out for me. I worry that the PGR and US News rankings give potential students information that isn’t very good grounds in making such decisions, and that both have outsized influences over other sources of information that are likely a lot more reliable. They seem to drown out other sources of information in most people’s decision making. I guess maybe there’s an element of paternalism in that, but like I said I don’t feel I was in a great position to make these decisions as a 22 year old.

Michel X.
6 years ago

One thing I’d like to understand better is why the evaluator pools for some specialties are so small. I understand, of course, that some subfields really are very small, or especially poorly represented at research institutions, and that the number of eligible responses from prospective evaluators might be pretty low as a result.

But some of the subfields with small numbers of evaluators really aren’t that small. Philosophy of Art is the case I know best (7 evaluators for a 600ish-member American association), but I think similar concerns might attach to Action, Applied Ethics, Biology, History of Analytic, Mathematics, and Physics. When the pool of evaluators is small, a single rogue assessment can really skew the results (especially since evaluators can’t rank the programs from which they graduated/with which they are affiliated).

So… is it by design (and if so, why), or is it just the way it all shakes out?

jonathan (not identical to justin) weinberg
Reply to  Michel X.
6 years ago

I totally agree that this is something that they can & should improve on, over the last couple of instances of the Report. It may in fact be one of the most important places that they can improve, especially because those area rankings are so influential.

I wonder if one thing potentially to do here would be to identify the top journals in each such area, and then every single author who publishes in that journal over some time period (say, 5 years?) gets invited to participate in the specialty rankings for that area. (It shouldn’t be _just_ those people, but by including those recently-published authors in each area, it would help produce a broader evaluation bench.)

jonathan (not identical to justin) weinberg
6 years ago

Picking up but perhaps somewhat respinning Gauker’s comment above, I do think that the new management at the Report might want to provide a really substantive section of the webpage where they enumerate various possible biases that might plausibly have arisen, with a discussion of what measures they took to counter them. This doesn’t need to be done in a defensive way at all, I think. And in my view it will be totally fine if for some such biases, the answer is: there’s not much that can be done, and consumers of the report should keep that in mind.

Now, I don’t share Gauker’s pessimism as to the efficacy of averaging here, in terms of ameliorating (but not eliminating) a halo bias. But I am very sympathetic to his point, and think it is one that the new editors can take on board in a constructive manner. It would be worth observing that particular kinds of departments are going to be incorrectly underrated here, namely, those whose strengths are not as widely & generally visible as those of other departments. I happen to think that Gauker’s own former department at Cincinnati may be just such a department — it’s a department I have tremendous respect for, and I have tended to think that it should come out rather more highly rated than it does, in the last few Reports. And this may be because some of its strengths (especially in very empirically-oriented philosophy of mind & biology) are simply not ones that are as visible to many philosophers as compared to departments whose main areas of strength are things like metaphysics, epistemology, or philosophy of language. This is consonant (I think) with MPP’s point above, to the effect that this sort of error just seems to be a cost of this kind of reputational measure, as a decent but not unbiased indicator. (I know that some folks think it is not even a decent indicator, which I acknowledge can be debated.)

Finally, one must keep in mind that the comparison here is often not some more idealized set of evaluations, but rather the state of play where no systematic survey is done — that is, where there is basically nothing in our practices to help counterbalance halo-type factors whatsoever.

Carolyn Dicey Jennings
6 years ago

I would like to see a larger set of philosophers surveyed. Minimally, it would be great to see all faculty at philosophy PhD programs included. I think the current sampling process has serious issues, as has been discussed at length elsewhere (see https://en.wikipedia.org/wiki/Snowball_sampling).

Prospective Grad Student
6 years ago

As somebody applying to grad school who has pluralistic interests, I really hope the new PGR puts more weight on subfields outside of analytic LEMM + ethical theory. There should be more evaluators for fields like applied ethics, feminist philosophy, non-Western philosophy, and continental philosophy (and more diversity in the kind of continental philosophy that is weighed, i.e. more than just analytic writing on German phenomenology).

Bharath Vallabha
6 years ago

There are four very different issues at play:

1) Does PGR help some/many grad students?
2) Does PGR gratify the egos and status anxieties of evaluators and philosophers at ranked departments?
3) Does PGR help ranked departments promote themselves to administrators?
4) Does PGR entrench intellectual and social biases?

The answer to all four is “Yes”. Yes to (1) is good; more power to the undergrads/grad students PGR helped and helps.

Yes to (2) is what it is. Everyone has egos, and everyone wants them gratified; no judgment there. What is the cost to the discipline of the ego gratification of whether X department is 3 or 6 or 15, or ranked or unranked? But this is a personal question mainly.

Yes to (3) is good – sort of. But the help to the ranked departments is coming at what cost to the overall ecology of philosophy departments in the country and world (where vast majority of departments are unranked, and so unseen through the PGR lens)?

Yes to (4) is clearly bad. In what ways is the PGR intellectually limited, and in what ways does it perpetuate socially unjust and unfair structures?

Leiter as editor, as I remember, addressed (3) and (4) in only the most defensive and dismissive ways. Hopefully, the new editors – as well as the evaluators – will be more open to critical reflection about (3) and (4), and have that critical reflection more openly.

Bharath Vallabha
Reply to  Bharath Vallabha
6 years ago

PGR is a reputational survey. Some people think X departments are the best – the cream of the crop. Others like to use this info to become part of the best. Ok so far.

What is involved in this claim of the “best”: in particular, what are the responsibilities of the best to the profession? What do the best philosophers think about the job problem, not just at the best departments, but at the non-ranked departments? Is caring about the non-ranked departments part of the responsibility of being the best? Or what about the issues of diversity, or the worry that the PGR reinforces certain power structures over others, without explicitly and intellectually defending those structures? If so, why are the best philosophers allowing that?

The PGR culture in the past wanted the public recognition of being the best without publicly taking on much of the responsibility of being the best. If NYU is the best department, then NYU has the greatest responsibility for the well-being of the profession. Same for all the ranked departments. It would be great if the new iteration of the PGR made explicit how the best departments also take steps to meet their responsibilities.

Otherwise, the idea of being the best rings pretty hollow.

Tim O'Keefe
6 years ago

Here is one suggestion.
I think it would be nice for the PGR to build an interface that allows applicants to easily generate “individualized overall research rankings” based on the applicants’ own interests plus the speciality rankings. Most applicants might have one or two areas that are really crucial to them–their main areas of current interest, which they think are most likely to become a possible dissertation area and future AOS. Then there are other areas that aren’t as crucial, but where it would be a real plus if the department was strong in it–areas a person would like to take classes in, but more likely to be an AOC than AOS, although who knows? Finally, of the remaining areas, some might be a default of “hey, I guess all else being equal, it’d be better if a program had a strength in X,” while others are areas an applicant would want to have disregarded in compiling an individualized ranking.
Then take the rounded mean scores in the specialized rankings, weight them depending on the applicants’ categorization of each speciality area, and generate a score for each program.
For instance, imagine an applicant who puts “Philosophy of Action” and “Ethics” as their “primary AOIs.” The rounded means for those two areas would be multiplied by 10 for the overall individualized ranking.
Then this applicant puts “Ancient Philosophy,” “Philosophy of Cognitive Science,” and “Philosophy of Mind” as their “secondary AOIs.” Those rounded means are multiplied by 4.
A fair number of other areas, including most of the “core,” are put by this applicant as the default “strength in this area is a plus,” like Metaphysics, Metaethics, etc. These means are multiplied by 2.
Finally, this applicants puts Philosophy of Biology, American Pragmatism, and a bunch of other areas in the “disregard” pile, and those rounded means are not factored into the ranking.
I don’t think this would be all that difficult to implement, although it goes beyond my own tech skills. All you would need is a menu listing all of the different ranked speciality areas, with a checkbox for each that allows you to choose exactly one of the four categories, and a ‘submit’ button, plus some under-the-hood stuff that draws on a database of the PhD programs and their rounded mean scores in the speciality rankings.
Applicants already can do something like this themselves by eyeballing the specialty rankings, but this, I think, would be a nice supplement to that to get a plausible list as a starting point in little time. (It might also draw a person’s attention to programs they otherwise would not have noticed.) And, of course, applicants could play around with their preferences to see how it would change the list.
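
[A minimal sketch of the scoring scheme just described, using the area names from the example above. The program scores are hypothetical, and this is only an illustration of the proposal, not an actual PGR tool.]

PRIMARY, SECONDARY, PLUS, DISREGARD = 10, 4, 2, 0

# The applicant's categorization of each specialty area.
applicant_weights = {
    "Philosophy of Action": PRIMARY, "Ethics": PRIMARY,
    "Ancient Philosophy": SECONDARY, "Philosophy of Cognitive Science": SECONDARY,
    "Philosophy of Mind": SECONDARY,
    "Metaphysics": PLUS, "Metaethics": PLUS,
    "Philosophy of Biology": DISREGARD, "American Pragmatism": DISREGARD,
}

# Hypothetical rounded mean specialty scores for two programs; areas a
# program is not ranked in contribute nothing to its score.
programs = {
    "Program A": {"Philosophy of Action": 4.0, "Ethics": 4.5, "Metaphysics": 3.5},
    "Program B": {"Ethics": 3.0, "Ancient Philosophy": 4.5, "Philosophy of Mind": 4.0},
}

def individualized_score(specialty_means):
    # Areas the applicant left uncategorized default to the "plus" weight.
    return sum(applicant_weights.get(area, PLUS) * mean
               for area, mean in specialty_means.items())

for name in sorted(programs, key=lambda p: -individualized_score(programs[p])):
    print(f"{name}: {individualized_score(programs[name])}")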

Andrew Sepielli
Reply to  Tim O'Keefe
6 years ago

That’s a nice idea. I’d also endorse a second kind of overall ranking that is an aggregate of the rankings in the various subfields.

Shelley Tremain
6 years ago

Hi Carolyn,
for some reason, I couldn’t reply above. I was wondering why you and your team define diversity in terms of gender and race only. Would you mind explaining your use of this understanding of the term?

Carolyn Dicey Jennings
Reply to  Shelley Tremain
6 years ago

Hi Shelley,
It isn’t so much that we define diversity that way, as that we think the term “diversity” can helpfully capture the different evidence gathered on gender and race/ethnicity. This is an important point, and I agree with you that diversity goes beyond gender and race/ethnicity.

Carolyn Dicey Jennings
Reply to  Carolyn Dicey Jennings
6 years ago

(Also, when this happens to me I just “reply” to a comment further up the chain and it often puts my comment underneath the one I mean to be responding to.)

Shelley Tremain
Reply to  Carolyn Dicey Jennings
6 years ago

Carolyn,

do you think your study, with its use of the term ‘diversity,’ reproduces the idea that analysis of diversity is achieved through analysis of (binary) gender and race only?

N
Reply to  Shelley Tremain
6 years ago

Shelley, you’re misinterpreting Carolyn. She just explained that they did not mean to analyze diversity, but rather that the term “diversity” captured the variables they were looking at. She also acknowledged that, if she had to analyze diversity, she would include other variables. So I think you both agree that other dimensions of diversity could and perhaps should be taken into account.

Carolyn D Jennings
Reply to  Shelley Tremain
6 years ago

I take your point, and will think about this.

Mohan Matthen
Reply to  Carolyn Dicey Jennings
6 years ago

To reply to a comment, hover over the thumbs up sign and a reply button appears.

PDM
6 years ago

I’d like to second the various comments about increasing the size/quality of evaluator pools for the specialty rankings. For instance, the list of evaluators for general philosophy of science is quite strange. It includes several scholars who are not philosophers of science. Roger Ariew is a scholar of modern philosophy, David Christensen is an epistemologist whom I suspect wouldn’t really think of himself as a philosopher of science, and Jonathan Schaffer engages in bizarre non-scientific, a priori armchair metaphysics (grounding, anyone?) and has no real history of research in philosophy of science.

If the PGR is likely to be of value to anybody, it is in its capacity to provide specialty rankings; but a necessary condition on having good specialty rankings is that the evaluators for particular specialities are actually active researchers in that sub-area.

Chris
Reply to  PDM
6 years ago

I’m also good with increasing the size and quality of the specialty rankings – (I wonder if the reason some areas have only a few people evaluating is due to the fact that these people often do double duty in the overall rankings, and BL wanted the number of people from a given area doing the overall rankings to be somewhat reflective of the size of their subdiscipline? I don’t know).

However, I just want to mention that Roger A does HPS and was editor of Perspectives on Science for over a decade, David Christensen lists philosophy of science as well as epistemology as his areas on his research page, and Jonathan Schaffer has multiple essays in Philosophy of Science (including an award-winning one on causation) and in The British Journal for the Philosophy of Science, and writes regularly on causation and laws (and wrote the SEP article on The Metaphysics of Causation; many, if not most, of the works cited in that article would, I think, count as philosophy of science by most philosophers of science).

Of course, it is standard that people will disagree about what counts as the good or best stuff in a discipline – all the more reason to increase the size of the pool of evaluators.

joe
6 years ago

I think several points would help:
1) Enlarge the number of evaluators and, perhaps, do not always have the same people and/or have more people currently on top of the field (the lists do sometimes contain somewhat unusual choices).
2) Let evaluators evaluate ALL grad programs. It might be a rumour, but apparently you do not get to evaluate all grad programs, only a chosen subset, and this has been one way in which the PGR was rigged to begin with. Cut it to the top as usual after evaluations are done.
3) Include more international evaluators – there are jobs outside of the English-speaking world, and that too is a factor to be considered.
4) In subfields, include English-language programs in the non-English-speaking world – say, for Ancient phil, Oslo or Munich.

Jon
6 years ago

Maybe invite them to guest blog here? I’d think you probably have, though. Seems weird that we’re just passively (/aggressively) talking about two people who are probably reading this, but not contributing. Or if they aren’t looking for this sort of unsolicited feedback, it’s less than 100% obvious to me that we should be offering it.

My advice to them would be to not get too distracted by the haters. I think the profession is a bit too consumed by all of this (e.g., this post) and everyone should just chill out a bit and let them have a go at it.

Jonathan Jenkins Ichikawa
6 years ago

I think that the PGR has not always been straightforward and honest about what it is attempting to measure. Whether the department would be a good place to study? Quality of faculty research outputs? The ‘quality’ of the department in some other sense? Department reputation for quality? Faculty reputation for research quality? Something else?

The editors should decide which if any of these things they are attempting to measure, and make their metric super clear to both readers and evaluators. There is a big difference between e.g. how famous and prestigious some department’s faculty are, and whether a prospective student will have a positive or useful experience there.

They should also make sure they have a reliable method to measure what they’re trying to measure. If, for example, they intend to measure research quality of faculty output via surveys, they should probably only consider responses from evaluators who have read some of the relevant faculty member’s work.

US-PGR
6 years ago

The PGR doesn’t publish its list of evaluators or their affiliations.

However, it was created by one editor based at a US university, and it will now have two editors, both based at US universities. In the last edition, moreover, of the 48 members of its advisory board, 41 were based at US universities (2 of whom each have a joint affiliation outside the US: 1 with an Australasian and 1 with a European university). The remaining 7 were from the UK (3), Australasia (1) and Canada (3). If the advisory board does not change, that means that the evaluators this time will be chosen by a group 86% of whom are affiliated with a US university (41 advisory board members + 2 editors, out of 50).

Compare this to the list of academic units evaluated (a list itself chosen by editors and advisory board): 94 in total, with only 59 (63%) based in the US.

Michel X.
Reply to  US-PGR
6 years ago

FWIW, the list of evaluators and their affiliations is at http://www.philosophicalgourmet.com/reportdesc.asp

US-PGR
Reply to  Michel X.
6 years ago

That’s worth very much, Michel X.; thanks! I stand corrected, sorry.

From a quick look, the list of evaluators shows a little less US-centricity. Of 303 evaluators, 209 had a single US affiliation (though there *may* be mistakes, as for instance David Chalmers is listed as being only at ANU). That’s 69% of evaluators. Of the remaining 94, 1 has a double US-UK affiliation, and 37 did their PhDs in the US (though some evaluators don’t have a PhD, so “Phd [sic] school” is a misleading label). Counting those as well, the percentage of PGR evaluators who are either currently affiliated with, or did their PhDs at, a US institution is a still-whopping 82%.

May I suggest that the new PGR editors reduce US-centricity a tiny bit this time?