The Specialty Rankings


Selections from the 2014 Philosophical Gourmet Report (PGR), a controversial ranking of the reputations of PhD-granting philosophy departments, have been appearing at Leiter Reports recently. Parts of the following specialty rankings, along with the names of those philosophers who did them, have been released: philosophy of physics, Kant, metaethics, 19th Century Continental, ancient philosophy, philosophy of mind, ethics, philosophy of action, and political philosophy.

While I have seen some discussion of the specialty rankings in relatively private contexts on social media, regarding both methodology and results, and while Mitchell Aboulafia has been analyzing them at Up@Night, I thought it might be useful to open up a place here at Daily Nous for people to discuss the rankings.

74 Comments
anonymous prof
9 years ago

For the most part, the results are what I would expect them to look like. The only exception, to my mind, is the philosophy of action rankings. Perhaps it is because there are only 8 evaluators, but I find the results very strange. This may be because I think of the boundaries of philosophy of action differently than the evaluators do: I have in mind people like Kieran Setiya, Sarah Paul, Sergio Tenenbaum, Michael Thompson, etc. Given this, the evaluators and the results don’t track the discipline as I understand it.

anon faculty member
9 years ago

I share anonymous prof’s concern. But my sense is that there is a fundamental ambiguity in the professional notion of ‘philosophy of action’: it can track either what anon. prof. has in mind (the work done by Setiya etc.) or what the evaluators seemed to have primarily in mind – namely, the work done by people like Mele, Fischer, Pereboom etc. As far as I can tell, there is little to no overlap between these two ways of construing ‘philosophy of action’.

Randolph Clarke
9 years ago

Action theory includes work by figures in both of the groups described in the first two comments here. What would be the rationale for drawing the boundaries in such a way that either group is excluded?

Simon Gurofsky
9 years ago

Speaking as a curious outsider: can someone draw a quick philosophical distinction between ‘philosophy of action’ qua Setiya, Tenenbaum, etc., and qua Mele, Fischer, etc.? Many thanks!

anon faculty member
9 years ago

Speaking only for myself, I would have thought that the overarching questions addressed by Setiya, Tenenbaum, etc. are: is there any special, non-inferential sense in which we can know what we are doing? what is the proper conclusion of practical reasoning? and so on; whereas people like Fischer are primarily concerned with the issue of whether freedom and/or responsibility are compatible with determinism or indeterminism. Similarly, the first kind of ‘philosophy of action’ typically takes Anscombe as a major point of reference, whereas the Anscombian tradition usually plays little to no role in the kind of contemporary analytic metaphysics that does ‘philosophy of action’ in the second sense.
I may be entirely misperceiving this divide; if so, others will correct me and I’m happy to defer.

anonymous prof
9 years ago

Perhaps the problem is that the category ‘Ethics’ is described as: “(including normative ethics, moral psychology, and practical reasoning, but not applied or metaethics).” So a lot of what I would think of as philosophy of action would fall under this label because of the mention of practical reasoning. But this seems to me a very strange way to divide things up. The people doing work on practical reasoning, at least the ones I know, would align themselves more with philosophy of action than with normative ethics. And it also strikes me that the normative ethics element in the description of ‘ethics’ is what is largely driving the rankings in that specialization. So this leaves programs that are very strong in practical reasoning with no appropriate home. They don’t show up in the action rankings and they don’t really show up in the ethics rankings either. I think this is unfortunate and gives the wrong impression to students who are interested in these issues. To take just one example, if someone were interested in issues surrounding practical reason, I would recommend the University of Pittsburgh as a great place to do it (less so now that Setiya has left, but still very strong). But Pitt doesn’t show up on either list.

anonymous prof
9 years ago

If I were to propose a solution, I would suggest that there be three categories:
(i) normative ethics
(ii) moral psychology and practical reasoning
(iii) philosophy of action

p
9 years ago

I think the divisions are quite artificial to begin with, but it is clear that PGR takes philosophy of action more as the kind of metaphysics that is concerned with freedom of will/responsibility than the kind that is concerned with practical reasoning (btw, it’s just funny to call some departments strong in practical reasoning…). But I think this reflects the job market – the jobs advertised as philosophy of action seem to go towards the freedom of will side, whereas the other side seems to have more traction with the ethics/moral psychology jobs.

B
9 years ago

Very small groups of people, overwhelmingly from places high in the overall rankings. I’m having a hard time taking it seriously, even if it comes up with the right results. Phil physics is a case in point. 10 evaluators!

Justin Coates
9 years ago

I strongly suspect that the “top 9” does not exhaust all the schools that will be listed in the final report. The last specialty ranking includes Pitt (though not Chicago, which seems to me to be an oversight given the fact that one could learn a great deal about the philosophy of action from Leiter, Nussbaum, Vogler, Callard, Ford, or Laurence) along with many other deserving places as well (like Michigan, UNC, MIT, etc.). If this is right, we should wait till we have the full list (if there is a longer list) before evaluating it. If it’s anything like the specialty rankings from the previous report, I can see how one could reasonably argue about some of the groupings, but with at most a few exceptions (e.g., Chicago, Northwestern (Ebels-Duggan and White), Tennessee (Coffman, Archer, Palmer, and Garthoff all do good work on issues related to practical reasoning and action)), I don’t think there were many oversights concerning who was and who wasn’t included.

Of course, if this is the full list, then I agree, it overlooks those working on issues more directly related to practical reasoning and those working from a neo-Anscombian framework. Of course, if Leiter is giving us full lists and not just “teasers” then the ethics list, which only includes 11 places, seems even more incomplete.

anon vap
9 years ago

fwiw, I’d draw the distinction between:

(a) philosophy about what distinguishes (intentional, attributable) action from mere behavior, and what implications the answer has for ethics, metaethics, philosophy of mind, etc.
(b) philosophy about free will and moral responsibility, and especially what they require metaphysically.

These areas strike me as basically distinct (though some people–e.g. Helen Steward–argue against this). Apparently pace the specialty rankers, it seems way more natural to me to just call (b) “moral responsibility” or something and limit “philosophy of action” to (a), which captures the historical roots of the subdiscipline (as growing out of work by Anscombe and Davidson) and which includes not just Setiya/Thompson et al but lots of work by Mele, Clarke, and other more metaphysics-friendly types. But it doesn’t, and shouldn’t, include work on things like Frankfurt cases or the Consequence argument.

anonymous prof
9 years ago

I think that one thing this discussion reveals is how the choices made by the editor(s) of the PGR have profound consequences for which work is and isn’t valued. Yes, the specialty divisions are going to be artificial. But we can see that with slightly different divisions, various programs would fare far better than they do. If someone works on trolley problems or free will, this is going to matter more to a department’s showing up in specialty rankings than if someone is working out the differences between Davidson’s and Anscombe’s philosophies of action.

Martin Shuster
9 years ago

I don’t even know how to react to the set of comments above me.

Mitchell Aboulafia has done an invaluable service to the profession by showing how *stunningly* flawed the methodology of the PGR is (flawed far beyond what I even imagined, as someone who was already quite wary of the PGR), and people are discussing philosophy of action as the alleged outlier (and, yes, the PGR–as in many other areas–doesn’t properly track the *profession* — and, no, the missing pieces aren’t just the camps of Mele/Fischer/Pereboom and Thompson/Vogler/Setiya, but also includes important work by people influenced by European philosophical figures like Merleau-Ponty or Heidegger).

Talk about missing the forest for the trees.

Deborah Achtenberg
9 years ago

Only four out of thirty-four evaluators in political philosophy were women? That’s stunning. Especially in that field.

anonymous prof
9 years ago

I think it is worthwhile to take a particular case as an example of the more general problems with the PGR. Indeed, there are other approaches to philosophy of action that are not represented in the twofold division I have been discussing. But I think there is value to seeing that, even when we restrict our attention to analytic philosophy, the divisions make a mess of things. This is just one problem. There are problems about the number of evaluators and the selection of evaluators and many others besides. But forests are made of trees and I think we can attend to many issues at once.

Margaret Atherton
9 years ago

Only one woman out of sixteen on the metaethics panel? Startling on both counts.

ABC
9 years ago

Two things caught my attention: (a) the number of evaluators this time tends to be smaller than the number of evaluators in the previous PGR; (b) at least four people who signed the September statement acted as evaluators.

Kristina Meshelski
9 years ago

I was also very concerned about the gender ratio of evaluators in political philosophy.

anonymous prof
9 years ago

And in comes early modern…1 out of 18 women. This is just sad.

p
9 years ago

My guess is that perhaps many of those who declined to evaluate this year were women. There were around 6 or 7 for political last time, around 5 or so for early modern. If this is indeed the result of proportionally more women than men declining to evaluate, it is no doubt the result of the anti-PGR campaign fueled by Leiter’s blog/personal behavior and probably something BB as the new editor (or BL for that matter) could not do much about. Provided that they invited them to evaluate, of course. But then, one should not complain about this as somehow an intrinsic PGR feature as opposed to a contingent matter connected to recent “upheavals”.

Roberta Millstein
9 years ago

If you look at the post by Mitchell Aboulafia that Justin linked to above, you can see the drop in the number of evaluators more precisely.

Here is the link again: http://upnight.com/2014/11/28/not-with-a-bang-but-a-whimper-participation-decline-in-the-philosophical-gourmet-report/

p
9 years ago

But then this is a problem – if people refuse to participate, then the results are of course worse, since the more participants, the better the results (up to a point, of course, and all things being equal). So the problems that the PGR has or might have (depending on one’s opinion) are exacerbated by the very campaign of those people who complain about it.

Plouffe
9 years ago

Four out of 34 political philosophy evaluators were women? I wonder how many were Republicans.

anonymous prof
9 years ago

P, let’s say that your explanation is correct. The following points still seem true:
(i) As it stands, this round of the PGR not only reflects, but perpetuates the inequalities present in our discipline.
(ii) The fact that BL’s presence as editor has made women disproportionately unwilling to participate in the evaluations is reason enough for him to be unsuitable as an editor of the PGR.

I know several of the evaluators signed up for the gendered conference campaign. I wonder how many of them now regret agreeing to serve as evaluators?

Tim Kenyon
9 years ago

Like Martin Shuster, I incline to think the real story ought to be the methodological fiasco that this ranking comprises. Or, perhaps putting the same thing differently, the story ought to be the vast chasm between what the PGR actually does (for example, launder and enfranchise dubious perceptions of prestige under opaque conditions, and skew a job market) and what it’s supposed to do (chiefly, inform prospective students of where they will get a good graduate education in philosophy).

There’s something almost charmingly transparent about the occurrence, as an afterthought near the end of the long-ish interpretive guide section of the PGR, of the extra throwaway line to students: “Before choosing any program, of course, make sure that the faculty there are committed to training graduate students.” Why, yes! That is very fine advice for students to follow on their own time and with their own resources, considering that nothing in the PGR rankings themselves, specialty or overall, conveys this sort of information. The report employs exactly zero plausible methods for measuring it. The ambiguity and vagueness of “Philosophy of Action” is a genuine methodological issue, but by comparison seems a bit picayune — as if the ranking might be meaningful, but for such minor uncertainties.

Professor Noëlle McAfee wrote in a fairly widely-read correspondence: “the PGR is damaging to the profession because people treat it like it is actually meaningful.” I agree with that pithy and forceful summary, and hope, soon, to see less evidence of its truth.

p
9 years ago

anonymous: I do not know how to evaluate your claim (i). I think the inequalities are probably created and perpetuated by things other than the PGR, and I am really unsure how to measure the PGR’s influence on any of it. The PGR was, I think, becoming more and more inclusive of women in its editions up until now, and so I think that perhaps the refusal is in fact a disservice to the community too. If I know my participation can help, and I refuse because I dislike the editor, do I not contribute to the inequality? (ii) It was not BL’s presence in and of itself, but his behavior as described by others who explicitly connected it to his role in the PGR and actively campaigned against him and the PGR at the same time. Despite what you might think, it is not a generally received opinion (outside of certain blogs and groups) that this connection was appropriate. In particular, there was a move to re-describe Leiter’s obnoxious behavior in terms of sexism, and this, as a rumor, succeeded to some extent even if one might have very, very serious doubts about its validity. So yes, perhaps BL should not serve, but then he is stepping down, isn’t he?

Margaret Atherton
9 years ago

Actually 2 out of 18. Still pretty sad.

Tom Hurka
9 years ago

Tim Kenyon: I was on the PGR Advisory Board from its inception until this year, and near the beginning one issue the Board discussed was whether evaluators should be instructed to take into account the factors you mention, e.g. the atmosphere in a department for grad students, how available faculty are to grad students, etc. I supported the move, as did some others, for the reason you give: these factors are vital for the quality of a grad school education. Another group opposed it, arguing that evaluators don’t know enough about these factors to assess them accurately in distant departments but will probably rely on hearsay, and that group won the day. I don’t think their argument was unreasonable. It would be very hard to come up with an accurate assessment of atmosphere etc. at a whole range of philosophy programs. What then to do? One possibility is to give up any ranking whatever, because the most important thing can’t be assessed. The other is to rank what you can rank, and leave the other factors to prospective students’ campus visits, questioning of current students, etc. The PGR took the second route, and I don’t think that was unreasonable. Remember that the alternative isn’t no ranking or a perfect ranking; it’s the idiosyncratic recommendations of individual professors to their individual students, both about faculty quality and about atmosphere.

Tim Kenyon
Reply to Tom Hurka
9 years ago

Thanks, Tom. Naturally how one sees the PGR and its influence will depend not on the distinction between no ranking, a perfect ranking, and some ranking, however poor, but rather on the details: the complexity of the phenomena to be measured, the plausibility of the methods that purportedly measure them, the presentation of the results and their fidelity to the original phenomena, and the outcomes associated with uptake of the exercise in the discipline.

Mitchell Aboulafia
9 years ago

Just a quick comment regarding P’s observations about the participation of women in the PGR. In Political Philosophy, which has been under discussion, this time there were four women philosophers out of thirty-four, 11.8%. In 2011 by my count, there were six women out of 47, 12.8%. Long story short, at least in terms of Political Philosophy, the problem of the under-representation of women didn’t start with women backing out this year.

Eric Winsberg
9 years ago

The philosophy of physics rankings exhibit some strange features as well. For the most part they highlight the facts most folks would agree on. But one thing stands out as remarkably strange. In the 2011 rankings, the University of Sydney was unranked. Since then, the only change I am aware of is that Huw Price _left_ the department. But in the interim the department moved up into a ranked position. As far as I can tell, Dean Rickles (who is actually in HPS–but no matter) is now the only philosopher of physics in the university–and he is rather junior compared to Price. If this doesn’t highlight the possible idiosyncratic effects that small perturbations in the set of raters can have on the overall rankings, I don’t know what would.

another anon prof
9 years ago

You don’t have the facts right. Huw Price was on the Cambridge faculty list in the 2011 rankings: http://www.philosophicalgourmet.com/overall.asp. Sydney has several faculty working on philosophy of time and math at the border of philosophy of physics.

Ed Kazarian
9 years ago

Given ‘p’s constant reiteration of the claim that non-participation is the source of the ‘harm’ here, it feels important to remind folks that many of the methodological flaws that are being identified in the PGR have nothing to do with sampling rates. None of those would, as a result, be ameliorated by higher participation rates.

Fritz Warfield
9 years ago

I am an evaluator for the PGR for philosophy of action. I think of the field as wide and inclusive. I am intimately familiar with the work of those you mention (I have used some of it in seminars and in readings courses) and agree that much of it falls clearly under the heading of philosophy of action. What is your evidence that you and I “think of the boundaries” of the field differently?

Eric Winsberg
9 years ago

I see that I should have consulted the list. My understanding was that Price left Sydney in 2012 (see http://leiterreports.typepad.com/blog/2011/04/price-from-sydney-to-cambridge.html) but I guess he was removed from the Sydney list pre-emptively. Still, that leaves the question of why Sydney went from being unrated in 2011 to recommended in 2014 (since I don’t think any of the faculty who work at the border are new). In fact, it even better supports the claim that the change is due to a very small and irrelevant perturbation, since we can now guess what it was (though admittedly it makes the change less dramatic and surprising).

Matt
9 years ago

“In fact, it even better supports the claim that the change is due to a very small and irrelevant perturbation,”
Or, the younger people became better established, having published more stuff, and their earlier stuff having become better known. I don’t know the people or the field, but surely that’s a possibility here, too, and one we might well expect.

“who is actually in HPS–but no matter”
It’s explicitly said that people in HPS programs are included in the philosophy rankings when the departments work closely together, so “no matter” indeed.

Bharath Vallabha
9 years ago

Tom Hurka, how did you come to be on the advisory board of the PGR? If I am a prospective graduate student, this would be a relevant piece of information. Certainly the alternative isn’t no rankings or perfect rankings. But it is also not individual professors’ idiosyncratic opinions or a disembodied group voice that doesn’t make explicit the criteria by which its members are chosen. If the PGR board or evaluators are chosen in an informal, idiosyncratic way that might not capture the diversity of the profession, that surely affects how a student might understand the results. So if you can explain the process or mechanism by which you came to be on the board, that would be very helpful.

David Velleman
9 years ago

All this talk of bias in the PGR evaluator panels presupposes that there is something for the survey to be biased about, a reality that it can misrepresent. The same presupposition underlies criticism of the PGR’s “methodology” — as if there is something that the survey can mis-measure. This presupposition is false. As Jessica Wilson has pointed out in her excellent Facebook post, philosophy encompasses too many differences of approach, interest, method, and tradition to allow for a uni-dimensional measure of “quality”.

Any student who is considering how to spend the next 6-7 years of his or her life should be willing to spend a couple of days with departmental websites, PhilPapers, and Google Scholar, looking at faculty publications, following trails of citations, getting a feel for the self-presentation of various programs and checking their placement records. This sort of research will uncover not just who the recognized experts are but what style of philosophy they do and how their values are reflected in their departments’ curricula and self-descriptions. Compared with this sort of research, a reputational survey is nothing but a crutch — and a broken crutch, at that.

As many others have noted in various discussions on the Web, those who participate in the PGR are kidding themselves if they think they are performing a service for prospective graduate students. They are producing a distraction from the research that prospective students ought to do and easily can do with the resources of the Internet.

Mitchell Aboulafia
9 years ago

Tom Hurka, Thank you for sharing some of the deliberations of the Board with us. I would like to follow up on the problem that Board members said they would have assessing atmosphere and other factors. Although you supported taking these factors into account, you say, “Another group opposed it, arguing that evaluators don’t know enough about these factors to assess them accurately in distant departments but will probably rely on hearsay, and that group won the day.”

I’ve looked at Kieran Healy’s comments and data posted by Leiter. http://upnight.com/2014/11/19/the-incovenience-of-rigor-the-pseudo-science-of-the-pgr/ He tells us that, “Respondents love rating departments. A small number of respondents rated 25 departments or fewer, but the median respondent rated 77 departments and almost forty percent of raters assigned scores to 90 or more departments of the 99 in the survey.” We are also informed that in the U.S. the median was 81.

So here then is my question: has the Board discussed how much hearsay would be involved in anyone trying to rank 90 or more departments? The issue is not only knowing enough about individual departments, but having enough comparable information about them to be able to rank them.

p
9 years ago

Ed Kazarian: I never said that their non-participation constitutes any harm (and so I also did not engage in “constant reiteration” of that – unless saying something once constitutes constant reiteration). I said that IF they were invited but refused to participate, then one could raise the question (which I rhetorically raised) whether their refusal ALSO does not contribute to the inequality in the same way in which the PGR allegedly does. I did not explicitly say whether the answer is yes, but since I raised doubts that the PGR does it, it should be clear that I thought the answer is YES if the PGR does it (but since I doubt the PGR does it, they most likely don’t either). My worry was that by refusing to participate, the survey becomes less useful than it would be otherwise.
Bharath Vallabha: the methodology is well known, I thought. Experts nominate other experts so as to achieve a panel whose opinion reflects the current state of the art. You might think that Tom Hurka is not an expert in his field and that he is not competent to nominate others, let alone evaluate anything, but perhaps that view is not widely shared. I assume that in the beginning Leiter consulted his colleagues in the department and friends in philosophy whose opinion he valued, and then those helped further, and so on. Many of them value the work of people with whom they fundamentally disagree. So it’s a bit odd to accuse of idiosyncratic views a group that has a wide range of opinions.

I partly share and partly do not share D. Velleman’s view (though the way he states it isn’t the way I would state it – if there is nothing to measure, then why evaluate anything? why not publish everything that’s submitted to journals? who is to say what’s better? a random evaluator chosen by an editor who has been appointed by whom? – this seems to me as bad reasoning as any, but it’s also somewhat analogous, isn’t it?). Philosophy has indeed many approaches and traditions, and the PGR in no way measures them all. But it does measure much of what comes up once one is on the job market. I also think that the rankings do not capture the state of the art as they should (but then that is my idiosyncratic view of what matters, isn’t it? as it is of all the people who posted here). In any case, the choice of grad school is a complicated matter. When I was a prospective student I was simply not in any position to judge whether a paper X was better than Y in the way in which a reviewer working for a journal might be able to. In fact, I knew nothing of the approaches and traditions that you speak of in any greater depth than as slogans thrown around without much content. Would you trust the judgment of prospective grad students as reviewers for Phil Imprint – since they apparently can judge quality that well? If yes, then OK. But if not, then why would you trust her judgment on the right choice of grad school in terms of faculty quality? Is she not better served by doing the research you mention, having the survey at her disposal, and consulting her advisers? Isn’t more information better than less?

Just to repeat: I do wish there were more evaluators, and the rankings were more inclusive. But I can’t bring myself to see the PGR in the light in which a lot of people here do.

David Wallace
9 years ago

I evaluated for PGR 2014 – mostly for philosophy of physics – and, pace David Velleman, I don’t think I’m kidding myself that I’m participating in something helpful for graduate students, at least in my area. Obvious disclaimer from the start: I’m delighted that Oxford was placed first and I’ll leave readers to apply whatever discount weighting seems appropriate for that conflict of interest.

Ten people evaluated for philosophy of physics in 2014 (down from 11 in 2011, 1 woman each time, for those keeping count) and, to be honest, the list of recommendations they produced, at least for the top dozen or so, is pretty much what I’d have produced by myself. So one of my undergraduates wanting to do philosophy of physics wouldn’t have been helped much by PGR, though arguably if I had any particular idiosyncratic likes or dislikes they’d have been shielded from them.

Even five years ago, I’m not sure that would have been true. Being in the UK inevitably means my grip on the structure of US institutions, and the location of various US philosophers, is going to be a bit shaky, because someone’s physical location isn’t that salient if you only encounter them as a name on a paper. I travel quite a lot and by now have more or less worked it out, but that’s a relatively recent thing and to some extent turns on the fact that I’m in a privileged position with lots of opportunities to travel.

But more importantly, very few applicants for grad work in philosophy of physics have access to a research-active philosopher of physics. Only a minority of philosophy majors will – and plenty of people come into philosophy of physics from physics or maths backgrounds. Those people need to have some way to work out where to apply, and I disagree with David Velleman that a couple of days of research will do it. For a start, assessing the quality of someone’s work is pretty tricky if you’re only at undergraduate level – in philosophy of physics some of the best papers will be literally unreadable without the kind of math background that you’re likely to pick up only at graduate school. You may well not know which journals to look in (how many non-philosophers of physics know that Stud.Hist.Phil.Mod.Phys. is the place to publish anything specialist and technical?) or which archives to check (philosophy of science mostly uses philsci-archive.pitt.edu, philosophy of physics uses arxiv.org – but plenty of students don’t even know either exists). Citation rates aren’t that easy to find for people without public profiles on Google Scholar. (And they are themselves scarcely reliable.) Most importantly, how do you know who to research in the first place? You could, of course, look down the faculties of the most prestigious universities to find the philosophers of physics – and Oxford would do just fine that way – but UC Irvine ranks about 50th in the THE rankings, Western Ontario about 100th.

All in all, I suspect that a seriously diligent undergraduate with very developed research skills could produce a halfway reasonable list after several *weeks* of work. That is unrealistic as an expectation for prospective applicants trying to put together their application in the middle of full-time study, and in any case I have no reason at all to think that they’d produce something epistemically *more* reliable than PGR (they will have spent much longer on the task than a PGR assessor did but will have been hugely hampered by a lack of specialist knowledge up-front and a greatly reduced ability to judge the relative merits of specialist work).

The PGR speciality ranking for Philosophy of Physics can’t and doesn’t pretend to be a definitive scientific study of which universities have the best philosophy of physics groups or individuals. As has been pointed out at length, it’s not even clear that’s well defined. It’s an aggregate of informed opinion, from a moderate-size group of informed people. It doesn’t really offer anything to the prospective graduate student who has nine or ten* well-respected philosophers of physics on speed-dial. For those that don’t, it’s an approximation of that. It isn’t as good as the Platonic ideal of a resource for Philosophy of Physics admissions, but the thing about Platonic ideals is that they don’t actually exist. If and when an actually-existing rival resource for Philosophy of Physics turns up, I’ll weigh it on its merits against PGR. Until then, PGR is (a), for philosophy of physics, way better than any currently available alternative heuristic for the great majority of applicants, and (b) sufficiently good that I don’t feel an urgent disciplinary need to do better in the face of other pressures on my time.

Nothing in this comment speaks one way or another to the claim that PGR, while useful to applicants, is so deleterious to the profession in other ways that it should be abandoned. (I obviously don’t share that view but I’m not addressing it here.)

* nine well-respected philosophers of physics plus me!

Bharath Vallabha
9 years ago

David Wallace, thanks for your thoughtful comment. Can you explain the process by which you became an evaluator? Here is the reason I ask. I understand that the phil of physics rankings are the aggregate opinion of ten people. But in order to better understand the rankings, it is necessary to know what the ten people have in common such that they, and not some other ten people, are the evaluators. Is it that the ten people have a broad range of approaches to phil of physics, and so aim to be diverse in that way? Or is it that they share some common assumptions about phil of physics? I can believe that the ten people are great at phil of physics, but to someone like me who doesn’t specialize in the subject that provides no contentful information about what the ten people have in common. It would be helpful to know what criteria were used in selecting the evaluators. So if you can shed light on how you became an evaluator, that would be great.

David Wallace
9 years ago

Bharath: Berit Brogaard wrote to me and asked me to evaluate; I said “yes”. I don’t have any particular insight into the basis for the invitation.

Eric Winsberg
9 years ago

David: I don’t disagree with anything you say, and I certainly didn’t mean to be claiming that the evaluators in phil physics had done a shoddy job or that they shouldn’t have put more work into it. And I agree the list probably serves prospective grad students fairly well. But the people who *in fact* are more affected by these lists than prospective students are recent graduates/job seekers. And for them, having your department move from being on-the-list to not-on-the-list can make the difference between a career and no career. The fact that very tiny changes to the circumstances of evaluation can make that deal-breaking difference is a bit disturbing. Of course, I know what the reply is to this: “folks shouldn’t use the PGR for that purpose.” But…

Eddy Nahmias
9 years ago

As I’ve said in earlier discussions here, I think that PGR is better than existing alternatives and that PGR would be better off with a new editor (it clearly would have been better off this time with a new editor given the drop in evaluators, especially women). I have a question for anti-PGR people. If all talk of “rankings” were removed and the area specialty “information” was presented as “aggregated opinions of *some* respected figures who work in the area, broadly construed,” presented in some numerical form, wouldn’t that aggregated information provide a very useful supplement to the research students should be doing themselves and the individual opinions they get from their advisor(s)? Then maybe programs could be listed in a chart with all the numerical ratings of their areas so people could get some sense of the overall program. (I suspect if someone then aggregated all these area ratings, an overall rating would emerge that would look a lot like the PGR rankings, though maybe not if all the areas were weighted equally.)

The above represents the way I *try* to use the PGR with my students, though I’m sure I get influenced by the *rankings* more than I should, given my goal of using just information about aggregated opinions. I haven’t actually read the intro to the PGR in a while, but I suspect it *presents* itself in roughly this way, even if neither it nor its readers typically read the report in this way. But my question is why this way of understanding the area ratings is problematic. And if we could get some other useful sources of aggregated information (e.g., placement, climate surveys, attrition rates, breadth of course offerings, etc.), then these area ratings would just be one more useful bit of info.

Speaking for myself, I think the ratings of phil action and phil mind are certainly roughly right (if the elusive target is some vague mix of actual quality and perceived quality of the researchers in those areas), and they provide useful information for students and faculty advisors (and hence, I use the other area ratings to provide useful info to give my students advice).

Regarding the early discussion on this thread: Philosophy of action (incl. free will) notoriously has difficult boundaries, as indicated by the fact that grad students specializing in it can have a hard time pitching themselves for jobs that don’t explicitly list AOS in phil action–depending on their spin on the issues, are they M&E or Ethics or Phil Mind or none of the above or all of the above? My guess is that the ratings in PGR skew towards some areas in phil action more than others–having more raters would provide better aggregated information.

anonymous prof
9 years ago

Fritz: I have no evidence that you and I think of the boundaries of the field differently. That’s because I have no idea what your rankings are or the criteria you used to arrive at them. What I do have is the rankings that resulted from the input of all 10 evaluators. And it is pretty clear to me that it emphasizes certain kinds of work over others. This is reinforced by the descriptions of the specializations since practical rationality is, for some odd reason, grouped with normative ethics rather than philosophy of action.

My main point isn’t that I would supply a different ranking. My point is that the artificial way in which specializations are grouped has a negative consequence of elevating certain kinds of work and devaluing others.

As others have pointed out, this isn’t the biggest problem with the rankings. But it is a problem.

Anon faculty
9 years ago

I’ve seen David Velleman’s argument repeated several times, but it seems deeply flawed for fairly obvious reasons. Velleman’s argument hinges on two points: (1) that the PGR rankings aren’t just distortions, but instead fail to have any object; (2) that prospective graduate students can spend a couple of days researching programs and thereby make a good decision about which program to attend. These both seem false. Here’s why.

On (1): Everyone can agree that the PGR rankings are flawed in various ways and subject to a number of biases. But they attempt to measure the quality of the work being produced by various philosophers. As others point out, journals and tenure review committees do just the same thing. When Philosophy and Public Affairs displays obvious biases towards work on Rawls and his followers, when Philosophical Review emphasizes a certain sort of metaphysics and epistemology over social and political philosophy (or early modern philosophy over other historical areas), many of us see these as departures from the ideal of impartially measuring philosophical quality. We see these journals as failing to identify quality within certain areas/types of philosophy. And this isn’t a matter of corruption or maliciousness; it’s just that there are so many opposing conceptions of philosophical quality that it would be difficult to construct a journal that is in a good position to recognize all of them. But we don’t see these journals as therefore failing to have something to measure.

On (2): As David Wallace points out, this claim vastly overestimates the resources and capabilities of undergraduates. Let me speak from experience: I was an undergrad at one of the top-ranked liberal arts colleges; I took my PhD at a program in Leiter’s top 10; and I’m now tenured at a school ranked in Leiter’s top half. If I had followed David’s advice as an undergrad, and spent a few days poking around the websites of various programs and reading papers, I would have had absolutely no idea what I was doing. As a senior undergraduate, I had only the vaguest sense of how to discriminate better and worse philosophical papers. I wouldn’t have looked at a list of faculty or read a set of papers and somehow discerned that people like David Lewis or Harry Frankfurt or Tim Scanlon or David Chalmers were doing exemplary work. My training, as an undergraduate, largely consisted of reading the greats–I studied Aristotle, Plato, Hume, Kant, Nietzsche, et al, but probably read a total of 30 or 40 contemporary papers in the course of my undergraduate degree. This simply doesn’t put one in a position to make an informed judgment about the quality of various programs.

Aside from that, there’s the time commitment: suppose an undergraduate decides to look at just ten graduate programs, each with ten faculty, and she decides to read just one paper by each faculty member. These numbers are very small – realistically, many students would want to investigate many more programs than this – but we’re already at 100 papers. We’re really supposed to imagine that a busy undergraduate, striving not only to do well in courses, apply to graduate programs, take the GRE, perhaps write a thesis, perhaps work a job, etc., is going to have time to do that? And we’re supposed to imagine that, if she did, her judgments would be superior to the aggregate judgments of the PGR evaluators?

anonymous prof
9 years ago

I also want to clarify one of my earlier comments since it was criticized elsewhere. I said “The fact that BL’s presence as editor has made women disproportionately unwilling to participate in the evaluations is reason enough for him to be unsuitable as an editor of the PGR.”

BL’s choice to stay on as an editor this year is entirely his decision. But BL says again and again that this is all about the students. He says his primary goal is to create something valuable for students looking to enter grad school. Well, it seems pretty clear to me that his continuing as editor has made the product far worse. Not only are there fewer evaluators, but women have jumped ship in disproportionate numbers. My only claim was that, if his stated ends really are his ends, BL should (and should have) step(ped) down of his own accord. He should recognize that he is “unsuitable” because his past behavior now almost guarantees that the PGR is less valuable.

This is not to say that if there were more evaluators, everything would be just fine with the PGR. There are numerous problems that have nothing to do with the number of evaluators. But even fans of the PGR should recognize that this version of the PGR is far worse than past versions and BL is largely to blame for this.

another anon prof
9 years ago

The fact that there may be fewer evaluators this year doesn’t make the PGR less valuable, unless there is reason to think there are so many fewer that the results are not sensible. But so far there is no evidence of that, is there? When the attack on BL began in late September, he had already been editor of the current PGR for many months (maybe years, depending on how much work goes into preparing a new edition). It seems silly to just pretend he wasn’t the editor or that someone brand new could simply take over all the work. As Alex Rosenberg argued on this blog, there were also plenty of reasons to be skeptical about the grounds for ousting him.

Bharath Vallabha
9 years ago

David, so it seems like you said “yes” without any sense for who the other evaluators would be, or what criteria Brogaard is looking for in choosing evaluators, and so without knowing what kind of aggregate opinion you are contributing to. Weren’t you concerned that perhaps the other evaluators might be so similar to you that the rankings might not reflect diverse viewpoints, or conversely, that the other evaluators might be so different from you that all of you might not be evaluating the same things at all? Without having a sense for the criteria by which you were chosen, how do you know what you are signing up for?

P, I am not questioning Tom Hurka’s expertise. My point isn’t that the rankings are idiosyncratic. It is that without knowing by what criteria board members and evaluators are selected, I don’t know what to make of the results. I don’t know to what extent the participants fundamentally agree or disagree, or even what they agree or disagree on. I am assuming prospective grad students are in a similar position of not knowing.

David Velleman
9 years ago

I apologize for being unclear. I did not mean that undergraduates can read and evaluate research on their own. What I meant is that they can follow trails of citations, filtering for their interests along the way, until they get an idea of which work is generally regarded as worthwhile. Then they can do reverse searches to see who is citing that work. Then they can look at online syllabi of those people to see who they assign in their seminars. And so on. It’s the sort of thing we all do when starting to learn about a new topic if we don’t have someone to ask about what to read.

Literature searching of this sort will identify which departments have philosophers whose work in these fields passes peer review at selective journals, gets favorably discussed in work that passes peer review, and so on. A look at the placement records of those departments will identify which ones are successfully placing students writing in those fields. Taken together, these inquiries will tell a student which programs are gateways to careers in fields they want to study. Is there a more fine-grained reality that departmental rankings can be “certainly roughly right” about (as Eddy Nahmias suggests)? I don’t think so.

David Wallace: I take your point about philosophy of physics, but it is a very unusual sub-field of the discipline. For such special cases, the PGR is overkill.

anonymous prof
9 years ago

So of the 10 specialty rankings released there are 221 evaluators (if the same person shows up as an evaluator in more that one specialty I count each of these separately). By my count, only 35 of the 221 are women. That is a miserable 6.3%.

BL has spent time on his blog defending the inclusion of someone disciplined for sexual misconduct as an evaluator. I wonder if he will spend the same amount of time explaining or defending the lack of women evaluators.

anonymousplease
9 years ago

I wonder if he’ll spend the same time (any time) defending his inclusion among the evaluators of persons who have, in public petitions, defended the view that being gay should be a *fireable* offense for a philosopher. Does he really think it’s appropriate to have such persons evaluating the value of the work of folks, some of whom are gay and out enough to be known to be gay by evaluators who have publicly declared that to be acceptable grounds on which to fire a philosopher?

Joe
9 years ago

I think it’s extraordinary that there is still any argument over what sort of rankings would be most useful to prospective graduate students. Subjective “quality” impressions, even if they were gathered responsibly and from a diverse group of scholars, are nowhere *near* as useful as placement rankings. Surely, if a department is a good place to be a graduate student, that will in the first instance show up in its ability to place its graduate students. I can’t think of any respect in which a department is supposed to be “good” for a graduate student that would not in some way influence the department’s placement record (save possibly for climate issues, which can be invisible and require special attention). Moreover, placement data is what it is: hard data. I sincerely hope that Carolyn Dicey Jennings will have put an end to this nonsense before anyone gets the chance to do another “quality” ranking, and I strongly encourage undergraduate advisors to direct their prospectives to her project. Unfortunately, no one loves a prestige hierarchy like a philosopher, and I fear that we will be dealing with what is in essence a distraction for decades to come.

Tim Kenyon
9 years ago

Joe: “Surely, if a department is a good place to be a graduate student, that will in the first instance show up in its ability to place its graduate students.”

I agree with much of what you say, but I doubt this quoted bit and what follows on from it. People do say about their former programs things like, “It was an unpleasant environment in which to be a grad student, though they’ve been fairly successful at placing students in academic jobs.” It’s unlikely that these people have misunderstood the character of their own experience. There are a lot of things (many internal to the department) that affect whether a department is a good place to be a graduate student, and a lot of things (many external to the department) that affect the relative rate at which graduates get hired into academic positions. Seems to me they come apart fairly easily.

There are other very serious problems with a placement-based program *ranking*, to my mind, but I do agree that graduate employment information, academic and otherwise, should be generally available to prospective students.

p
9 years ago

Joe: I think this was the common ground, and something even BL emphasizes over and over again; in fact he pressed departments to publish this data (compare other disciplines in the humanities, where this data is largely unavailable). The problem is that it is not easy to come up with “rankings” based on placement, since it is a complicated issue how to evaluate it – we would need to know how many students came in, how many of them graduated and went on the market and in what way, whose students they were and if that had something to do with it (and if that particular faculty member is still around), what kind of jobs they got and in what kind of jobs the department manages to place its students, and so on. It looks like this should be the way to go, but each department is highly particular in this respect. For example, some departments place fewer people, but those they place get great jobs in certain particular fields. Other departments place more people, but in generally more teaching and community college jobs, often not tenure-track. Others place well, but only in certain kinds of places, say religiously affiliated ones. So you can see BL’s lower-ranked departments placing people better than higher-ranked ones because, for example, the lower-ranked department has one or two specializations in which it excels, whereas the higher-ranked one is just better overall (mainly in LEMM) but does not really compete that well.

Some departments also do not provide the best data even while they do provide a lot of data. For example, http://philosophy.emory.edu/home/graduate/Placement.html looks impressive, until you realize some of the jobs are not tenure-track or not in philosophy, and so the list is not well differentiated. It also does not include places comparable to Emory itself. On the other hand, some departments, say http://philosophy.ucr.edu/student-placement/ or https://dornsife.usc.edu/phil/placement-record/, place in some more “prestigious” (or comparable to themselves) places (though not only those by any means), but the former’s list for each year also includes people who had gotten jobs before (which is not a bad thing, of course), and neither of them lists those who got nothing other than part-time instructor positions (as I think Emory’s list does, though it does not say so explicitly) – if there are any such students, of course (I don’t know). Of course, it is unclear to me why they should list them – this could be a disservice to them!

So it is a complicated and delicate issue, and perhaps students are better served not by rankings of departments by these measures, but by considering them case by case in relation to other data and their interests and so on. So sometimes what looks like a good quantitative measure can be quite hard to “rank”. Still, you are right – a crucial piece of information. Well, so I think at least; others might disagree.

Jamie Dreier
9 years ago

David V., I can think of some programs that have really great metaethicists, who do first-rate work and would be terrific to study with but do not have a lot of citations in the literature. After a prospective grad student does her research, therefore, it would be wise for her to get some expert advice too, in case there are excellent matches that she’s overlooked. The reason I participate in the survey is the same as the reason I’d be happy to advise her if she asked me.

Anonymous prof: 35 out of 221 is 15.8%, not 6.3%. (I think you divided upside-down and lost track of the decimal.)

Anonymous prof
9 years ago

whoops. Sorry about the mistake. Don’t expect me to be evaluating Phil math anytime soon. Still a pretty dreadful percentage though.

grad
9 years ago

I think you are reading the quoted conditional backwards. That good placement and good experiences can come apart is irrelevant on the most natural reading.

Andrew Sepielli
9 years ago

Martin Shuster: I totally agree with your point about how those influenced by Merleau-Ponty, Heidegger, etc. often don’t receive the recognition their work merits. In a way, it is some consolation that this stuff is taken more seriously among some cognitive scientists; but of course, that just prompts the question of why the cog. scientists’ work isn’t being taken more seriously among some mainstream philosophers! But I think you’re being a little harsh on the other commenters. I don’t think most people are thinking “Oh, yeah, the phil of action rankings are a little surprising, but the rest of the PGR is unimpeachable!” I think people are using the phil of action rankings as an excuse to engage in a bit of casual intellectual sociology of the profession (or at least one part of the profession). It is what, in the days before the revolution, we used to call “fun”.

Anne Jacobson
9 years ago

There would have been one more female evaluator for the PGR, but an unfortunate crisis wiped out several days right before my evaluations were due in.

Mitchell Aboulafia
9 years ago

I have no doubt that those supporting the PGR here are well intentioned. However, there has been a failure to confront the seriousness of the methodological failings of the PGR, which then allows people to say: it’s better than nothing. First, we need to be clear that the PGR is not only a ranking of specializations, which has been focused on in this thread. It provides overall rankings of departments: the best in various countries. Overall rankings have been at the heart of the PGR from the get-go, and they continue to be central. If you think that they are not crucial to the project, consider Leiter’s first reports on the results of the 2014 PGR. Within hours of the deadline for this year’s survey, Leiter posted the following on his blog: “A few preliminary results from the PGR surveys: biggest upward movers in the overall rankings between 2011 and 2014.” Notice the language: “biggest upward movers,” and think about what kind of mindset is operating here. (Think of those shows and magazines on celebrities in Hollywood, etc.) Again, this is the first thing Leiter posted. The next post by Leiter was not on specializations. It was more on overall rankings, “Some more PGR ‘overall’ ranking results for 2014.” This was the next day. A day later we got, “More PGR ‘overall’ results: top 5 programs in Canada and Australasia.” All of this took place before any results were posted for specialty rankings. (And I might add, no information was provided on the number of people doing the rankings.)

I will repeat here what I said to Tom Hurka above, “I’ve looked at Kieran Healy’s comments and data posted by Leiter. http://upnight.com/2014/11/19/the-incovenience-of-rigor-the-pseudo-science-of-the-pgr/ He tells us that, ‘Respondents love rating departments. A small number of respondents rated 25 departments or fewer, but the median respondent rated 77 departments and almost forty percent of raters assigned scores to 90 or more departments of the 99 in the survey.’ In the U.S. the median was 81.”

Leaving aside all of the other methodological issues with the PGR for a moment, just consider this one set of numbers, and ask yourself, do you honestly think that the vast majority of evaluators are in a position to rank so many departments fairly and accurately (almost 40% ranked 90 or more departments)? And then consider how important the overall rankings are to the PGR.

To those who say, “But I only ranked a specialty,” I say: even if you only participate in the specialty rankings, you are still supporting this enterprise, and the overall rankings have been and remain crucial to the PGR.

Regarding specializations: I don’t doubt that those who have participated in the specialty rankings find the results basically on target. But of course this is part of the problem. When you employ snowball sampling, you are bound to build in biases that are not obvious to the participants, because so many belong to the same or overlapping networks. (See Zachary Ernst, “Our Naked Emperor: The Philosophical Gourmet Report.”) But there is another issue here. Take the discussion of the Philosophy of Physics above. It’s a relatively small field. There is no reason the evaluators would know how other specializations are faring. But if I told them that in 19th Century Continental Philosophy, five out of 22 evaluators were not experts in the field, and that Nietzsche, Leiter’s main interest, was badly overrepresented (eight Nietzsche specialists, against one on Kierkegaard and one on Marx), it should give them pause about assuming that other specializations are reliable. And if you go through the specializations, you will discover other serious problems, e.g., a lack of gender and ethnic diversity, as well as significant overrepresentation of graduates from certain Ph.D. programs. (Nine out of 16 evaluators for Metaethics in 2014 have their Ph.D.s from two graduate programs.) But if you were focused on your one specialization, and it happened to be a pretty small one, you might never know any of this.
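To make the sampling worry concrete, here is a toy simulation. Everything in it (the size of the pool, the four “traditions,” the 90% within-tradition contact rate, the seed group) is invented purely for illustration; it is not PGR data, just the textbook behavior of snowball samples in clustered networks:

```python
# Toy illustration (made-up numbers, not PGR data): snowball sampling from
# seeds in one "tradition" over-represents that tradition in the final sample.
import random
from collections import Counter

random.seed(0)

# Hypothetical population: 1000 evaluators split evenly across 4 traditions.
population = list(range(1000))
tradition = {i: i % 4 for i in population}

def contacts(person, within_prob=0.9, k=10):
    """Draw k acquaintances, mostly from the person's own tradition (assumed homophily)."""
    same = [p for p in population if tradition[p] == tradition[person] and p != person]
    other = [p for p in population if tradition[p] != tradition[person]]
    return [random.choice(same) if random.random() < within_prob else random.choice(other)
            for _ in range(k)]

# Snowball sample: start from a few seeds in tradition 0 and grow by referral.
seeds = random.sample([p for p in population if tradition[p] == 0], 5)
sample = set(seeds)
frontier = list(seeds)
while len(sample) < 100 and frontier:
    person = frontier.pop()
    for c in contacts(person):
        if c not in sample:
            sample.add(c)
            frontier.append(c)

print("population mix:", Counter(tradition[p] for p in population))
print("sample mix:    ", Counter(tradition[p] for p in sample))
# Tradition 0 typically dominates the sample despite being only 25% of the population.
```

The population is split evenly across four traditions, yet a sample grown by referral from seeds in one tradition ends up dominated by that tradition. That is the structural worry, and it arises independently of anyone’s good faith.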

Students looking at many of the specializations may be seriously misled, and this is not an acceptable form of collateral damage.

I understand people are busy and that the PGR may seem like a handy tool, one which merely gives prospective students an idea of what some people in some specializations think. But the PGR as it stands is not an innocuous tool of this sort. It can mislead students about what is possible or good for their academic careers. It allows departments to make assertions about their value based on a survey with a flawed methodology. It marginalizes swathes of the profession. It feeds a mindset that we should be resisting in philosophy, namely, that philosophy is about reputation and the goal is to climb the professional ladder to reach a top ten school. Okay, I will stop. (I’ve argued in detail on my blog about these and other issues.)

One last point: the alternative is not the PGR or nothing. The fact is that since the PGR was initiated, information on the web, and students’ ability to use it, have markedly improved. There is more information and it is more readily available. People are working on producing additional information, for example, placement records. Further, there is an alternative to the PGR: a comprehensive information web site with a sophisticated search engine. And it is clearly doable. Let’s get to work on this alternative, one based on data, and stop talking about how the PGR is better than nothing, which in all honesty has now become a hackneyed form of special pleading.

Roberta L Millstein
9 years ago

To follow up on Mitchell’s last point, here is an example of one such alternative, for philosophy of biology:

philbio.net

(this was among Justin’s “heap of links” not too long ago).

Bharath Vallabha
9 years ago

An issue that didn’t come up in the earlier discussion of philosophy of action: one view is that the traditional compatibilism debate is a dead end and that a focus on practical reason is a way of reframing it. So these are not two different topics, but different research agendas on how best to understand free will. I find the phil of action rankings confusing because it is unclear which of these research agendas they are talking about. Or are they saying that no matter which research agenda you care about, these nine schools are the best places to pursue it? It is unclear why the reader should believe this without knowing which research agendas the evaluators themselves prefer.

The same issue can be raised with every specialty ranking. Another example: if a person wants to pursue a broadly Wittgensteinian take on the mind-body problem, it is hard to see how NYU or Rutgers would be the best departments to go to for that. Even granting that people at those departments know a lot about Wittgenstein, knowledge isn’t enough for pursuing a research agenda; commitment to that way of thinking is also required. Are the phil mind rankings saying that no matter what research agenda you want to pursue, these are the best departments in which to do so? That’s hard to believe. It is also hard to see on what basis the evaluators can make a claim like that.

Ranking is both descriptive and normative. I have no problem with its being normative, involving what research agendas the evaluators would like to see pursued. But acting as if the rankings are mainly descriptive only leads to confusion, as well as bad faith.

anon faculty member
anon faculty member
9 years ago

Bharath: the obvious conclusion to draw from your (apt) observation concerning the mind-body problem is that the people who participated in the survey take, on the whole, a rather dim view of Wittgensteinian approaches, and (again on the whole) favor the kind of “problem-solving” philosophy that is most obviously associated with departments like Rutgers, NYU, or MIT.

anonymous prof
anonymous prof
9 years ago

So I’m somewhat of a realist when it comes to these things and I highly doubt that the rankings will go away altogether. I’m not even convinced that they should. But they can be dramatically improved. From what I have been reading, it seems like the following would be a start.

1) Increase the number of evaluators and the range of their backgrounds.
2) Choose evaluators in a more methodologically sound manner (snowball sampling is absurd!).
3) Increase the number and variety of specializations evaluated.
4) Totally eliminate the overall rankings!!!

j.
j.
9 years ago

one interesting alternative which is suggested by bharath’s comment at 64 is that the experts gathered for the purposes of the survey could produce some kind of description or report of their opinions (and the bases for them). if they’re experts, professionally trained, etc., then can’t they come to some kind of consensus? (doesn’t the rating process suggest the possibility of their doing so?) and if not a consensus on all points, at least a consensus about where their points of agreement and disagreement are?

descriptive opinion reports of this kind, including something like justifications, seem much more appropriate for philosophers talking about other philosophers’ quality as researchers and teachers.

and they would allow interested prospective students to understand something of the reasons for the rankings, and to make their own decisions accordingly. i may not have been a professional as a senior undergraduate, but i was certainly capable of distinguishing between approaches that were both (a) more or less current, and well-reputed, in the profession at large, and (b) more or less congenial to my own interests and talents, regardless of whether those happened to align with what philosophers selected as evaluators regarded as best for me to study on my way to becoming a professional. what bharath is saying about the philosophy of action breakdown seems to be an instance where actual discussion of evaluations, rather than numerical rankings, could have provided just the relevant sort of information to a student in a position similar to the one i was in, but interested in philosophy of action. likewise with, say, 19th century philosophy; if evaluators had to produce rough justifications for their evaluations which emphasized so-and-so’s influential work on german idealism, or so-and-so’s recent new work on connections between hegelianism and the american pragmatists, that could conceivably be quite useful to students. (even more so, say, if they’re told that the group of evaluators shows a notable bias toward specialization in nietzsche; even an undergraduate will be well aware that specialization tends to come with its affinities and anathemas, and generate its insights and its blind spots.)

what i am imagining here is not all that different from, say, annual year-end lists run by arts periodicals – music, books, etc. they list ‘bests’ by looking for the biggest vote-getters from critics invited to submit ballots/personal lists. but they pretty much without fail include writing from the critics involved, writing which aspires to help make sense of the rankings and of the things ranked.

are there strong reasons for thinking that numerical rankings by invited experts are in no need of, or would not be vastly improved by, being accompanied by explanatory/elucidatory ‘reports’ from the relevant experts about their rankings? are there strong reasons for thinking we can’t do the one (explain) if we’re already doing the other (ranking, rating)?

Bharath Vallabha
9 years ago

J, Great suggestion. I think what you say also brings out an important difference between a professor giving grad school advice to a student in person and PGR as it is now. In the in person situation, the student knows what kind of research agenda the professor has, and the professor often has a sense for not just what topics the student wants to work on, but also what kind of research agenda in that topic she wants to pursue. And given that the student is asking the professor, some shared sympathy in research agenda is assumed by both parties. But this is lost when there is just an impersonal aggregate of opinions as with PGR. The kind of explanation you mention would at least make it a little more like the in person case. It would make explicit what research agendas the PGR evaluators as a group are recommending, and so how exactly to understand the PGR rankings.

Shea
Shea
9 years ago

Since the PGR is pretty explicitly about the quality of research produced by the department, wouldn’t it make much more sense to develop a ranking system based upon publication records and citations? This would be as objective a measuring system as one could hope for. Why haven’t we been working on this?

There are probably several answers. First, philosophy doesn’t have a great citation network. This is not a legitimate excuse; it is a problem. It either means that people aren’t really dealing with the literature or that many articles fall outside an established theoretical framework such that their claims are difficult to put into contact with one another.

Second, the journal matters. The problem is that anyone who is opposed to the PGR will also likely be opposed to a ranking system based on publications that gives particular weight to the quality of the journal. If one objects to the notion that departments can be said to have quality, one will also likely be opposed to the idea that journals have quality. But journals clearly DO have rankings of quality in virtually every other academic discipline. Why would philosophy be any different?

Any methodological issues with the PGR are merely a symptom of greater methodological issues that face the discipline as a whole. We need more specialist journals and more rigorous requirements regarding citations in the peer review process. Then establishing some objective rankings regarding quality of research will be more feasible.

As a final note, I would just like to point out that to deny that there is any objective way to measure the quality of philosophical research is to rest the head of academic philosophy upon the chopping block.

p
p
9 years ago

I kinda dread the mindset displayed in Shea’s post.

anon faculty member
anon faculty member
9 years ago

Shea: there clearly is a hierarchy of journals in philosophy as well, and it is probably not too controversial to pick a list of journals that publish, on average, better work than others. The trouble is twofold: first, how to ‘rank’ the journals within that list; second, the ‘on average’ qualification. I have read work that I found utterly dreadful in journals such as Phil Review, and one can replicate such stories to such an extent that there is simply no safe or simple way to move from ‘venue of publication’ to ‘quality’. (Part of the problem is that at many highly regarded journals, it is the vote of ONE referee that makes or breaks a paper; needless to say, this allows for a lot of bias, subjectivity, etc.)

Things get even more complicated when you take into account the fact that it is much harder to publish in the most highly regarded journals if one has certain AOSs (e.g. phil science, history, political).

Shea
Shea
Reply to  anon faculty member
9 years ago

It’s hard for me to see how any of what you just said is in conflict with anything I said above. For the purposes of the envisioned ranking system, all we would need is a list of quality journals that publish quality papers on average. We’re talking about establishing some accurate statistical measure of the quality of the research produced by a department. Sure, there are going to be flukes, but the whole point of such a measure is that the flukes will be balanced out in the overall picture. I find it rather baffling that you started talking about making specific inferences about the quality of specific articles given their venue when I was clearly talking about using the flat frequency of articles in top 30/50 journals, etc. to evaluate the overall quality of the research produced by a department.

Regarding the last point about AOS… this seems simply to confirm what I was saying about the field needing more specialty journals and fewer general journals. Note that specialty journals will also tend to have a more consistent peer review process, because the reviewers will need to be selected from specialists and the editor will presumably need to be competent enough to know who the relevant specialists are, whereas in general journals the process of reviewer selection may be less rigorous.
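To make the proposal concrete, here is a minimal sketch of the kind of flat-frequency measure I have in mind. The journal list and publication records below are invented purely for illustration; a real version would of course need actual bibliometric data and an agreed-upon journal list:

```python
# Toy sketch (made-up data, not real bibliometric records): score departments
# by the flat frequency of their articles appearing in a fixed list of
# highly regarded journals.
from collections import Counter

top_journals = {"Journal A", "Journal B", "Journal C"}   # hypothetical "top 30/50" list

publications = [            # (department, journal) pairs over some window of years
    ("Dept X", "Journal A"),
    ("Dept X", "Journal B"),
    ("Dept X", "Journal D"),
    ("Dept Y", "Journal A"),
    ("Dept Y", "Journal D"),
    ("Dept Z", "Journal C"),
]

# Count each department's articles that appeared in the listed journals.
scores = Counter(dept for dept, journal in publications if journal in top_journals)

for dept, count in scores.most_common():
    print(f"{dept}: {count} articles in listed journals")
```

A real measure would also have to decide whether to normalise, for instance by faculty size or by subfield, since raw counts reward large departments; but those are choices that can be stated and argued over openly.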

Deborah Achtenberg
Deborah Achtenberg
9 years ago

Without wishing to denigrate the evaluators in the ancient philosophy category (I know a number of them and respect their work), the group is a narrow selection. Where are faculty such as Charles Griswold or David Roochnik from Boston University? Or Christopher Long from Pennsylvania State? Or Walter Brogan from Villanova? Or, for that matter, Martha Nussbaum from Chicago or Debra Nails from Michigan State? Not to mention thinkers who utilize postmodern approaches in the study of ancient Greek philosophy? I respect very much the students of Vlastos and of Vlastos’s students, as well as students of Terence Penner and of Alan Code. But are other traditions or cultures of study not worth representing? What am I missing here?

grad
grad
9 years ago

One thing that the specialty rankings don’t seem to capture is a move from, say, 2.75 in 2011 to 3.24 in 2014. I’m not sure if this is that important, but a .49 increase would be huge in the overall ratings. So it seems slightly odd that grad students are told to pay attention to the specialty scores in particular, when these scores are much more coarse-grained.