What Do PGR Evaluators Need to Know?


I very much doubt that I would be able to provide anything like reliable judgments of philosophical quality based on the names of individuals in faculties, without spending an enormous amount of time reading people’s work. Although I’ve been in professional philosophy for nearly ten years, and have gained at least some familiarity with a large number of philosophers, I think I know next to nothing about a large majority of the philosophers whose departments you’d be asking me to rate. At a minimum, I might be able to find and skim the CV of every member of a department in half an hour or so, but to employ any of my own philosophical skill to give anything like an expert opinion, I’d have to read and engage with people’s work. Since we’re talking about roughly a hundred departments, this represents a daunting task to say the least. (Even just skimming CVs, at 30 minutes per department, would take some 50 hours.) Or I could skip most departments, limiting my attention to those containing my friends and those colleagues I’ve interacted with enough to have an opinion about already; but I’d worry about selection bias, and even those departments are made up mostly of people I don’t know. I’m just not comfortable contributing a meaningful opinion about the quality of someone’s work without spending orders of magnitude more time than I could offer if I wanted to.

So writes Jonathan Jenkins Ichikawa (UBC) in his response to an invitation from Berit Brogaard (Miami) to be an evaluator for the Philosophical Gourmet Report (PGR), posted at NewApps. Invitations to evaluate departments for the PGR went out earlier today, according to a post by Brian Leiter at Leiter Reports. A few hundred philosophers are supposed to have received them. Perhaps some of the invitees would care to comment on the epistemic demands of the task.

UPDATE (10/30/14): Eric Schwitzgebel explains “Why I Will Be Contributing Rankings to the Gourmet Report.”

Michael Dickson
9 years ago

I got my invitation today. I will not be participating. I remember agreeing to do the evaluation around 10-15 years ago. I remember being horrified and sorry after I turned in my rankings. I wasn’t aware of specific errors of judgment that I had made, but even having spent an inordinate amount of time (much more than the 50 hours mentioned) studying the departments under review (not to mention worrying about the departments that were not under review but should have been), I felt certain that many of my rankings were at best capricious. I believe very strongly that the current model of evaluating departments — even for the limited purpose of providing guidance to potential graduate students (but let’s not fool ourselves into thinking that the rankings are used only for that purpose) — is fundamentally flawed and that perpetuating it is a horrible disservice to the profession and even more so to potential members of the profession.

There’s no perfect system of ranking, and it isn’t clear why a system is needed in any case, but the current ‘system’ (“you have been nominated by a member of the Advisory Board as someone in a good position to offer informed assessments…”) is not one in which I can reasonably participate, in part because I am quite certain that I’m not in a position to offer informed assessments, and I have a hard time seeing how anybody is.

One might argue that while individual assessments may not be entirely reliable and well-informed, the collective assessment is. I think there are good reasons to be very skeptical of this claim.

Jo Wolff
9 years ago

I declined for similar reasons. I stepped down from the Advisory Board this year, before the recent controversy, on the grounds that I no longer am able to follow work in Philosophy outside a very narrow range. The philosophers whose names I recognise outside my field may have produced excellent work when I was a student 30 years ago, but in the main I have no idea what they have been doing since. Nonetheless I was asked to continue to be an evaluator, which I declined.

Anon
9 years ago

The critics of the PGR are insufficiently cynical. The PGR measures perceived quality, not quality. It’s supposed to reflect the unreflective prejudices of well-regarded members of the profession, which is useful because job market success is largely determined by accidental factors having little to do with quality.

Why, when the purpose of the PGR is to help people on the job market, are the critics so preoccupied with the largely trivial question of *actual* rather than *perceived* quality?

(The same might be said about the value of the PGR to the profession as a whole. If you think this profession is generally about the promotion of real rather than perceived quality, perhaps you’re not cynical enough? Or dare I say, not philosophical enough?)

anongrad
9 years ago

Re: Perceived vs actual quality rankings
The Leiter rankings are not just a reflection of biased perceptions on the job market – they are a contributing factor to those biased perceptions. Being cynical is all well and good, but why should evaluators knowingly perpetuate such biases? Opting out publicly on the grounds that the rankings are poor indicators of actual quality (which is what search committees really want anyway) gives us a chance to remove at least one biasing factor from the equation.

JDRox
9 years ago

Yes, anon grad, but isn’t it better to have one public and regulated set of biases than for every department to have their own idiosyncratic biases of which most people are unaware? Basic human psychology tells me that departments, search committees, etc. will all still have opinions about which schools are better than others. Those opinions will affect hiring decisions, graduate school admittance decisions, grants, etc. By my lights, it seems much better to have one quasi-official opinion that is publicly known than to have it all be a big mystery. Focusing just on prospective graduate students, here is a simple argument supporting the PGR (or some ranking scheme) that I’ve never seen rebutted:

1. The quality of one’s graduate education depends in large part on the quality of one’s peers–the other students in the program.
2. The quality of one’s peers at a program will depend in large part on how hard it is to gain admittance to that program.
3. Hence, the quality of the graduate education one can expect to receive at a program depends on how hard it is to gain admittance to that program. (by 1&2)
4. How hard it is to get into a program depends on that program’s perceived quality.*
5. Hence, the quality of the graduate education one can expect to receive at a program depends on that program’s perceived quality. (by 3&4)
6. Prospective students have a vested educational interest in the quality of the graduate education they will receive (in addition, of course, to their job prospects, which points towards another more discussed argument).
7. Hence, prospective students have a vested educational interest in the perceived quality of philosophy programs. (by 5&6)
8. Prospective students also have a vested financial interest in how hard it is to get into a program (so they don’t waste money applying to places that are out of reach).
9. Hence, prospective students have a vested financial interest in the perceived quality of philosophy programs. (by 4&8)

* Essentially, schools admit the best students they can, and the best students apply to (and will choose to attend) the schools perceived to be the best. Hence, the best students will, in general, tend to go to the schools perceived to be the best. A school that is generally perceived to be bad just cannot have high admission standards (assuming they must admit some students from time to time). Likewise, a school that was perceived to be good but tried to have low standards for admission would fail: they would be flooded with good applications and so their admission standards would be de facto high.

Anon
9 years ago

I’m anon 8:59. I think this is a fair point, but it raises a strategic question. If there is a plausible and practical way to significantly reduce the unfair effects of prestige and bias then, yes, one wouldn’t want to voluntarily contribute to this biasing factor. But is that a real possibility?

To my mind, the primary problem with the attempt to reduce bias is that it disadvantages already disadvantaged students of ability equal to their peers. Students from low-prestige schools, particularly those at institutions with no grad departments, whose faculty are unable to give them helpful advice about how and where to apply, are greatly helped by the systematic information about bias that the PGR provides. Without the PGR or something similar, the default option seems to be a much more unequal playing field that strongly favors the students of prestigious departments and faculty over everyone else.

Anon
9 years ago

I suspect there’s some truth to the view that public, regulated bias can actually promote real quality. But there are some points in the argument that I’m uncertain about.

How much does peer quality depend on program competitiveness? The assumption is, first, that the best students will apply to the most competitive programs and, second, that the most competitive programs will select the best of their applicants.

On the first point: superb students sometimes apply to less competitive programs for a variety of reasons–to work with a particular faculty member, due to unusual program strengths in one area, for reasons of location, etc. Superb students might be skeptical of the reputations of the most competitive departments, with unconventional views about whose work is important. So, the most competitive programs have a greater draw, but it’s not obvious that that must include all of the best applicants.

On the second point: the generic appeal of prestige and competitiveness will increase applications of every kind: everyone will apply, even if the program’s not a good match with their interests. So it makes selecting the best candidates more difficult, because they have more applicants and a larger variety of *kinds* of applicants to choose from, who are harder to compare qualitatively across disparate areas, backgrounds, programs, etc. My general worry is that the more the applicant pool for a smaller number of slots increases, the more the final cut will be based on relatively arbitrary factors, not on quality.

So, even if all the best students apply to the most competitive programs, I’m not sure we should assume those programs will successfully identify and accept the very best candidates.

John Protevi
9 years ago

Anon at 6 writes: “Without the PGR or something similar, the default option seems to be a much more unequal playing field that strongly favors the students of prestigious departments and faculty over everyone else.”

On the contrary, it seems to me that the PGR increases the favoring of students from prestige schools. Here is a sketch of the “Moneyball” critique of the PGR:

A hypothesis: what the PGR has done is alert students with undergrad degrees from traditionally prestigious schools (Harvard, Yale … those with a correlation but not guarantee of high SES for their students) to graduate school market opportunities undervalued by the traditional prestige markers (NYU, Rutgers … — this is the “Moneyball” angle). This is open to empirical testing, but I would guess that the UG prestige-school percentage of grad students at NYU, Rutgers and other non-traditionally prestigious but high-performing PGR schools does not resemble today what it was in the pre-PGR days.

Now of course there is co-evolution here: the faculty and friendly admins at those benefitting from the PGR’s rankings could leverage that into more hires of those likely to further increase PGR ranking, and so on. Still, the above would be the outlines of the class structure / Moneyball critique.

The Moneyball critique complements two others laid out here: http://proteviblog.typepad.com/protevi/2014/10/three-critiques-of-the-pgr.html

anongrad
9 years ago

As Anon (7) points out, many students apply to less highly ranked programs on account of particular strength in certain areas of specialization. These programs will be harder to get into for students applying in that area of specialization, but less so for students applying outside of it. Nevertheless, departments with a concentration of experts in one or two areas of specialization will still draw top students, will still create high-quality peer environments, and thus will still produce high quality job applicants. But overall rankings run roughshod over the fact that many departments have special strengths, and that these special strengths matter for students (e.g. many departments that are low-ranked or un-ranked on the overall list are top-ranked in the specialty rankings). So if perceived quality is a function of the overall rankings, then if students care about the quality of their education, they should ignore the perceived quality of philosophy programs. These sorts of factors make the sub-disciplinary rankings more worthwhile, since those reflect actual departmental strengths somewhat more accurately (although biases will still be an issue there as well).

Also, I’m pretty sure that regulated, institutionalized biases are still bad for the profession insofar as they’re largely unjustified. What is good for the profession is reminding people that using heuristics regarding perceived quality (whether these heuristics are idiosyncratic or institutionalized) is a lousy epistemic practice when it comes to evaluating individuals and entire departments. That’s what I think Jonathan’s letter does.

JDRox
9 years ago

Maybe I should have been more careful to make clear that I’m just talking about patterns: of course some good students don’t choose the best overall school they get into, and of course some schools fail at identifying the best students, etc. As for schools with one unusually good specialty, that’s compatible with my argument, at least with minor modifications. A school with a great applied ethics program like Bowling Green has lots of good students working on applied ethics, good students that one could learn much from. Bowling Green has good applied ethics students because loads of students apply to Bowling Green to study applied ethics, etc. etc.