Appearance and Reality, Take 2 (guest post by Brian Bruya)

Yesterday’s post, “A Detailed Critique of the Philosophical Gourmet Report,” contained excerpts from “Appearance and Reality in The Philosophical Gourmet Report: Why the Discrepancy Matters to the Profession of Philosophy,” an article in Metaphilosophy by Brian Bruya (Eastern Michigan) in which various criticisms of the PGR were summarized. As noted in an update to the post today, Brian Leiter (Chicago), creator and former editor of the PGR, posted a response to Professor Bruya at his blog. Professor Bruya has authored a patient and measured response to Professor Leiter’s post, which appears below as a guest post*.


Appearance and Reality in Brian Leiter’s Attempted Refutation of my Critique
by Brian Bruya

Before I begin, I’d like to make it clear where my critique of the PGR is coming from. Leiter implies, as I note below, that I know very little about Analytic philosophy, and I fear that my defense of Continental philosophy may cause some readers to think that I have some kind of axe to grind against Analytic philosophy in favor of Continental philosophy. I’d like to make it clear that this is an internal critique. I do philosophy in the Analytic style and prefer it (but not to the extent that I think other ways of doing philosophy should be excluded). I was trained as an undergraduate at the University of Washington, which was then (and I think still is today) thoroughly Analytic. A significant difference between then and now at UW is that I happened to attend during a period when they had people also doing comparative work. Karl Potter (Ph.D. Harvard) did Indian philosophy, Arindam Chakrabarti (Ph.D. Oxford under Michael Dummett) mixed Indian philosophy into his straight Analytic perspective, Vrinda Dalmiya (Ph.D. Brown under Ernest Sosa) did something similar, and Chün-chieh Huang (Ph.D. University of Washington) did Chinese philosophy (actually, I think he falls more in the discipline of intellectual history, but his courses were in the philosophy program and he taught two courses as such).

There are many critics of Analytic philosophy from within Analytic philosophy, so my position as a critic is not unique. My perspective, however, may be unique in that I think that Analytic philosophy, in order to be as vibrant, thorough, and socially just as possible, needs to widen its perspective into non-Western philosophy (in addition to other methodological and textual resources). I make arguments supporting this position in my article “The Tacit Rejection of Multiculturalism in American Philosophy Ph.D. Programs” (Dao, 14 (3)) and in my edited volume The Philosophical Challenge from China (MIT Press, 2015). The reason I wrote the critique of the PGR is that I see a group of very intelligent and apparently well-meaning people involved in an influential publication that ultimately locks out this wider perspective from the field.

I give long, well-supported arguments demonstrating that, as it stands, the PGR is exclusive and so damaging to the field. If anyone thinks my facts are wrong or an argument is flawed, please point out the error and state how it affects the overall conclusion of unwarranted exclusivity in the PGR. Leiter, as we’ll see, does not do this.

Or perhaps you think the PGR should remain exclusive. I’ve seen people purporting to be professors comment that they trust the opinions of the PGR evaluators over their own. That’s fine, but Clarence Thomas’ opposition to affirmative action is not a reason that all African Americans should also be opposed to it. If you think that the field of philosophy should exclude traditions representing the cultures of nearly half of all Americans and the vast majority of people around the world, make your argument—but do it in an informed way, by engaging evidence and arguments that are out there already.

I undertook this project because I have a deep love for the field of philosophy and a profound respect for its ability to provide important insights into all aspects of the human condition. It seems counterproductive to systematically exclude insights from the traditions of 5/6 of humanity.

I also should say that I don’t have a bone to pick with the PGR, or with Brian Leiter, or with anyone associated with the PGR. This project did not begin with the question, “How can I bring down the PGR?” Rather, it began with the question, “Why is multiculturalism losing ground in philosophy when it is gaining ground in the rest of the academy?” The question is perplexing on many levels, not the least of which is that from the perspective of contributing to the mission of the wider university in terms of building diversity, understanding globalization, and creating interdisciplinary relationships, it only makes sense that philosophy programs would be considering bringing in more non-Western philosophy.

I began by examining a decade’s worth of data from Jobs for Philosophers and interviewing programs that had hired in non-Western philosophy. That showed me that it is actually not true that multiculturalism is losing ground in philosophy programs: there are robust advertising and hiring numbers in non-Western philosophy, and programs are recognizing its value to the program and to the university. But it also showed that the place where multiculturalism is losing ground is in philosophy Ph.D. programs. So I inquired into hiring mechanisms in philosophy Ph.D. programs, interviewing department chairs in a variety of different departments, including ones that have people in non-Western philosophy, ones that used to have people in non-Western philosophy, and ones that don’t and haven’t. I was able to uncover no structural impediments to hiring in non-Western philosophy. In fact, I found some warmth toward the idea and some interesting recollections about scholars who had worked with specialists in non-Western philosophy. Then I came across some claims that some departments appeal, officially or not, to the PGR when making hires and even tout their rank to administrations when seeking funding. So I turned my attention to the PGR. I have to admit that I was as surprised as anyone by what I found. I had trusted that these very intelligent people, many of whom I look up to and admire, had been using a sound methodology and that they had the best interests of the profession at heart. Because I still think the latter is true, I made the attempt at exposing the untruth of the former in the hopes of reform.

 

Now, if you were to write up a 34-page critique of an influential publication, creating a detailed, multi-layered but very clearly laid out argument, with 28 substantial footnotes, 38 supporting citations, and two appendices designed to further elaborate subtle points, the most devastating attack on your piece would be for someone to come along and show it to be shoddy work. This is the approach that Brian Leiter attempts in critiquing my article. The problem is that it doesn’t actually hit home in any of its attacks.

In what follows, I relate the appearance of shoddiness that Leiter creates, then provide the reality from my article, followed by a summary conclusion for each. The question to ask for each of Leiter’s claims, just as you would advise your students to do in an introductory critical thinking course, is whether his claims are made in well-formed, non-fallacious arguments and whether they are supported by evidence in the form of empirical support or citations from reliable sources. Let’s see how Professor Leiter does.

 

Appearance 1: “The article ignores the participation of the Advisory Board in producing the report for the last 15 years.” [This is a quote from Leiter’s attempted refutation. Each section below will follow this pattern.]

Reality 1: “Brian Leiter… handpicked his original slate of evaluators and has since asked them to recommend more.” (p. 657) [This and what immediately follows are quotes from my original article to demonstrate the falsity of Leiter’s claim above. Each section below will follow this pattern as far as possible.]

“Leiter’s claim [in defending the PGR on his blog] is that one self-referred person is qualified to select a slate of referees and that a portion of that slate (the “Advisory Board”) then recommends other referees.” (p. 665)

Conclusion 1: Appearance of shoddiness is factually incorrect. I do not ignore the participation of the Advisory Board. Leiter seems to take exception to the fact that I target most of my arguments at him rather than at the PGR as a publication. This is because, as I state in the article, the “Methods and Criteria” section of the PGR is extremely thin, not even mentioning, for example, the snowball sampling method. For this information, and other defenses of the PGR, one must refer to Leiter’s personal blog, which is what I did (and am now doing again).

 

Appearance 2: “Bruya asserts (674), falsely, that the ‘Metaphysics & Epistemology’ (M&E) category in the PGR includes 15 sub-specialties, more than ‘Value Theory’ and ‘History of Philosophy’ together. In fact, the M&E category has only 7 specialties listed, compared to 6 for Value Theory and 9 for History of Philosophy.”

Reality 2: “An area in the PGR is a general category under which various specialties are grouped. In his ‘Description of the Report’ (2011b), Leiter allows seven distinct areas for evaluators: ‘Metaphysics and Epistemology,’ ‘Science,’ ‘History,’ ‘Value,’ ‘Logic,’ ‘Chinese Philosophy,’ and ‘Other.’ In his ‘Breakdown of Programs by Specialties,’ Leiter (2011e) lists five distinct areas for programs: ‘Metaphysics and Epistemology,’ ‘Philosophy of the Sciences and Mathematics,’ ‘Theory of Value,’ ‘History of Philosophy,’ and ‘Other.’ For the purpose of statistical analysis of evaluator area and programs, I have standardized the areas, merging all into the following four PGR areas: metaphysics and epistemology (now including specialties in philosophy of science, mathematics, and logic, which, if not M&E specifically, are methodologically and topically closely allied), value, history, and other (including Chinese philosophy). One could argue that specialties in philosophy of science, mathematics, and logic should not fall under M&E. There is no reason to think, however, that logic, for example, should necessarily be grouped with general philosophy of science into a separate area. It is uncontroversial that many of the specialties of M&E, philosophy of science, philosophy of mathematics, and logic are core specialties of Analytic philosophy. Breaking them out into several more separate groups (as, for example, Kieran Healy [2012b] does) would not alter the conclusions of the arguments made in this critique.” (pp. 685-686)

Conclusion 2: The appearance of shoddiness is factually incorrect and distorts my actual argument. To restate: if one wants to provide a statistical analysis of the methods and methodology of the PGR, one first has to confront the fact that it uses two distinct regimes in categorizing specialties. In order to evaluate these two regimes in a unified way, one has to make a decision about how to unify them. One can create more areas or fewer. Healy created more, which is a legitimate move. I created fewer. But Healy’s conclusions and mine are essentially the same—that however you carve it up, the core fields of Analytic philosophy receive a positive bias in the PGR. As I quote Healy in the article, “MIT and ANU had the narrowest range, relatively speaking, but their strength was concentrated in the areas that are strongly associated with overall reputation—in particular, Metaphysics, Epistemology, Language, and Philosophy of Mind.”

 

Appearance 3: “These four divisions [Metaphysics & Epistemology; Philosophy of Science, Mathematics, and Logic; Value; and History] correspond quite well to the areas represented by about 95% of philosophers in the Anglophone world.”

Reality 3: “It is worth comparing the PGR’s list of philosophical specialties to those put out in a survey from the American Philosophical Association (2013), the largest society of philosophers in the United States. As I’ve already remarked, the PGR has the following number of specialties in each area: M&E—15, value—6, history—9, other—3. The survey by the APA was sent out by the executive director (Amy Ferrer) following the Eastern Division annual meeting (the largest annual meeting of philosophers in the United States) in order to evaluate the success of the meeting and how welcoming the climate was for underrepresented groups. In the demographic section of the survey, sixty philosophical specialties are listed. Compare this to the PGR’s thirty-three and you begin to see indications of exclusivity in the PGR. Using the PGR’s own way of grouping specialties into areas, and standardized as described in Appendix 2, the APA’s grouping would look like this: M&E—11, value—11, history—20, other—18. The differences are dramatic. No longer is M&E the dominant area; instead, history and other dominate, while M&E and value are equally sized minorities.” (674-675, n. 21)

Conclusion 3: Appearance of shoddiness relies on a specific factual claim (the 95% claim) that is unsupported and ignores evidence in the article to the contrary. The unsupported claim diverts attention from the thrust of the argument—namely, that the PGR is excluding a large portion of the philosophical community.

 

Appearance 4: “Buried in an appendix at the end, Bruya finally acknowledges conflating the divisions, with the explanation that, ‘It is uncontroversial that many of the specialties of M&E, philosophy of mathematics, and logic are core specialties of Analytic philosophy’ (686). This is, of course, revealing about Bruya’s biases, and his lack of understanding of ‘Analytic’ philosophy (he might talk to some philosophers of physics and biology to find out what they think of a lot of, say, contemporary metaphysics).”

Reality 4: [No critique of any part of the argument is offered by Leiter, so no quotation can be provided in response.]

Conclusion 4: Appearance of shoddiness is unsupported and insinuates that I lack a level of knowledge that is so common to the reader that Leiter need not even explain it. This is a poorly formed argument bordering on an ad hominem attack. We all have biases. This specious criticism from Leiter diverts attention from the fact that the PGR systematically excludes the opinions of philosophers whose biases differ from the biases built into the PGR.

 

Appearance 5: “Snowball or chain-sampling is a perfectly appropriate method of sampling when what you want is a kind of ‘insider’s’ knowledge.”

Reality 5: “Drağan and Isaic-Maniu 2012 (provides extensive citations of studies that have used snowball sampling); Atkinson and Flint 2001 (explains that snowball sampling is used primarily for qualitative research [e.g., interviews] and for research on the sample population itself); Biernacki and Waldorf 1981 (provides a case study of snowball sampling and the methodological issues encountered); Erickson 1979 (discusses the benefits and limits of snowball sampling; distinguishes it from other chain sampling methods); Coleman 1958 (examines snowball sampling and networks). Reading these articles, one realizes that the PGR actually does not use chain-referral [snowball] sampling in the standard way. To imagine the use of chain-referral sampling in the standard way, you have to imagine a population hidden to you—you want to survey the members of the population, but you can’t find them. First, you find one or two, then you ask them to identify more, then you ask the new ones to identify more, and so on. Since philosophers are easy to find, the only way Leiter did anything like snowball sampling is if he looked for, as he says, ‘research-active’ philosophers (see my Appendix 1). He would have identified a few on his own (using what selection criteria, we can only guess), and then he would have asked those ‘research-active faculty’ to identify others (again, by unstated criteria), and so on. But are research-active philosophers really so hard to find? Of course not—they are, by definition, published. One could conclude that Leiter’s snowball is not about finding a hidden population but about excluding a large portion of an otherwise prominent population, as we shall see. For an excellent overview of purposive sampling as a technique, including numerous examples from prior literature, see Tongco 2007.” (p. 661, nn. 6-7)
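To make the standard chain-referral procedure concrete, here is a minimal sketch of the textbook method (the toy network and the stand-in nominate function are hypothetical; this illustrates the generic technique described above, not the PGR’s actual process):

```python
def snowball_sample(seeds, nominate, target_size, max_waves=5):
    """Textbook chain-referral (snowball) sampling: start from a few known
    members of a hard-to-reach population and grow the sample wave by wave
    through each respondent's referrals."""
    sampled = list(seeds)
    frontier = list(seeds)
    for _ in range(max_waves):
        next_frontier = []
        for respondent in frontier:
            for nominee in nominate(respondent):  # ask for referrals
                if nominee not in sampled:
                    sampled.append(nominee)
                    next_frontier.append(nominee)
                if len(sampled) >= target_size:
                    return sampled
        frontier = next_frontier
        if not frontier:  # the referral chain has died out
            break
    return sampled

# Hypothetical usage; in practice nominate() would be an interview question.
network = {"A": ["B", "C"], "B": ["C", "D"], "C": ["E"], "D": [], "E": ["A"]}
print(snowball_sample(["A"], lambda p: network.get(p, []), target_size=4))
```

The design only earns its keep when the nominate step is the sole way to reach respondents; when the population is public, as with published philosophers, the waves do no discovery work and mainly determine who is left out.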

Conclusion 5: Appearance of shoddiness is vague and off-target. What does Leiter mean by “insider” and why resort to some insiders while excluding others? I explain the scope of the appropriateness of the use of snowball sampling and explain in detailed arguments why it is inappropriate for attempting to make the general conclusions that are made in the PGR. Leiter ignores these arguments (except for the one just below).

 

Appearance 6: “Bruya’s argument against it is silly: ‘The reason [snowball sampling] is used is as an expedient way to access a hidden population, such as social deviants (drug users, pimps, and the like), populations with very rare characteristics (such as people with rare diseases, interests, or associations), or subsets of populations associated in idiosyncratic ways (such as networks of friendships)….Philosophers are neither social deviants nor difficult to find, as every philosophy program’s faculty list is public information.’ (660-661)”

Reality 6: [No argument is offered by Leiter, so no evidence from the article can be provided in response, except to restate the article itself.]

Conclusion 6: Appearance of shoddiness is unsupported. Again, there is an insinuation that Leiter’s claim is so obvious that it need not be argued for. Leiter offers no evidence to support his claim that a particular argument is silly, again diverting attention away from the argument and away from the counter-evidence provided in the article itself.

 

Appearance 7: “Later in the article, however, Bruya acknowledges that, ‘We want experts to provide their opinions when expertise is required for a sound assessment, and we would not insist on getting a representative sample of all such experts. We see this all the time in academia. We have PhD committees, tenure review committees, grant committees, and so on, which are formed for the purpose of providing expert evaluation. And for none of these do we insist on getting a representative sample. (667)’ Given this admission, one might wonder what all the fuss is about?”

Reality 7: “So, when the PGR draws up a slate of more than five hundred specialists, some three hundred of whom respond, why should we not consider it another example of an academic expert committee—a large and, seemingly, diverse one at that? We have already covered part of the reason—namely, the introduction of bias into the selection process. But why is risk of bias unacceptable in the PGR and not on committees that are so much smaller (and thus even more subject to bias)? First, we have to distinguish between the two different kinds of committee just mentioned. One was the medical-expert kind of committee that is evaluating empirical evidence to offer recommendations according to stipulated criteria. That, of course, is not happening in the case of the PGR. There are no stipulated criteria, so one cannot regard a committee, however large, as offering any sort of valid empirical evaluation. Thus, the PGR expert committee is not comparable to a medical-expert committee. The second kind of expert committee is the referee kind, which involves judging the academic merit of a scholar or a scholarly piece of work. We all know that such judgments are naturally biased and that a submission that is accepted by one journal or press could have been rejected by another of equal standing. The simple fact is that in the world of academic publishing there is no better alternative to this type of committee. One can’t send every article or book manuscript on epistemology to all, or even to a statistically significant random sample of all, working epistemologists. The logistics and the workload would be impossible. We rely, instead, on ad hoc arrangements as a necessary expedient. If we accept bias in academic committees because there is no better alternative, why not do the same for the PGR? The reason is that the logistics are entirely different. The PGR survey is undertaken only once every few years, and the online survey already exists. There is no practical impediment to moving to a valid sampling procedure. Perhaps that point came too quickly. The reason that the PGR should not use an ad hoc committee of expert evaluators, even though such committees are often used in academia, is that it does not need to. It could just as easily use a valid sampling procedure. Using a nonrepresentative sample and then generalizing from it is misleading. As quoted above, the PGR says: ‘This report ranks graduate programs primarily on the basis of the quality of faculty. In October 2011, we conducted an on-line survey of approximately 500 philosophers throughout the English-speaking world’ (Leiter 2011b). There is no reason for anyone reading this claim to suspect that the sample is not representative of the entire population of working philosophers or therefore to suspect that the conclusions drawn from the sample cannot be generalized across the entire population of philosophers. And yet such a supposition would be flatly wrong. One must again attend to the fact that the sample used by the PGR is as notable for those that it excludes as for those that it includes. The simplest thing for the PGR to do to improve its validity would be to open up the evaluation pool to anyone listed on a philosophy program’s faculty webpage. Given the electronic resources that Leiter has already mastered, getting the word out would be neither difficult nor time-consuming.” (667-668)

Conclusion 7: Appearance of shoddiness falls flat. The objection ignores the argument that answers the very question it asks, again diverting attention away from the point of the argument itself. The fuss is about the systematic exclusion of 99.5% of all philosophers from participation in the PGR (see article for substantiation of this statistic).

 

Appearance 8: “In an unrelated effort to show ‘bias,’ Bruya asserts that there is a category of philosophers who are ‘Methodological Continentalists’ (665-666) which would encompass programs like DePaul, Duquesne, and Emory (666). SPEP folks have long maintained, of course, that they are insulated from normal standards of philosophical scholarship because there’s something putatively distinct about the work they do that makes such standards irrelevant. Bruya is entitled to endorse that myth. But what is, again, pure fabrication is to assert that ‘Leiter refers to this brand of philosophy in his own published work,’ noting the introduction to The Oxford Handbook of Continental Philosophy I wrote with Michael Rosen (666). Bruya gives no page reference, because there is none that would show that we recognize something called ‘Methodological Continentalists’ represented by departments like DePaul, Duquesne, and Emory. How such a naked fabrication got through the peer review process is, again, mysterious.”

Reality 8: Mea culpa. I should have given page numbers. Here they are: pp. 2-4. And here is an actual quote from my critique that is more relevant to my point than the presence or absence of page numbers:

“In contrast to Leiter’s definition of ‘Continental philosophy’ in the PGR, Michael Rosen (1998), coeditor with Leiter of the Oxford Handbook of Continental Philosophy (2007), describes the tradition explicitly in terms of methodology. In the first few pages of Rosen 1998, he highlights four of what he calls ‘recurrent issues’ that define the field, each of which has a core methodological component: (1) the method of philosophy; (2) the limits of science and reason; (3) the influence of historical change on philosophy; and (4) the unity of theory and practice. A quote from Leiter and Rosen’s Introduction to their handbook states the point clearly: ‘Where most of the Continental traditions differ is in their attitude towards science and scientific methods. While forms of philosophical naturalism have been dominant in Anglophone [Analytic] philosophy, the vast majority of authors within the Continental traditions insist on the distinctiveness of philosophical methods and their priority to those of natural sciences’ (2007, 4). This is in contrast to Analytic philosophy, which often sees its methods as consistent with, and on the same level as, those of the natural sciences.” (pp. 665-666, n. 13)

Conclusion 8: Appearance of shoddiness is factually incorrect. The lack of a page number does not equal lack of evidence. The evidence is there and more to boot. Further, Leiter uses the rhetorical device of inflammatory language to divert attention away from the actual argument being made. First, he says that I “endorse a myth” in reference to another group of people. I make no reference to such a group, and Leiter provides no evidence of any link between what I claim and what they claim. Second, he claims that my citation is a “naked fabrication,” which, while factually incorrect, also inflames the reader to indignation and diverts attention away from the actual arguments made in the article.

 

Appearance 9: “Bruya repeatedly misrepresents Kieran Healy’s research about the PGR; readers interested in Prof. Healy’s views can start here [link provided].”

Reality 9: “Kieran Healy (2012a) did an analysis of the overall ranking of programs by breaking the evaluators into categories according to specialty. He found wide variation from one specialty to another in their rankings for most of the programs. See his third and fourth figures.” (p. 673, n. 20)

“Healy (2012b) presents an instructive way to visualize this for the 2006 PGR, categorizing the various specialties and areas into twelve of what he calls ‘specialty areas.’” (p. 675, n. 22)

“This can be seen clearly in Healy’s (2012b) visualization for the 2006 PGR mentioned in the previous footnote. Each program is represented by a variable-size pie chart, with each wedge representing a category of philosophy (groupings of the thirty-three PGR specialties). Five wedges represent M&E specialties, five history, and two value. Scanning the programs from top to bottom in the figure, at least four out of the five M&E wedges for the top programs are near the maximum size, until one gets to #10 (not counting numerical ties), Harvard. The most revealing is Australian National University (ANU; the PGR has an international ranking as well as a national ranking), which ranks above Harvard, and has sizable wedges in the five M&E categories, sizable wedges in ethics and political philosophy, and no visible wedges at all in the five history categories—proof that one can do well in the rankings relying on M&E and absent history. Georgetown is nearly a mirror image of ANU, with particular strengths in four of the five history categories, along with ethics and political philosophy, but weak in all five M&E categories. Georgetown winds up much farther down the list—#57 (again, not counting ties)—evidence that one cannot do well in the rankings without strengths in M&E, and evidence that strengths in history guarantee nothing. Healy comments, ‘MIT and ANU had the narrowest range, relatively speaking, but their strength was concentrated in the areas that are strongly associated with overall reputation—in particular, Metaphysics, Epistemology, Language, and Philosophy of Mind.’” (p. 678, n. 23)

Conclusion 9: Appearance of shoddiness is unsubstantiated. Leiter makes the accusation that I repeatedly misrepresent Healy’s research without substantiating his claim. Referring the reader to another webpage which also does not address the issue does not amount to substantiation of such an inflammatory claim. Again, this kind of rhetorical misdirection diverts attention away from the arguments made in the article, attempting to also reduce the reader’s opinion of the original author by suggesting that the reader need not even continue pursuing the subject.

 

Appearance 10: “Bruya’s main methodological suggestion (681 ff.) is to aggregate scores in the specialty areas for overall rankings. The Advisory Board discussed this in past years. Since there is no way to assign weights to the specialty areas that would not be hugely controversial and indefensible, the PGR has never adopted such an approach. Bruya has no real solution to the problem (though one may rest assured Chinese Philosophy will count for more!).”

Reality 10: “There are any number of ways that the specialty rankings could be aggregated, but the obvious and simplest way would be to simply sum them for each program.” (p. 681)

“It has been demonstrated above that the PGR uses the dubious method of asking experts in narrow fields to evaluate the overall quality of programs. As I mentioned, there is a valid way to aggregate such information, which is to take the specialty scores that are done individually for each program by small panels of experts within specific specialties and then simply add them up. The program that scores highest for the sum of all ranked specialties gets the highest overall score. I undertook such a mathematical aggregation, taking all the specialty scores for each program, as provided by the PGR, summing them for each program, and then ranking the programs accordingly. The difference between the overall ranking and the mathematically aggregated ranking is quite large, with the average change in rank being four spots (Table 2).” (p. 671)

“If one were to object and say, well, the mathematical aggregation is so crude, it doesn’t account for the size of departments, for focused strengths, and so on. Well, neither does the overall ranking, which has no modalities at all and is just a black box that spits out a number with no rhyme or reason.” (673)
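The summation these passages describe is mechanical enough to state in a few lines. A minimal sketch (the program names, scores, and the “overall” ranks below are hypothetical placeholders, not PGR data):

```python
def aggregate_ranking(specialty_scores):
    """Sum each program's specialty scores and rank programs by the total."""
    totals = {prog: sum(scores.values()) for prog, scores in specialty_scores.items()}
    ordered = sorted(totals, key=totals.get, reverse=True)
    return {prog: rank for rank, prog in enumerate(ordered, start=1)}

# Hypothetical inputs for illustration only.
scores = {
    "Program A": {"ethics": 4.5, "epistemology": 4.0},
    "Program B": {"ethics": 3.5, "history": 4.5, "Chinese": 4.0},
    "Program C": {"epistemology": 5.0, "mind": 4.5},
}
aggregated = aggregate_ranking(scores)  # {'Program B': 1, 'Program C': 2, 'Program A': 3}

# The article's comparison: average absolute change in rank between this
# aggregation and the published overall ranking (here, a made-up one).
overall = {"Program A": 1, "Program B": 2, "Program C": 3}
avg_shift = sum(abs(overall[p] - aggregated[p]) for p in overall) / len(overall)
```

Whatever one thinks of unweighted summation, the sketch shows there is nothing technically demanding about it, which is the point of Reality 10.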

Conclusion 10: Appearance of shoddiness is factually incorrect. Again, Leiter ignores what is plainly in my argument: simply sum the specialty scores, no weighting necessary. There is more that can be said about this process, and I address more subtleties in my argument. In other words, I do provide a real solution. It’s just that Leiter pretends it isn’t there and attempts to divert attention away from it. Further, in saying that this is my “main methodological” suggestion, Leiter implies that this is the only one that really matters. This one matters and so do the other four, all of which Leiter ignores.

 

Appearance 11: “I predict, with confidence, that no changes to the PGR methodology are likely to result from this very confused critique. I also trust the journal Metaphilosophy will withdraw the article in its entirety given the fabrications, and subject a revised article to a more serious peer-review process, to insure the final version is not so obviously shoddy.”

Reality 11: [No argument is given, so no evidence from article can be offered in response, except for the article in its entirety.]

Conclusion 11: Appearance of shoddiness is unsubstantiated. This is another inflammatory claim for rhetorical effect. The claim that the article is “confused” suggests that it offers nothing of worth and is not worth wasting even a moment reading. The feigned confidence gives the reader the impression that Leiter is an authority on the matter and so need not be questioned.

 

Overall conclusion. The appearance of shoddiness of my critique of the PGR is entirely unsubstantiated and uses the rhetorical device of inflammatory language to divert attention away from the arguments made in the article. Not one single argument receives serious attention.

(image: detail of Frank Stella, “Nunca Pasa Nada”)


UPDATE (12/17/15): David Wallace (Oxford) presents a critique of Bruya’s arguments here.

anon grad
8 years ago

Having followed a bit of the exchanges about the PGR and methodology, and understanding a bit of phil sci and statistical methodology, I’m inclined to find this recent paper very helpful. That said, I find Appearance 10 and Reality 10 a bit puzzling. The objection appears to be to the methodology of ranking that is proposed. The response is to appeal to summation, and then to claim: “If one were to object and say, well, the mathematical aggregation is so crude, it doesn’t account for the size of departments, for focused strengths, and so on. Well, neither does the overall ranking, which has no modalities at all and is just a black box that spits out a number with no rhyme or reason” (673).
Pointing out that Leiter’s methodology is flawed doesn’t do much to salvage aggregation as the proper approach. Indeed, presuming arbitrariness about how the categories of specialties are selected, simple summation will build this arbitrariness into the outcomes. It seems to me that genuine weighted rankings could reasonably be argued about and that this is standardly done in empirically informed value theory as well as in social science.
Now, getting the rankings right is a question about value theory, but that can’t be settled by mere issues in empirical methodology, so hard arguments are going to have to be made about how to do that, and I don’t have suggestions here.

One other complaint. I’m not sure that framing debates in terms of “appearances” and “realities,” in which every reality happens to align with one’s own view (sometimes about debatable issues rather than ones where one is directly misrepresented, as Leiter appears to have done frequently), avoids “mere rhetoric[].” I think this lends the air of “pro-PGR” v. “anti-PGR” gamesmanship to a lot of these debates.

David Wallace
8 years ago

I started to write a blog comment on my problems with the quantitative methodology of the paper, but it got out of hand. So here’s a short note: https://dl.dropboxusercontent.com/u/8561203/bruya%20critique.pdf

The conclusion:
A large fraction of the “data-driven” part of Bruya’s paper is open to severe criticism on methodological grounds, quite apart from one’s assessment of the more qualitative issues. The severity of the criticisms is such that it’s very hard to see the paper passing peer review in any journal of the quantitative sciences. Indeed, the first and most severe criticism in this note – the silence, in the main part of the paper, about the reclassification of the PGR “Science” category as M&E – would in other contexts be troublingly close to academic malpractice.

I don’t use that term lightly and I don’t intend any accusation of malice to Professor Bruya. But as a discipline Philosophy needs to be extremely concerned if it allows publication of material that uses the methods of other academic disciplines but which fails to pass the basic methodological standards of those disciplines.

ejrd
Reply to  David Wallace
8 years ago

David, are you calling for a retraction of the article from the board at Metaphilosophy? Your claims seem to me strong enough, if true, to warrant such a move. However, I guess I want to know what you think should be done (if you are convinced that the paper represents gross negligence). I’m not convinced by your argument but am open to hearing more.

David Wallace
Reply to  ejrd
8 years ago

“Gross negligence” is more morally loaded than I’d want. I’m not accusing Professor Bruya of bad faith, and philosophy lacks the disciplinary norms of data-driven subjects, which means that the legitimate expectations on researchers are less clear. But I do think something odd must have happened at the journal, and I’m concerned that philosophy doesn’t always have good practices when it comes to making sure technical methods from other disciplines get properly peer-reviewed by people with the relevant skill set.

Am I calling for a retraction? That sounds too confrontational. I’m pointing out some severe methodological problems in a published paper in the hope that the reasonable and professional people involved with the paper’s publication (not least its author) will act sensibly in light of those problems.

Matt
Reply to  David Wallace
8 years ago

That’s really helpful, David, and seems to me to be a pretty seriously damning analysis. Thanks for posting it.

WP
Reply to  David Wallace
8 years ago

Thanks for posting this; it’s really helpful.

Looking back at the paper, it really does give the impression that the areas discussed are the ones used by the PGR—the ‘Area Dilution’ section begins with “Consider the areas listed by the evaluators as their own areas of research…” That’s really unfortunate.

As far as the ‘area dilution’ point goes, I do think he’s right that he could have made the same point without merging the areas. Bruya’s goal is to look at whether people working in areas with certain methodological norms are significantly influencing the evaluations of areas with different methodological norms, and it seems true that there are significant norms shared by “M&E” (metaphysics, epistemology, language, mind, philosophical logic, action, and religion) and “Science” (decision, rational choice, and game theory; mathematical logic; general phil science; phil of physics; phil of biology; phil of social science; phil of cog sci; phil of math) that aren’t shared by value or history. I wouldn’t expect someone working in both epistemology and decision theory—or mind and phil of cog sci, or philosophical logic and mathematical logic—to be importing a distinctive approach to one of their fields in the way someone working in both M&E and ethics might.

I think you may somewhat misrepresent Bruya’s goal with the aggregate rank comparison. He says: “Still, let me state clearly what is wrong. The rankings of the PGR give the illusion of a kind of numerical precision, an empirical toehold in a subjective world of judgment. But is MIT ranked seventh or fourteenth? Is Boston University thirty-seventh or forty-fourth? Is UCLA eleventh or nineteenth? Is the University of Pennsylvania twenty-third or twenty-ninth? Is Notre Dame eighteenth or fourth? … If one were to object and say, well, the mathematical aggregation is so crude, it doesn’t account for the size of departments, for focused strengths, and so on. Well, neither does the overall ranking, which has no modalities at all and is just a black box that spits out a number with no rhyme or reason.” (672)

“The PGR rank isn’t any *more* methodologically sound” does seem like a legitimate response if the point is to show that an equally valid way of getting final scores gives us different results, with a particular effect on certain subdisciplines. I think the way students commonly interpret the PGR, unfortunately, does treat a difference of 4 ranks as quite significant. It does seem like different ways of aggregating (that look more like what evaluators might have in mind) would better serve this point.

It would be interesting to ask evaluators to assign weights that different specialties should contribute to total rank and then do aggregate ranks based on the average (?) weights. My dream is to see rankings with sliders that let users change the weight given to different factors, as in the sketch below. I think anyone offering rankings has a duty to try to help students see how much they can be affected by weighting things differently.
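The slider idea is easy to prototype. A minimal sketch (hypothetical weights and scores; one assumption here is that weights are renormalized over the specialties a program is actually ranked in):

```python
def weighted_total(specialty_scores, weights):
    """Combine one program's specialty scores using user-chosen weights,
    renormalized over the specialties the program is actually ranked in."""
    relevant = {s: w for s, w in weights.items() if s in specialty_scores}
    total_w = sum(relevant.values())
    if total_w == 0:
        return 0.0
    return sum(specialty_scores[s] * w for s, w in relevant.items()) / total_w

# Moving a "slider" is just changing one weight and re-ranking.
weights = {"ethics": 1.0, "epistemology": 1.0, "history": 1.0}
program = {"ethics": 3.5, "history": 4.5}
print(weighted_total(program, weights))  # 4.0
weights["history"] = 2.0                 # slide history up
print(weighted_total(program, weights))  # ~4.17, pulled toward history
```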

In general, though, it seems like Metaphilosophy wasn’t prepared to find a qualified reviewer for this paper. I’m bummed.

Mitchell Aboulafia
Reply to  David Wallace
8 years ago

I believe that “Nameless Grad” is correct: the current specialties debate can actually sidetrack us from more basic flaws in the PGR. Nevertheless, I can’t let Wallace’s claim that the PGR tracks the PDC data “moderately well” go unchallenged.

First, based on Wallace’s own chart, using Philosophy Documentation Center data, 30% of philosophers are in Value areas, while only 18% of the PGR categories are in this area. On the other hand, 9% of the PDC philosophers are in Science, while 24% of the PGR categories are in Science. This does not strike me as tracking the demographics “moderately well.” But notice that Wallace is using the number of philosophers in various specialties for the PDC data, but for the PGR he is using the percent of categories in the PGR. If we look at the number of evaluators in certain areas, and not only the categories, it’s clear that there are significant discrepancies between the PGR and PDC data.

Using Leiter’s own categories, the number of evaluators in the 2014-2015 PGR included under M&E is 32.7% of all evaluators. For Philosophy of Science and Math it is 21.9%. This gives us a grand total of almost 55% in these areas. The PDC figures suggest that 36% of philosophers specialize in these areas. (Granted, we don’t know how accurate this data is, or whether all of the categories match up, but Wallace was prepared to use it. I’m following his lead.) That’s almost a 20% spread, with most of the difference being due to the overrepresentation of Science. Value has about 22% of the evaluators, but according to PDC, it has about 30% of the philosophers. It’s worth noting that the category of “Other,” which includes Chinese Philosophy and Race, has 1.3% of all evaluators.* Continental Philosophy has 4.6% and American Pragmatism 0.5%.** When you consider that the pool of specialty evaluators appears to be basically the same as the pool for those doing the overall rankings, you start to see why people in certain areas and traditions may feel that the game is rigged. And keep in mind how many departments evaluators rate. According to Kieran Healy, in 2006, “the median respondent rated 77 departments and almost forty percent of raters assigned scores to 90 or more departments of the 99 in the survey.” (Really, 40% of the evaluators felt they could rate 90 departments! So many Renaissance folks.)

Speaking of the game being rigged, here is something that should make us suspicious: the way the results of the most recent PGRs track the PGR that Leiter did all by himself way back in 1995-1996. I will quote a passage here from a post I wrote on the topic, which I stand by. It’s worth noting that Leiter called his 1995-1996 PGR, “A Ranking of U.S. Graduate Programs in Analytic Philosophy.” Why isn’t this still in the title?

“Leiter’s descriptions of the differences among the top thirty schools in the 1995-1996 PGR do little to distinguish them. (I say “thirty” because of the way that Leiter divided the 1995-1996 rankings. . . . ) Further, the players among the top thirty schools haven’t changed much in almost twenty years. Specifically, 25 of the 30 schools in the top 30 in 1995 are still in the top 30 in 2014, and two of the other five are tied at 31. Perhaps this is because Leiter picked the majority of the top 30 in 1995-1996 correctly. Or perhaps it is because the PGR’s deep methodological flaws and Leiter’s faulty, or at least debatable, assumptions about philosophy have sustained a convergence of results from one edition to the next. Or perhaps the PGR has gotten it mostly right by chance. We don’t know, and that’s the point. But I haven’t heard any commitment from the Advisory Board–or the new co-editor–to engage a team of independent survey experts to address these questions once and for all. Without this, I don’t see how there will be any change in grounds for confidence in Leiter’s methodology.” “Brian Leiter’s Continuing Influence on the Philosophical Gourmet Report: The Past as Future” http://upnight.com/2014/12/30/brian-leiters-continuing-influence-on-the-philosophical-gourmet-report-the-past-as-future/

* This leaves out Feminist Philosophy, which ended up using evaluations from 2011, in addition to 2014, because there weren’t enough evaluators in 2014. In other words, we don’t know how many women actually evaluated in this specialty in 2014.

** NOTE: Evaluators often evaluate in more than one category, complicating the picture about how many different individuals are actually involved. So, for example, the total number of different women involved in the 2014-2015 PGR, excluding Feminist Philosophy (where we can’t tell), was 32, although it looks like more because some people rated multiple times.

David Wallace
Reply to  Mitchell Aboulafia
8 years ago

I think you’re quoting me out of context. Here’s the full context of the quote:

“As it happens, the PGR categories track the demographics moderately well, much better than the APA categories (not that either is intended to track the demographics, and not that the fine details of the PDC-derived demographics should be taken too seriously).”

Mitchell Aboulafia
Reply to  David Wallace
8 years ago

I did not quote you out of context. As a matter of fact, by quoting only the one sentence you are misleading those who haven’t seen your piece. Your one-sentence quotation is preceded by a chart that contains the information listed below. (I can’t reproduce the chart here.) Really, if you thought the data was so useless, why did you bother to create a chart to make it appear that there is in fact an alignment between the PGR and PDC? You can’t have it both ways. Either they should or should not be compared. Which is it? (Mentioning “fine details” doesn’t provide enough wiggle room once you give the chart center stage.) Here is the information in Wallace’s chart:

On that basis (excluding the catch-all category of “modern philosophy”) we get the following:

Category   % of AOS in PDC data   % of PGR categories   % of APA categories
M&E               27%                    21%                 Est. 9%
Value             30%                    18%                 18%
Science            9%                    24%                 Est. 9%
History           27%                    27%                 33%
Other              8%                     9%                  9%

David Wallace
Reply to  Mitchell Aboulafia
8 years ago

Why did I do this? Bruya claims that (a) PGR categories ought to track the demographics of the discipline; (b) actually the demographics of the discipline will be given by the APA categories. But there is no a priori reason to expect either to track the demographics, and as it happens (and somewhat ironically) we have reason to think the PGR categories fit a bit better than the APA categories.

I’ll leave others to judge whether your omitting my comparison with APA, my explicit disclaimer that neither categorisation ought to be expected to have anything to do with demographics anyway, and my explicit reminder about the crudeness of even PDC as demographics, omits relevant context or not.

David Wallace
Reply to  David Wallace
8 years ago

Actually, I notice from your copy of my table that there’s a typo in the bottom right corner of that table. It records the APA estimate of the “other” category as 9% of the profession, whereas it should be 30% (i.e., Bruya’s use of the APA category dramatically overestimates the number of people working in that category, but this got left off the chart, so that the chart understates how bad the fit is between APA and PDC). Apologies for that; fixed on the download.

Just for interest, I also worked out the correlation coefficients between PDC estimates for the demographics of the profession, and the estimates you’d get if you (were unwise enough to) use either (a) the PGR categorisation, or (b) the APA categorisation, as an estimate of the demographics.

PDC-PGR correlation: +0.82
PDC-APA correlation: -0.01

Actually, that’s a surprisingly high PDC-PGR correlation given that the PGR categories aren’t intended to be a close match to demographics and given the crudeness of the PDC metric. By contrast, the PDC and APA numbers are completely uncorrelated.

I wondered if this was unfair, since it uses my rather arbitrary estimate of how to break Bruya’s amalgamated M&E + science category down into parts for the APA categories (I don’t have Bruya’s raw data). When I amalgamate the two categories again, the PDC-PGR correlation improves to +0.90. The PDC-APA correlation is now -0.62 – i.e., nominally a quite good *negative* correlation, so that a large number of subject groupings in the APA categorisation is predictive of a small subject group. But don’t take this very seriously with so few data points. (I think it’s probably driven mostly by the massive overestimation of the “other” category in the APA data relative to the PDC data.)

None of this is of any very deep significance (and I haven’t included it in the note). But perhaps it will serve as a more substantial response to Mitchell Aboulafia’s concerns. If a correlation of +0.8 to +0.9 between PDC and PGR doesn’t count as tracking the PDC results “moderately well”, then hey, tough crowd.
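For anyone who wants to rerun numbers like these, the computation itself is short. A sketch using the coarse area shares from the table quoted earlier (these are not the exact inputs behind the figures above, which relied on a finer breakdown of the amalgamated category, so the output differs somewhat):

```python
import statistics

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length vectors."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Area shares from the chart, with M&E and Science amalgamated;
# order is (M&E+Science, Value, History, Other).
pdc = [36, 30, 27, 8]
pgr = [45, 18, 27, 9]
print(round(pearson(pdc, pgr), 2))  # ~0.83 with these coarse inputs
```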

Mitchell Aboulafia
Reply to  David Wallace
8 years ago

I would be satisfied, even if I qualified as a tough crowd all by my lonesome, with a correlation of this sort, assuming it was correlating anything of consequence. But it seems that we are all agreed that it’s not. There is no point in comparing the percent of PGR categories falling under a certain rubric, with the percent of philosophers working in the rubric, for the obvious reason that the number of categories is a relatively arbitrary figure. We could modify the percentages in the PGR by adding or subtracting categories, but it wouldn’t tell us how many philosophers are actually working in these areas. For example, in terms of the number of categories, M & E is currently 21% of the PGR. But in terms of the number of evaluators, it is 33%. This is a point I wanted to highlight.

I started my comment by saying that the PGR categories did not strike me as tracking the demographics “moderately well,” using the data provided in Wallace’s chart. (The chart still suggests problems in certain areas.) But I then quickly segued into the more important point. Here is the relevant paragraph.

“First, based on Wallace’s own chart, using Philosophy Documentation Center data, 30% of philosophers are in Value areas, while only 18% of the PGR categories are in this area. On the other hand, 9% of the PDC philosophers are in Science, while 24% of the PGR categories are in Science. This does not strike me as tracking the demographics “moderately well.” But notice that Wallace is using the number of philosophers in various specialties for the PDC data, but for the PGR he is using the percent of categories in the PGR. If we look at the number of evaluators in certain areas, and not only the categories, it’s clear that there are significant discrepancies between the PGR and PDC data.”

I then spent the rest of the comment on the latter issue, as well as other related issues, for example, the implications of having so many evaluators in certain specializations for the reliability of the overall rankings, given the overlap in the pools of evaluators. Now Wallace can say that this is beside the point. But it isn’t. He raised the issue of the number of philosophers in various areas by citing the PDC figures. And he did a bit of the apples and oranges thing by comparing percentages of categories to percentages of philosophers. I was saying, OK, if we are going to compare, let’s at least compare numbers of philosophers to numbers of philosophers, in this case the percent of evaluators in different areas with the percent of philosophers that the PDC figures show.

But there is an even more important reason why I don’t think Wallace is in a position to dismiss the rest of my comment. He and Leiter have been up in arms about possible methodological problems in Bruya’s paper. (Oh, how terrible! A journal published it.) Leiter is ecstatic, acting as if somehow countering Bruya makes everything OK in PGRland. He can’t stop referring to Wallace and quoting his paper. But what about all of the years that the PGR has dominated the profession’s self-understanding, when it is in fact a methodological nightmare? That would seem to be a much larger problem.

But you say, that’s just your opinion, Aboulafia. We don’t see it that way. Ok, here’s my response.

I will bet $1,000 that an independent panel of survey experts and statisticians, drawn from outside of philosophy, would find that the PGR is not only unscientific but also not a reliable indicator of the quality of philosophy departments. To my knowledge, Leiter has never been willing to submit the PGR to such scrutiny. Are you, David? It’s time.

David Wallace
Reply to  David Wallace
8 years ago

@Mitchell Aboulafia: the topic of this thread is specifically Brian Bruya’s paper. Criticising that paper doesn’t somehow obligate me to defend the PGR against any other objection that anyone else makes.

David Wallace
Reply to  David Wallace
8 years ago

With apologies for double-posting, I missed this fairly extraordinary bit in Mitchell Aboulafia’s reply:

“he [Wallace] did a bit of the apples and oranges thing by comparing percentages of categories to percentages of philosophers.”

The reason I did that is because *that is Bruya’s methodology* and I was commenting on Bruya’s paper. Turning my *critique* of that methodology into a criticism of *me* is bizarre. Yes, it’s an apples-to-oranges comparison! That’s my main point about it! It’s one of the “possible methodological criticisms”, as Prof. Aboulafia puts it, of Bruya’s paper – though I notice that to Prof. Aboulafia it’s only a *possible* criticism in Bruya’s paper itself.

It is getting really difficult for me to believe that Prof. Aboulafia is conducting this conversation in good faith.

Mitchell Aboulafia
Reply to  David Wallace
8 years ago

I guess this means that you won’t be taking me up on my bet regarding how an outside panel of survey experts and statisticians would evaluate the PGR. (Perhaps it’s better to keep up appearances about the PGR than to submit it to expert scrutiny.)

To your first point: This is not a thread responding directly to Bruya’s paper. This is a thread attached to Bruya’s response to criticisms of his paper, in which he covers considerable territory, as do the commentators, e.g., the issue of placement has come up, as well as an alternative ranking system. You don’t get to set the terms of the discussion because your interest happens to center on Bruya’s original paper, specifically, criticizing it.

Second: You are not obligated to defend the PGR against “any other objection that anyone else makes.” You are also not obligated to defend the PGR against Bruya. But it seems important to you. Given your enthusiastic support for the PGR, in this and other venues, it seems reasonable to ask you to defend the PGR against objections, not by anyone in this case, but by me, in the context of this discussion.

Third: It’s inappropriate to question the integrity of an interlocutor, in this case wondering if I am “conducting this conversation in good faith,” as opposed to focusing on what he or she is saying. It’s a way of dismissing him or her. Readers should decide if the interlocutors are operating in good faith.

Lastly, we are going to have to agree to disagree about whether your caveats or concerns about the data that we have been discussing trump your use of them. I can see that you are frustrated. The section of your response to Bruya that we have been discussing was ill-conceived, and I keep raising issues about it. I will try to explain my position one more time. (Don’t worry, I won’t suggest that you are failing to operate in good faith if you reject my interpretation.)

Whatever your criticisms are of Bruya, you produced a chart with commentary, in which you were willing to use rough data to try to show that even on the demographics question the PGR is superior to the APA, that “the PGR categories track the demographics moderately well, much better than the APA categories.” Let’s not forget your set up. After criticizing Bruya for appealing to APA specialties, you say:

“The APA division strikes me as so unreliable a guide to the demographics of the profession (does Bruya really think as many people do philosophy of biology as do Ethics? That only one philosopher in 60 has a Metaphysics primary AOS?) that if it was the best we could do, we’d just have to accept that we can’t know the demographics of the profession. But in fact, we can do better.”

Clearly you thought that however rough the data was, it was good enough to make a comparison between the APA, PDC, and PGR, and to conclude that the PGR comes out ahead of the APA. (Although they never were intended to do X, the PGR still does X better than the APA.) Here again are your own words after the chart.

“As it happens, the PGR categories track the demographics moderately well, much better than the APA categories (not that either is intended to track the demographics, and not that the fine details of the PDC-derived demographics should be taken too seriously).”

The fact that you ultimately believe that this comparison isn’t of much value didn’t stop you from making it, or from creating a chart to assist. I have to conclude that, in addition to criticizing Bruya, which could easily have been done from your perspective without all of the comparison baggage, there must have been a reason for the comparison; otherwise you could simply have criticized his account. I suspect from your language that you were having some fun at the APA’s expense: oh, look, the PGR is even better at this. Your relatively long comment in this thread, showing that there is a significant correlation between the PGR’s and the PDC’s figures and that “by contrast, the PDC and APA numbers are completely uncorrelated,” lends weight to this interpretation.
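
For concreteness, the correlation check being described amounts to something like the following sketch; the per-specialty counts here are invented placeholders, not the actual APA, PDC, or PGR figures:

```python
# A minimal sketch of a correlation comparison of this kind, using
# invented per-specialty counts (NOT the real APA/PDC/PGR figures).
from scipy.stats import pearsonr

# Hypothetical counts of philosophers per specialty under each source.
pgr = [40, 25, 22, 30, 35]  # PGR specialty-category counts (invented)
pdc = [38, 27, 20, 28, 40]  # PDC survey counts (invented)
apa = [12, 3, 15, 30, 10]   # APA division counts (invented)

r_pgr_pdc, p_pgr_pdc = pearsonr(pgr, pdc)
r_apa_pdc, p_apa_pdc = pearsonr(apa, pdc)

print(f"PGR vs PDC: r = {r_pgr_pdc:+.2f} (p = {p_pgr_pdc:.3f})")
print(f"APA vs PDC: r = {r_apa_pdc:+.2f} (p = {p_apa_pdc:.3f})")
```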

But whether or not this interpretation of your motives is correct, the section of your paper stands. Readers can decide whether it is something of a muddle or a paragon of critique.

David Wallace
Reply to  Mitchell Aboulafia
8 years ago

@Mitchell Aboulafia:

“Readers should decide if the interlocutors are operating in good faith.”

Quite so. Nothing to add beyond that.

Mitchell Aboulafia
Reply to  David Wallace
8 years ago

And readers will also have to decide whether your responses have been more convenient than substantial.

anon
8 years ago

I find David Wallace’s criticisms to be fairly damning. Perhaps he could be given his own multi-part guest post on the front page?

Chris Surprenant
8 years ago

Why are we still talking about all of this? Didn’t everyone promise to stop complaining about the PGR once Brian stepped down?

Jonathan Ichikawa
Reply to  Chris Surprenant
8 years ago

No?

A bunch of us said we’d refuse to contribute volunteer work to the PGR until Leiter stepped down.

Chris Surprenant
Reply to  Jonathan Ichikawa
8 years ago

So what? The vast majority of the people who refused to contribute volunteer work to the PGR never would have been asked to contribute in the first place.
I’m not going to get into an internet fight about this with someone who I know dislikes Prof. Leiter (perhaps rightly so; I don’t know and don’t really care). If folks really wanted to “undermine the PGR” and “reduce its influence,” you’d just create an alternative ranking system, with an alternative methodology, that gets endorsed by the appropriate people (volume, individual prestige, folks associated with “big name” schools, whatever), and then promote that alternative system (e.g., on a high-traffic website).
Prof. Jennings tried to do this, but rightly gave up the project (or put it on hold) given her status as an assistant professor and her need to focus on work that counts for tenure. But if undermining the PGR is such an important issue, why hasn’t someone with a bit more stature and institutional protection taken her data, added to it, and then generated a ranking system based on job placement? As with the PGR, the value isn’t in the raw data but in how it is compiled. How do you value 2/2 R1 appointments compared to 3/3 SLAC or 4/4 positions? What about programs that place people into lucrative but non-academic fields? If a project like this were done well, people would very quickly forget about the PGR; rightly or wrongly, far more people value job placement over a program’s “eliteness.” (And I say this as someone who thinks that the PGR was and still is an asset to academic philosophy, and I am appreciative of all of the work that Prof. Leiter did to put it together.)
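
To make the compilation question concrete, here is one possible shape such a weighting could take; every weight, category, and placement record below is invented for illustration, not a proposal for the actual values:

```python
# One hypothetical weighting scheme for placement data. All weights
# and records are invented; a real ranking would have to argue for them.
PLACEMENT_WEIGHTS = {
    "R1 2/2": 1.0,        # research university, 2/2 teaching load
    "SLAC 3/3": 0.8,      # liberal arts college, 3/3 load
    "4/4": 0.5,           # teaching-intensive position
    "non-academic": 0.6,  # lucrative placement outside academia
    "none": 0.0,          # no placement
}

def placement_score(placements):
    """Average weighted placement value for one program's graduates."""
    if not placements:
        return 0.0
    return sum(PLACEMENT_WEIGHTS[p] for p in placements) / len(placements)

# Hypothetical graduating cohorts for two programs.
program_a = ["R1 2/2", "4/4", "none", "SLAC 3/3"]
program_b = ["4/4", "4/4", "non-academic", "R1 2/2"]

print(f"Program A: {placement_score(program_a):.2f}")
print(f"Program B: {placement_score(program_b):.2f}")
```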

Carolyn Dicey Jennings
Reply to  Chris Surprenant
8 years ago

Just FYI, APDA is still happening. We are currently checking the data for each program, which is time-consuming. We will aim to release program-specific placement rates in the next couple of months or so.

Dale Miller
8 years ago

I’m not quite following appearance/reality/conclusion 3. It appears that Leiter made a claim about how many philosophers work in particular specialty areas and that the response is a claim about how many sub-specialties different specialty areas contain. Given that some sub-specialties will have vast numbers of practitioners and some very few, the response doesn’t seem to contradict the “appearance.” The 95% number may have been pulled out of the air, and it may be dead wrong, but the fact that there are a lot of sub-specialties that fall outside of Leiter’s four divisions seems like at most fairly weak evidence that there are a lot of philosophers working outside of those divisions. Am I missing something?

Nameless Grad
8 years ago

It seems to me really unfortunate that the public discussion of this article has been swamped by its latter half, on the over-representation of M&E (I agree that, at the very least, lumping Phil Sci in with M&E seems like a very strange and poorly informed choice). However, to my ear the really damning bit is this:

“Notice that the small group at the top right [of figure 1 in the article], bounded by Yale, Rutgers, Princeton, and Cornell, accounts for approximately half of all votes in the PGR. These eight programs in a tight geographical area are in effect driving the ratings of the PGR. Thus, the PGR is not a survey of philosophers generally about the quality of programs generally but a survey of a small, select group of programs about each other and about what they think of other Ph.D. programs.” (p. 663)

This seems like a clear and obvious confound—one that will surely function to reinforce existing power structures in the profession—that Leiter hardly addresses. I have yet to see any reason why the sampling methods shouldn’t be completely overturned (besides Leiter’s reference to wanting “a kind of ‘insider’s’ knowledge,” which perhaps highlights the problem more than anything).
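
To see the worry in miniature, here is a toy version of the vote-concentration calculation; all counts are invented, and only four of the eight programs in the quoted region are named:

```python
# Toy computation of vote concentration: what share of all evaluator
# votes comes from a small cluster of programs? Counts are invented;
# the real distribution is in figure 1 of Bruya's article.
votes_by_program = {
    "Yale": 60,
    "Rutgers": 70,
    "Princeton": 65,
    "Cornell": 40,
    # ... every other program, each contributing far fewer votes ...
    "all other programs combined": 240,
}

cluster = {"Yale", "Rutgers", "Princeton", "Cornell"}
cluster_votes = sum(v for p, v in votes_by_program.items() if p in cluster)
total_votes = sum(votes_by_program.values())

# With these invented counts the cluster casts roughly half the votes,
# which is the shape of the pattern the quoted passage describes.
print(f"Cluster share of all votes: {cluster_votes / total_votes:.0%}")
```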

Continuing to gripe about Bruya’s discussion of the way in which specialties should be represented seems like a subterfuge intended to avoid addressing the real problem: the sampling method reliably functions to allow those with privilege to consolidate that privilege in the form of an “authoritative” ranking.

David Wallace
Reply to  Nameless Grad
8 years ago

I plead not guilty to subterfuge, but it’s true that my analysis didn’t pay any attention to the part of Bruya’s paper that you mention. That’s because I’d thought, on a surface reading, that the methodology of that section was sound given the qualitative assumptions Bruya made, and I was engaging specifically with quantitative methodological errors.

That turns out to have been a mistake. On closer examination that section of Bruya’s paper is also seriously flawed methodologically. I’ve amended my note to discuss this and uploaded the new copy to the original link at https://dl.dropboxusercontent.com/u/8561203/bruya%20critique.pdf. (The old version is now at https://dl.dropboxusercontent.com/u/8561203/bruya%20critique%20v1.pdf .) The summary of the added part is:

• This section of Bruya’s paper is opaque about its methodology, making it easy for a reader to conflate two readings of an evaluator being “from” an institution: current affiliation and PhD institution.
• The quantitative statistical results Bruya points to rest mostly on the PhD-institution reading.
• The arguments Bruya makes would go through only on the current-affiliation reading.
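
A toy tally may help show why the two readings come apart; all of the evaluator records below are invented:

```python
# Toy illustration of the two readings of an evaluator being "from"
# an institution. All records are invented.
from collections import Counter

# (evaluator, current affiliation, PhD-granting institution)
evaluators = [
    ("A", "Rutgers", "Princeton"),
    ("B", "Yale", "Princeton"),
    ("C", "Princeton", "Cornell"),
    ("D", "Eastern Michigan", "Princeton"),
]

by_affiliation = Counter(cur for _, cur, _ in evaluators)
by_phd = Counter(phd for _, _, phd in evaluators)

print("Current-affiliation reading:", dict(by_affiliation))
print("PhD-institution reading:    ", dict(by_phd))
# Princeton supplies one evaluator on the first reading but three on
# the second, so statistics computed under one reading need not
# support arguments framed in terms of the other.
```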

WP
Reply to  David Wallace
8 years ago

Oh, this is important. I was really struck by this point, since I read it as current institution and thought that there shouldn’t be such a tight relationship between program quality and the number of faculty qualified to be evaluators, since most Ph.D. programs have active researchers. But, as you say, we would expect a close relationship between program quality and the number of students who go on to be active researchers. Thanks for your work on this.

JDRox
8 years ago

“it seems true that there are significant norms shared by “M&E” (metaphysics, epistemology, language, mind, philosophical logic, action, and religion) and “Science” (decision, rational choice, and game theory; mathematical logic; general phil science; phil of physics; phil of biology; phil of social science; phil of cog sci; phil of math) that aren’t shared by value or history.”

This claim is highly suspect, and it’s a big part of the problem with Bruya’s piece. Metaphysicians and epistemologists get along pretty well, methodologically speaking. But then, I think metaphysicians and ethicists get along pretty well, methodologically speaking. But there tends to be huge distrust of metaphysicians from philosophers of science, and many philosophers of science are fairly serious historians of science. If anything, I’d link traditional (general) philosophy of science with history rather than with metaphysics. And the specialized philosophies of science are their own thing–specifically, their work tends to be much more empirically informed than that of metaphysicians, epistemologists, or ethicists. If I had to divide areas of philosophy methodologically, this would be my first approximation:

Hard core a priori: metaphysics, epistemology, most ethics, some phil mind, some phil language.
Historical: history, some phil science, some ethics, some political philosophy, some continental.
Empirical: philosophy of biology/physics/etc, some phil mind, some phil language.
Formal: logic, phil logic, phil math, formal epistemology.
Continental: phenomenology? Sorry, I don’t know; I’m sure there are subspecies here too, other than the historical stuff mentioned above.

If that is anything close to correct, Bruya’s grouping isn’t one of many good ways of grouping things, it really is sort of bizarre.

anon
Reply to  JDRox
8 years ago

I agree that Bruya’s grouping “really is sort of bizarre.” So, perhaps I agree with what you’re interested in, and perhaps I even agree with the conditional at the end and its antecedent. But I do have some doubts about how ethics is getting sorted (though I think my own work in ethics is described well by this). I’m pretty sure that at least “some ethics” – and I mean a significant amount of it – falls into every grouping other than formal (and some ethics falls there as well, particularly metaethical work in moral semantics). I suspect that those who feel that their work is in various ways marginalized might point to ethics as a major subdiscipline that cuts across this methodological approach (and related ones) to categorizing fields, and further suggest that failure to appreciate the diversity of ethics is a sign of the problem they object to.

Jamie Dreier
Reply to  anon
8 years ago

I agree, anon, except that I think there might be more ethics (and not just metaethics) in the ‘formal’ category than you suspect. John Harsanyi, Hilary Greaves, Christian List, and John Broome, among many others, have contributed a lot to ethics in their formal work.
But I mainly agree with you, and also with JDRox for that matter.

WP
Reply to  JDRox
8 years ago

When I wrote that, I was thinking we only had information about the broad areas evaluators were in (M&E, Science, Value, History, Other), in which case I do think it makes some sense to merge them, since the M&E/Science split seemed especially rough to me. Now that I’ve realized that the evaluators for each subarea are published, I agree that there seem to be much better options.

Anonymous
8 years ago

I don’t have much experience with the rankings, but I was hoping someone could clear up a question I have. I don’t get what they are ranking… is it the quality of the faculty as researchers, or of the program as an educational program, that is, as a sign of how much learning happens at these places? The critique of U.S. News & World Report’s college rankings shows that they overvalue inputs (bright students, lots of money, smart faculty), undervalue the educational experience of those who attend, and likely underrepresent the differences in the learning that happens at different places. These seem to me quite different… I suspect we all know brilliant researchers who are not very good teachers, and great teachers who are not first-rate researchers.