Appearance and Reality, Take 2 (guest post by Brian Bruya)

Yesterday’s post, “A Detailed Critique of the Philosophical Gourmet Report,” contained excerpts from “Appearance and Reality in The Philosophical Gourmet Report: Why the Discrepancy Matters to the Profession of Philosophy,” an article in Metaphilosophy by Brian Bruya (Eastern Michigan) in which various criticisms of the PGR were summarized. As noted in an update to the post today, Brian Leiter (Chicago), creator and former editor of the PGR, posted a response to Professor Bruya at his blog. Professor Bruya has authored a patient and measured response to Professor Leiter’s post, which appears below as a guest post*.

Appearance and Reality in Brian Leiter’s Attempted Refutation of my Critique
by Brian Bruya

Before I begin, I’d like to make it clear where my critique of the PGR is coming from. Leiter implies, as I note below, that I know very little about Analytic philosophy, and I fear that my defense of Continental philosophy may cause some readers to think that I have some kind of axe to grind against Analytic philosophy in favor of Continental philosophy. I’d like to make it clear that this is an internal critique. I do philosophy in the Analytic style and prefer it (but not to the extent that I think other ways of doing philosophy should be excluded). I was trained as an undergraduate at the University of Washington, which was then (and I think still is today) thoroughly Analytic. A significant difference between then and now at UW is that I happened to attend during a period when they had people also doing comparative work. Karl Potter (Ph.D. Harvard) did Indian philosophy, Arindam Chakrabarti (Ph.D. Oxford under Michael Dummett) mixed Indian philosophy into his straight Analytic perspective, Vrinda Dalmiya (Ph.D. Brown under Ernest Sosa) did something similar, and Chün-chieh Huang (Ph.D. University of Washington) did Chinese philosophy (actually, I think he falls more in the discipline of intellectual history, but his courses were in the philosophy program and he taught two courses as such).

There are many critics of Analytic philosophy from within Analytic philosophy, so my position as a critic is not unique. My perspective, however, may be unique in that I think that Analytic philosophy, in order to be as vibrant, thorough, and socially just as possible, needs to widen its perspective into non-Western philosophy (in addition to other methodological and textual resources). I make arguments supporting this position in my article “The Tacit Rejection of Multiculturalism in American Philosophy Ph.D. Programs” (Dao, 14 (3)) and in my edited volume The Philosophical Challenge from China (MIT Press, 2015). The reason I wrote the critique of the PGR is that I see a group of very intelligent and apparently well-meaning people involved in an influential publication that ultimately locks out this wider perspective from the field.

I give long, well-supported arguments demonstrating that as it stands the PGR is exclusive and so damaging to the field. If anyone thinks my facts are wrong or an argument is flawed, please point out the error and state how it affects the overall conclusion of unwarranted exclusivity in the PGR. Leiter, as we’ll see, does not do this.

Or perhaps you think the PGR should remain exclusive. I’ve seen people purporting to be professors comment that they trust the opinions of the PGR evaluators over their own. That’s fine, but Clarence Thomas’ opposition to affirmative action is not a reason that all African Americans should also be opposed to it. If you think that the field of philosophy should exclude traditions representing the cultures of nearly half of all Americans and the vast majority of people around the world, make your argument—but do it in an informed way, by engaging evidence and arguments that are out there already.

I undertook this project because I have a deep love for the field of philosophy and a profound respect for its ability to provide important insights into all aspects of the human condition. It seems counterproductive to systematically exclude insights from the traditions of 5/6 of humanity.

I also should say that I don’t have a bone to pick with the PGR, or with Brian Leiter, or with anyone associated with the PGR. This project did not begin with the question, “How can I bring down the PGR?” Rather, it began with the question, “Why is multiculturalism losing ground in philosophy when it is gaining ground in the rest of the academy?” The question is perplexing on many levels, not the least of which is that from the perspective of contributing to the mission of the wider university in terms of building diversity, understanding globalization, and creating interdisciplinary relationships, it only makes sense that philosophy programs would be considering bringing in more non-Western philosophy.

I began by examining a decade’s worth of data from Jobs for Philosophers and interviewing programs that had hired in non-Western philosophy. That showed me that multiculturalism is not in fact losing ground in philosophy programs: advertising and hiring numbers in non-Western philosophy are robust, and programs are recognizing its value to the program and to the university. But it also showed that the place where multiculturalism is losing ground is in philosophy Ph.D. programs. So I inquired into hiring mechanisms in philosophy Ph.D. programs, interviewing department chairs in a variety of different departments, including ones that have people in non-Western philosophy, ones that used to have people in non-Western philosophy, and ones that don’t and haven’t. I was able to uncover no structural impediments to hiring in non-Western philosophy. In fact, I found some warmth toward the idea and some interesting recollections about scholars who had worked with specialists in non-Western philosophy. Then I came across some claims that some departments appeal, officially or not, to the PGR when making hires and even tout their rank to administrations when it comes to seeking funding. So I turned my attention to the PGR. I have to admit that I was as surprised as anyone by what I found. I had trusted that these very intelligent people, many of whom I look up to and admire, had been using a sound methodology and that they had the best interests of the profession at heart. Because I still think the latter is true, I made the attempt at exposing the untruth of the former in the hopes of reform.


Now, if you were to write up a 34-page critique of an influential publication, creating a detailed, multi-layered but very clearly laid out argument, with 28 substantial footnotes, 38 supporting citations, and two appendices designed to further elaborate subtle points, the most devastating attack on your piece would be for someone to come along and show it to be shoddy work. This is the approach that Brian Leiter attempts in critiquing my article. The problem is that it doesn’t actually hit home in any of its attacks.

In what follows, I relate the appearance of shoddiness that Leiter creates, then provide the reality from my article, followed by a summary conclusion for each. The question to ask for each of Leiter’s claims, just as you would advise your students to do in an introductory critical thinking course, is whether his claims are made in well-formed, non-fallacious arguments and whether they provide evidence in the form of empirical support or citations from reliable sources. Let’s see how Professor Leiter does.


Appearance 1: “The article ignores the participation of the Advisory Board in producing the report for the last 15 years.” [This is a quote from Leiter’s attempted refutation. Each section below will follow this pattern.]

Reality 1: “Brian Leiter… handpicked his original slate of evaluators and has since asked them to recommend more.” (p. 657) [This and what immediately follows are quotes from my original article to demonstrate the falsity of Leiter’s claim above. Each section below will follow this pattern as far as possible.]

“Leiter’s claim [in defending the PGR on his blog] is that one self-referred person is qualified to select a slate of referees and that a portion of that slate (the “Advisory Board”) then recommends other referees.” (p. 665)

Conclusion 1: Appearance of shoddiness is factually incorrect. I do not ignore the participation of the Advisory Board. Leiter seems to take exception to the fact that I target most of my arguments at him rather than at the PGR as a publication. This is because, as I state in the article, the “Methods and Criteria” section of the PGR is extremely thin, not even mentioning, for example, the snowball sampling method. For this information, and other defenses of the PGR, one must refer to Leiter’s personal blog, which is what I did (and am now doing again).


Appearance 2: “Bruya asserts (674), falsely, that the ‘Metaphysics & Epistemology’ (M&E) category in the PGR includes 15 sub-specialties, more than ‘Value Theory’ and ‘History of Philosophy’ together. In fact, the M&E category has only 7 specialties listed, compared to 6 for Value Theory and 9 for History of Philosophy.”

Reality 2: “An area in the PGR is a general category under which various specialties are grouped. In his “Description of the Report” (2011b), Leiter allows seven distinct areas for evaluators: ‘Metaphysics and Epistemology,’ ‘Science,’ ‘History,’ ‘Value,’ ‘Logic,’ ‘Chinese Philosophy,’ and ‘Other.’ In his ‘Breakdown of Programs by Specialties,’ Leiter (2011e) lists five distinct areas for programs: ‘Metaphysics and Epistemology,’ ‘Philosophy of the Sciences and Mathematics,’ ‘Theory of Value,’ ‘History of Philosophy’ and ‘Other.’ For the purpose of statistical analysis of evaluator area and programs, I have standardized the areas, merging all into the following four PGR areas: metaphysics and epistemology (now including specialties in philosophy of science, mathematics, and logic, which, if not M&E specifically, are methodologically and topically closely allied), value, history, and other (including Chinese philosophy). One could argue that specialties in philosophy of science, mathematics, and logic should not fall under M&E. There is no reason to think, however, that logic, for example, should necessarily be grouped with general philosophy of science into a separate area. It is uncontroversial that many of the specialties of M&E, philosophy of science, philosophy of mathematics, and logic are core specialties of Analytic philosophy. Breaking them out into several more separate groups (as, for example, Kieran Healy [2012b] does) would not alter the conclusions of the arguments made in this critique.” (pp. 685-686)

Conclusion 2: The appearance of shoddiness is factually incorrect and distorts my actual argument. To restate: if one wants to provide a statistical analysis of the methods and methodology of the PGR, one first has to confront the fact that it uses two distinct regimes in categorizing specialties. In order to evaluate these two regimes in a unified way, one has to make a decision about how to unify them. One can create more areas or fewer. Healy created more, which is a legitimate move. I created fewer. But Healy’s conclusions and mine are essentially the same—that however you carve it up, the core fields of Analytic philosophy receive a positive bias in the PGR. As I quote Healy in the article, “MIT and ANU had the narrowest range, relatively speaking, but their strength was concentrated in the areas that are strongly associated with overall reputation—in particular, Metaphysics, Epistemology, Language, and Philosophy of Mind.”


Appearance 3: “These four divisions [Metaphysics & Epistemology; Philosophy of Science, Mathematics, and Logic; Value; and History] correspond quite well to the areas represented by about 95% of philosophers in the Anglophone world.”

Reality 3: “It is worth comparing the PGR’s list of philosophical specialties to those put out in a survey from the American Philosophical Association (2013), the largest society of philosophers in the United States. As I’ve already remarked, the PGR has the following number of specialties in each area: M&E—15, value—6, history—9, other—3. The survey by the APA was sent out by the executive director (Amy Ferrer) following the Eastern Division annual meeting (the largest annual meeting of philosophers in the United States) in order to evaluate the success of the meeting and how welcoming the climate was for underrepresented groups. In the demographic section of the survey, sixty philosophical specialties are listed. Compare this to the PGR’s thirty-three and you begin to see indications of exclusivity in the PGR. Using the PGR’s own way of grouping specialties into areas, and standardized as described in Appendix 2, the APA’s grouping would look like this: M&E—11, value—11, history—20, other—18. The differences are dramatic. No longer is M&E the dominant area; instead, history and other dominate, while M&E and value are equally sized minorities.” (674-675, n. 21)

Conclusion 3: Appearance of shoddiness relies on a specific factual claim (the 95% claim) that is unsupported and ignores evidence in the article to the contrary. The unsupported claim diverts attention from the thrust of the argument—namely, that the PGR is excluding a large portion of the philosophical community.


Appearance 4: “Buried in an appendix at the end, Bruya finally acknowledges conflating the divisions, with the explanation that, ‘It is uncontroversial that many of the specialties of M&E, philosophy of mathematics, and logic are core specialties of Analytic philosophy’ (686). This is, of course, revealing about Bruya’s biases, and his lack of understanding of ‘Analytic’ philosophy (he might talk to some philosophers of physics and biology to find out what they think of a lot of, say, contemporary metaphysics).”

Reality 4: [No critique of any part of the argument is offered by Leiter, so no quotation can be provided in response.]

Conclusion 4: Appearance of shoddiness is unsupported and insinuates that I lack a level of knowledge that is so common to the reader that Leiter need not even explain it. This is a poorly formed argument bordering on an ad hominem attack. We all have biases. This specious criticism from Leiter diverts attention from the fact that the PGR systematically excludes the opinions of philosophers whose biases differ from the biases built into the PGR.


Appearance 5: “Snowball or chain-sampling is a perfectly appropriate method of sampling when what you want is a kind of ‘insider’s’ knowledge.”

Reality 5: “Drağan and Isaic-Maniu 2012 (provides extensive citations of studies that have used snowball sampling); Atkinson and Flint 2001 (explains that snowball sampling is used primarily for qualitative research [e.g., interviews] and for research on the sample population itself); Biernacki and Waldorf 1981 (provides a case study of snowball sampling and the methodological issues encountered); Erickson 1979 (discusses the benefits and limits of snowball sampling; distinguishes it from other chain sampling methods); Coleman 1958 (examines snowball sampling and networks). Reading these articles, one realizes that the PGR actually does not use chain-referral [snowball] sampling in the standard way. To imagine the use of chain-referral sampling in the standard way, you have to imagine a population hidden to you—you want to survey the members of the population, but you can’t find them. First, you find one or two, then you ask them to identify more, then you ask the new ones to identify more, and so on. Since philosophers are easy to find, the only way Leiter did anything like snowball sampling is if he looked for, as he says, ‘research-active’ philosophers (see my Appendix 1). He would have identified a few on his own (using what selection criteria, we can only guess), and then he would have asked those ‘research-active faculty’ to identify others (again, by unstated criteria), and so on. But are research-active philosophers really so hard to find? Of course not—they are, by definition, published. One could conclude that Leiter’s snowball is not about finding a hidden population but about excluding a large portion of an otherwise prominent population, as we shall see. For an excellent overview of purposive sampling as a technique, including numerous examples from prior literature, see Tongco 2007.” (p. 661, nn. 6-7)

Conclusion 5: Appearance of shoddiness is vague and off-target. What does Leiter mean by “insider” and why resort to some insiders while excluding others? I explain the scope of the appropriateness of the use of snowball sampling and explain in detailed arguments why it is inappropriate for attempting to make the general conclusions that are made in the PGR. Leiter ignores these arguments (except for the one just below).
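To make the contrast with standard chain-referral sampling concrete, here is a minimal sketch; the referral network, names, and wave count are all invented for illustration. It shows the structural feature at issue above: anyone never named by someone already inside the chain never enters the sample.

```python
def snowball_sample(referrals, seeds, waves=3):
    """Chain-referral ("snowball") sampling: start from a few seed
    members and repeatedly add whomever current members refer."""
    sample = set(seeds)
    frontier = list(seeds)
    for _ in range(waves):
        next_frontier = []
        for person in frontier:
            for referred in referrals.get(person, []):
                if referred not in sample:
                    sample.add(referred)
                    next_frontier.append(referred)
        frontier = next_frontier
    return sample

# Invented referral network: each person names acquaintances.
referrals = {
    "A": ["B", "C"],
    "B": ["D"],
    "C": ["D", "E"],
    "E": ["F"],
    "G": ["H"],  # G's cluster is never named by anyone reachable from "A"
}

sample = snowball_sample(referrals, seeds=["A"])
print(sorted(sample))  # ['A', 'B', 'C', 'D', 'E', 'F'] -- G and H never appear
```

A seed’s whole referral neighborhood gets in, but clusters disconnected from the seeds (here G and H) are excluded by construction—an acceptable cost when the population is hidden, and a built-in exclusion mechanism when, as with published philosophers, it is not.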


Appearance 6: “Bruya’s argument against it is silly: ‘The reason [snowball sampling] is used is as an expedient way to access a hidden population, such as social deviants (drug users, pimps, and the like), populations with very rare characteristics (such as people with rare diseases, interests, or associations), or subsets of populations associated in idiosyncratic ways (such as networks of friendships)….Philosophers are neither social deviants nor difficult to find, as every philosophy program’s faculty list is public information.’ (660-661)”

Reality 6: [No argument is offered by Leiter, so no evidence from the article can be provided in response, except to restate the article itself.]

Conclusion 6: Appearance of shoddiness is unsupported. Again, there is an insinuation that Leiter’s claim is so obvious that it need not be argued for. Leiter offers no evidence to support his claim that a particular argument is silly, again diverting attention away from the argument and away from the counter-evidence provided in the article itself.


Appearance 7: “Later in the article, however, Bruya acknowledges that, ‘We want experts to provide their opinions when expertise is required for a sound assessment, and we would not insist on getting a representative sample of all such experts. We see this all the time in academia. We have PhD committees, tenure review committees, grant committees, and so on, which are formed for the purpose of providing expert evaluation. And for none of these do we insist on getting a representative sample. (667)’ Given this admission, one might wonder what all the fuss is about?”

Reality 7: “So, when the PGR draws up a slate of more than five hundred specialists, some three hundred of whom respond, why should we not consider it another example of an academic expert committee—a large and, seemingly, diverse one at that? We have already covered part of the reason—namely, the introduction of bias into the selection process. But why is risk of bias unacceptable in the PGR and not on committees that are so much smaller (and thus even more subject to bias)? First, we have to distinguish between the two different kinds of committee just mentioned. One was the medical-expert kind of committee that is evaluating empirical evidence to offer recommendations according to stipulated criteria. That, of course, is not happening in the case of the PGR. There are no stipulated criteria, so one cannot regard a committee, however large, as offering any sort of valid empirical evaluation. Thus, the PGR expert committee is not comparable to a medical-expert committee. The second kind of expert committee is the referee kind, which involves judging the academic merit of a scholar or a scholarly piece of work. We all know that such judgments are naturally biased and that a submission that is accepted by one journal or press could have been rejected by another of equal standing. The simple fact is that in the world of academic publishing there is no better alternative to this type of committee. One can’t send every article or book manuscript on epistemology to all, or even to a statistically significant random sample of all, working epistemologists. The logistics and the workload would be impossible. We rely, instead, on ad hoc arrangements as a necessary expedient. If we accept bias in academic committees because there is no better alternative, why not do the same for the PGR? The reason is that the logistics are entirely different. The PGR survey is undertaken only once every few years, and the online survey already exists. 
There is no practical impediment to moving to a valid sampling procedure. Perhaps that point came too quickly. The reason that the PGR should not use an ad hoc committee of expert evaluators, even though such committees are often used in academia, is that it does not need to. It could just as easily use a valid sampling procedure. Using a nonrepresentative sample and then generalizing from it is misleading. As quoted above, the PGR says: ‘This report ranks graduate programs primarily on the basis of the quality of faculty. In October 2011, we conducted an on-line survey of approximately 500 philosophers throughout the English-speaking world’ (Leiter 2011b). There is no reason for anyone reading this claim to suspect that the sample is not representative of the entire population of working philosophers or therefore to suspect that the conclusions drawn from the sample cannot be generalized across the entire population of philosophers. And yet such a supposition would be flatly wrong. One must again attend to the fact that the sample used by the PGR is as notable for those that it excludes as for those that it includes. The simplest thing for the PGR to do to improve its validity would be to open up the evaluation pool to anyone listed on a philosophy program’s faculty webpage. Given the electronic resources that Leiter has already mastered, getting the word out would be neither difficult nor time-consuming.” (667-668)

Conclusion 7: Appearance of shoddiness falls flat. The objection ignores the argument that answers the very question it asks, again diverting attention away from the point of the argument itself. The fuss is about the systematic exclusion of 99.5% of all philosophers from participation in the PGR (see article for substantiation of this statistic).


Appearance 8: “In an unrelated effort to show ‘bias,’ Bruya asserts that there is a category of philosophers who are ‘Methodological Continentalists’ (665-666) which would encompass programs like DePaul, Duquesne, and Emory (666). SPEP folks have long maintained, of course, that they are insulated from normal standards of philosophical scholarship because there’s something putatively distinct about the work they do that makes such standards irrelevant. Bruya is entitled to endorse that myth. But what is, again, pure fabrication is to assert that ‘Leiter refers to this brand of philosophy in his own published work,’ noting the introduction to The Oxford Handbook of Continental Philosophy I wrote with Michael Rosen (666). Bruya gives no page reference, because there is none that would show that we recognize something called ‘Methodological Continentalists’ represented by departments like DePaul, Duquesne, and Emory. How such a naked fabrication got through the peer review process is, again, mysterious.”

Reality 8: Mea culpa. Should have given page numbers. Here they are: pp. 2-4. And here is an actual quote from my critique that is more relevant to my point than the presence or absence of page numbers:

“In contrast to Leiter’s definition of ‘Continental philosophy’ in the PGR, Michael Rosen (1998), coeditor with Leiter of the Oxford Handbook of Continental Philosophy (2007), describes the tradition explicitly in terms of methodology. In the first few pages of Rosen 1998, he highlights four of what he calls ‘recurrent issues’ that define the field, each of which has a core methodological component: (1) the method of philosophy; (2) the limits of science and reason; (3) the influence of historical change on philosophy; and (4) the unity of theory and practice. A quote from Leiter and Rosen’s Introduction to their handbook states the point clearly: ‘Where most of the Continental traditions differ is in their attitude towards science and scientific methods. While forms of philosophical naturalism have been dominant in Anglophone [Analytic] philosophy, the vast majority of authors within the Continental traditions insist on the distinctiveness of philosophical methods and their priority to those of natural sciences’ (2007, 4). This is in contrast to Analytic philosophy, which often sees its methods as consistent with, and on the same level as, those of the natural sciences.” (pp. 665-666, n. 13)

Conclusion 8: Appearance of shoddiness is factually incorrect. The lack of a page number does not equal lack of evidence. The evidence is there and more to boot. Further, Leiter uses the rhetorical device of inflammatory language to divert attention away from the actual argument being made. First, he says that I “endorse a myth” in reference to another group of people. I make no reference to such a group, and Leiter provides no evidence of any link between what I claim and what they claim. Second, he claims that my citation is a “naked fabrication,” which, while being factually incorrect, also inflames the reader to indignation and away from the actual arguments made in the article.


Appearance 9: “Bruya repeatedly misrepresents Kieran Healy’s research about the PGR; readers interested in Prof. Healy’s views can start here [link provided].”

Reality 9: “Kieran Healy (2012a) did an analysis of the overall ranking of programs by breaking the evaluators into categories according to specialty. He found wide variation from one specialty to another in their rankings for most of the programs. See his third and fourth figures.” (p. 673, n. 20)

“Healy (2012b) presents an instructive way to visualize this for the 2006 PGR, categorizing the various specialties and areas into twelve what he calls ‘specialty areas.’” (p. 675, n. 22)

“This can be seen clearly in Healy’s (2012b) visualization for the 2006 PGR mentioned in the previous footnote. Each program is represented by a variable-size pie chart, with each wedge representing a category of philosophy (groupings of the thirty-three PGR specialties). Five wedges represent M&E specialties, five history, and two value. Scanning the programs from top to bottom in the figure, at least four out of the five M&E wedges for the top programs are near the maximum size, until one gets to #10 (not counting numerical ties), Harvard. The most revealing is Australian National University (ANU; the PGR has an international ranking as well as a national ranking), which ranks above Harvard, and has sizable wedges in the five M&E categories, sizable wedges in ethics and political philosophy, and no visible wedges at all in the five history categories—proof that one can do well in the rankings relying on M&E and absent history. Georgetown is nearly a mirror image of ANU, with particular strengths in four of the five history categories, along with ethics and political philosophy, but weak in all five M&E categories. Georgetown winds up much farther down the list—#57 (again, not counting ties)—evidence that one cannot do well in the rankings without strengths in M&E, and evidence that strengths in history guarantee nothing. Healy comments, ‘MIT and ANU had the narrowest range, relatively speaking, but their strength was concentrated in the areas that are strongly associated with overall reputation—in particular, Metaphysics, Epistemology, Language, and Philosophy of Mind.’” (p. 678, n. 23)

Conclusion 9: Appearance of shoddiness is unsubstantiated. Leiter makes the accusation that I repeatedly misrepresent Healy’s research without substantiating his claim. Referring the reader to another webpage, which also does not address the issue, does not amount to substantiation of such an inflammatory claim. Again, this kind of rhetorical misdirection diverts attention away from the arguments made in the article, attempting also to reduce the reader’s opinion of the original author by suggesting that the reader need not even continue pursuing the subject.


Appearance 10: “Bruya’s main methodological suggestion (681 ff.) is to aggregate scores in the specialty areas for overall rankings. The Advisory Board discussed this in past years. Since there is no way to assign weights to the specialty areas that would not be hugely controversial and indefensible, the PGR has never adopted such an approach. Bruya has no real solution to the problem (though one may rest assured Chinese Philosophy will count for more!).”

Reality 10: “There are any number of ways that the specialty rankings could be aggregated, but the obvious and simplest way would be to simply sum them for each program.” (p. 681)

“It has been demonstrated above that the PGR uses the dubious method of asking experts in narrow fields to evaluate the overall quality of programs. As I mentioned, there is a valid way to aggregate such information, which is to take the specialty scores that are done individually for each program by small panels of experts within specific specialties and then simply add them up. The program that scores highest for the sum of all ranked specialties gets the highest overall score.¹⁸ I undertook such a mathematical aggregation, taking all the specialty scores for each program, as provided by the PGR, summing them for each program, and then ranking the programs accordingly. The difference between the overall ranking and the mathematically aggregated ranking is quite large, with the average change in rank being four spots (Table 2).” (p. 671)

“If one were to object and say, well, the mathematical aggregation is so crude, it doesn’t account for the size of departments, for focused strengths, and so on. Well, neither does the overall ranking, which has no modalities at all and is just a black box that spits out a number with no rhyme or reason.” (673)
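The summing procedure quoted above is simple enough to sketch in a few lines. The program names and scores here are invented purely for illustration (they are not the PGR’s actual data):

```python
# Hypothetical specialty scores for three programs (names and numbers
# are invented; a higher score means a stronger ranked specialty).
specialty_scores = {
    "Program X": {"Metaphysics": 4.5, "Epistemology": 4.0, "Ethics": 2.0},
    "Program Y": {"Ethics": 4.5, "Ancient": 4.0, "Chinese": 3.5},
    "Program Z": {"Metaphysics": 3.0, "Ethics": 3.0},
}

# Aggregate: sum each program's ranked-specialty scores -- no weights.
totals = {prog: sum(scores.values()) for prog, scores in specialty_scores.items()}

# Rank programs by summed score, highest first.
ranking = sorted(totals, key=totals.get, reverse=True)
print(ranking)  # ['Program Y', 'Program X', 'Program Z']
```

With no weighting, a program’s rank simply tracks how many specialties it has ranked and how well each scores—exactly the crudeness the quoted passage concedes and answers.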

Conclusion 10: Appearance of shoddiness is factually incorrect. Again, Leiter ignores what is plainly in my argument: simply sum the specialty scores, no weighting necessary. There is more that can be said about this process, and I address more subtleties in my argument. In other words, I do provide a real solution. It’s just that Leiter pretends it isn’t there and attempts to divert attention away from it. Further, in saying that this is my “main methodological” suggestion, Leiter implies that this is the only one that really matters. This one matters and so do the other four, all of which Leiter ignores.


Appearance 11: “I predict, with confidence, that no changes to the PGR methodology are likely to result from this very confused critique. I also trust the journal Metaphilosophy will withdraw the article in its entirety given the fabrications, and subject a revised article to a more serious peer-review process, to insure the final version is not so obviously shoddy.”

Reality 11: [No argument is given, so no evidence from article can be offered in response, except for the article in its entirety.]

Conclusion 11: Appearance of shoddiness is unsubstantiated. This is another inflammatory claim made for rhetorical effect. The claim that the article is “confused” suggests that it offers nothing of worth and is not worth wasting even a moment reading. The feigned confidence gives the reader the impression that Leiter is an authority on the matter and so need not be questioned.


Overall conclusion. The appearance of shoddiness of my critique of the PGR is entirely unsubstantiated and uses the rhetorical device of inflammatory language to divert attention away from the arguments made in the article. Not one single argument receives serious attention.

(image: detail of Frank Stella, “Nunca Pasa Nada”)





UPDATE (12/17/15): David Wallace (Oxford) presents a critique of Bruya’s arguments here.
