The following guest post* is by Marcus Arvan (Tampa). Marcus runs The Philosophers’ Cocoon, a helpful blog aimed at early-career philosophers. Last week saw the posting of a report on philosophers’ citation practices by Kieran Healy. Marcus has written on this topic a few times over the years at The Philosophers’ Cocoon (the latest is here), and so I asked him if he would share his take on these issues. As you’ll see, he thinks the problem is more troubling than one might first imagine. Discussion is welcome, as usual.
There has been a great deal of discussion on social media over the past couple of days about Kieran Healy’s report on citation data culled from the 2,100 articles published in four of the most highly ranked philosophy journals (Nous, PPR, Philosophical Review, and Mind) from 1993 to 2013. In brief, Healy reports that although there were no statistically significant gender differences in how often the typical article by men and women in these journals is cited, the top 1% of cited articles by men were cited far more often than the top 1% by women. Finally, Healy suggests this indicates there are significant gender differences in who (men or women) are taken to be “agenda setters” in philosophy (hint: it’s all men).
Most of the discussion I have come across on social media thus far has focused on issues of gender bias. I want to suggest, however, that there is evidence the real problem may run much deeper than this. In a number of posts at The Philosophers’ Cocoon over the past several years, I have asked readers about their general reading and citation habits. A couple of trends emerged. A number of people said, and indeed endorsed, the following practices:
- Only reading a handful of top-ranked journals (viz. “why should I read bad journals?”)
- Only citing articles they bothered to read and drew influence from.
Both of these practices/norms appear to be quite idiosyncratic to philosophy. I know that in some other fields, people generally expect themselves and others to read and cite all recent work in the areas they publish in. Reading and citation, in other words, are not considered to be “honorific” in these fields: one does not merely read and cite journals or work one considers “good.” Rather, one is expected to read and cite everything recent as a matter of basic, sound scholarship.
With this in mind, let us return to Healy’s data. In addition to the aforementioned gender differences, Healy reports several other facts of interest:
- Only 12.5% of articles published in Nous, PPR, Mind, and Philosophical Review from 1993-2013 were by women.
- Almost 1/5 of articles published in the above journals have never been cited at all.
- A little more than 1/2 of all articles in the above journals have been cited fewer than 5 times.
- A very small proportion have been cited over 25 times.
Now let’s think about the math here. Suppose all you did was read these four top journals and cite articles in them in proportion to these citation practices. Next, suppose we define articles cited more than 25 times as “agenda setting” (which actually seems too weak; true agenda-setting work is presumably cited far more than that). Finally, suppose that only, say, 10% of articles in the data set are cited more than 25 times (in line with Healy’s statement that “Getting cited just twenty five times is enough to put a paper in the top decile of the distribution”). This means that, prior to implicit or explicit biases having any opportunity to influence readers’ citation patterns, the probability of an article written by a woman in one of these journals being recognized as “agenda-setting” (in terms of citation counts) is .125 × .10 = .0125, or roughly 1%.
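For readers who want to check the arithmetic, here is a minimal sketch of the calculation. The 12.5% figure is from Healy’s data as summarized above; the 10% “agenda-setting” cutoff is the assumption made in the text, not a reported statistic.

```python
# Back-of-the-envelope calculation of the pre-bias probability that a
# randomly drawn article from the four journals is both by a woman and
# "agenda-setting" (cited more than 25 times).
p_woman = 0.125    # share of articles by women (Healy's data)
p_agenda = 0.10    # assumed share of articles cited more than 25 times

# Absent any bias, treating the two attributes as independent:
p_both = p_woman * p_agenda
print(f"{p_both:.4f}")  # prints 0.0125, i.e. roughly 1 in 100
```

Note that the sketch assumes independence between an article’s authorship and its citation count, which is exactly the “pre-bias” baseline the argument needs.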
This suggests to me, again, that the problem goes much deeper than mere bias. The problem is that philosophers’ reading and citation habits are problematic quite generally. If you only read top-ranked journals and only cite papers you think are “agenda-setting”, then, prior to any gender bias on your part in selecting citations, you will tend to cite women as “agenda-setters” roughly 1 time in 100. As such, if we want to make citation counts more equitable, correcting for implicit and explicit biases alone won’t suffice. We must fundamentally change norms in philosophy about what to read and what to cite. One should not merely read authors or journals one takes to be “good” or “agenda-setting.” Our reading and citation practices should not function as honorifics (ways of recognizing “good work”), for, as we see above, these practices alone suffice to exclude people from being cited (disproportionately, women) as a matter of sheer probability, prior to implicit or explicit bias even entering the picture. Not only that: if anything enables implicit bias to make things worse (beyond the 1% figure calculated above), surely it is these very same practices and norms. If, as a discipline, we are (A) in the habit of only reading and citing articles we take to be agenda-setting, and (B) a significant number of people are in turn biased to treat men, but not women, as “agenda-setting”, then (C) the number of women recognized as agenda-setting is likely to be lower still than the pre-bias 1% figure (.01%?).
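To see how a bias of this kind would compound the pre-bias baseline, here is a purely hypothetical illustration. The bias factors below are assumptions chosen for the sake of the example, not figures from Healy’s report.

```python
# Hypothetical illustration: how a bias in who gets counted as
# "agenda-setting" would compound the pre-bias probability.
p_pre_bias = 0.125 * 0.10  # roughly 1%, as in the calculation above

# bias_factor = fraction of women's qualifying work that readers
# actually treat as agenda-setting (1.0 = no bias; values < 1 = bias).
for bias_factor in (1.0, 0.5, 0.1):
    p = p_pre_bias * bias_factor
    print(f"bias factor {bias_factor}: {p:.4%}")
```

The point of the sketch is only structural: whatever the true bias factor is, it multiplies an already small baseline, so even modest bias drives the figure well below 1%.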
As such, I want to suggest that if we really want to fix the problems Healy’s report describes, we need to replace the norm of only reading and citing journals and articles people consider “good” with an alternative norm utilized in many other fields: the norm of citing everything recent on your topic, good journal or no, good article or no. This alternative norm, as I see it, exists in these other fields for a reason. It exists so that the kinds of problems that Healy’s data illustrate do not arise. If you have to read and cite everything, it becomes much more difficult to systematically exclude people from citation networks, and indeed, from “agenda-setting.”
(image: “Paper Owl” by Irving Harper)