Helpful Remarks Regarding Implicit Bias
Some common criticisms of implicit bias are mistaken, argue John Doris (Washington Univ., St. Louis), Laura Niemi (Duke), and Keith Payne (UNC Chapel Hill) in a recent column at Scientific American.
Increased awareness and study of implicit bias have been accompanied by increased skepticism about it, owing to questions raised about the Implicit Association Test (IAT), an instrument often used to measure implicit bias. This skepticism sometimes surfaces in comments in the philosophical blogosphere, where certain efforts aimed at increasing diversity in philosophy, or projects aimed at studying bias, are criticized for relying on an allegedly “discredited” idea.
Payne, Niemi, and Doris first note that problems with the IAT don’t show that implicit bias doesn’t exist:
The IAT is a measure, and it doesn’t follow from a particular measure being flawed that the phenomenon we’re attempting to measure is not real. Drawing that conclusion is to commit the Divining Rod Fallacy: just because a rod doesn’t find water doesn’t mean there’s no such thing as water.
They say that issues with the IAT should lead us to ask about what other evidence there is for implicit bias. Apparently, “there is lots of other evidence”:
There are perceptual illusions, for example, in which white subjects perceive black faces as angrier than white faces with the same expression. Race can bias people to see harmless objects as weapons when they are in the hands of black men, and to dislike abstract images that are paired with black faces. And there are dozens of variants of laboratory tasks finding that most participants are faster to identify bad words paired with black faces than white faces. None of these measures is without limitations, but they show the same pattern of reliable bias as the IAT. There is a mountain of evidence—independent of any single test—that implicit bias is real.
Second, the authors warn against expecting too much predictive power from the instruments used to measure implicit bias:
It is frequently complained that an individual’s IAT score doesn’t tell you whether they will discriminate on a particular occasion. This is to commit the Palm Reading Fallacy: unlike palm readers, research psychologists aren’t usually in the business of telling you, as an individual, what your life holds in store. Most measures in psychology, from aptitude tests to personality scales, are useful for predicting how groups will respond on average, not forecasting how particular individuals will behave…
What the IAT does, and does well, is predict average outcomes across larger entities like counties, cities, or states. For example, metro areas with greater average implicit bias have larger racial disparities in police shootings. And counties with greater average implicit bias have larger racial disparities in infant health problems. These correlations are important: the lives of black citizens and newborn black babies depend on them.
The authors note that there is an abundance of evidence for persistent “widespread pattern of discrimination and disparities,” despite widespread disavowals of racism. This “bears a much closer resemblance to the widespread stereotypical thoughts seen on implicit tests than to the survey studies in which most people present themselves as unbiased.”
The column is here.
Until psychologists find ways to track the contextualized and continuous behaviors of individuals outside of their labs, any claims to predictive power are speculative at best, a problem not solved by waxing sociological…Report
As for the correlations between the IAT and racial disparities, it seems like there might be some reverse causation at work. That is, places with greater health and lethal-force disparities might develop greater biases as measured by the IAT.
This is another reason why it is important to find individual-level correlations between measures of implicit bias and specific discriminatory acts. Reverse causation seems less plausible here.Report
A nice article, overall. Racial bias clearly exists; hidden racial bias very likely exists. Regardless of the reliability of implicit bias tests, nobody can deny this. As the authors point out, there’s too much evidence that shows this.
But in places the reasoning is odd.
“Most measures in psychology, from aptitude tests to personality scales, are useful for predicting how groups will respond on average, not forecasting how particular individuals will behave. . . Knowing that an employee scored high on conscientiousness won’t tell you much about whether her work will be careful or sloppy if you inspect it right now. But if a large company hires hundreds of employees who are all conscientious, this will likely pay off with a small but consistent increase in careful work on average.”
A good test for conscientiousness ought to tell you the probability that somebody will perform conscientiously. True, this will not tell you whether this person is going to be careful today or tomorrow, but if the test is any good, and if somebody gets a high score, you should have greater confidence that she will be careful. Probabilities work over aggregates; that’s why the company that hires hundreds will do better hiring people who score highly on the test. Equally, an individual employee’s greater care should reveal itself over a period of time, when she has had multiple tasks to perform. The same should be true of a good test of implicit bias.Report
I think this takes the point about implicit bias exactly the wrong way around. The idea should not be to screen prospective employees for implicit bias and exclude those who don’t pass some threshold. The idea rather should be to accept that we who are doing the hiring have implicit biases of our own, and to structure our hiring process in ways that minimize the effects of such bias, or otherwise help us to work around them. In other words, the notion of implicit bias (and the IAT itself) should have a heuristic and corrective function, rather than a forensic or stigmatizing or exclusionary function.Report
I agree with this. I wasn’t thinking about an implicit bias test as a screening or forensic device, and I totally agree that it would be terrible to use it this way. My point was more theoretical, namely that its utility in large samples rests on its probabilistic application to individuals. We shouldn’t take it as a primitive group measure, as it seems the authors do.Report
There are two views of implicit bias here. One is an individual difference view, which is the most common interpretation. On this view, as Mohan says, there is expected to be a probabilistic relationship between the test score and individual behavior. What some of the journalistic critiques have missed is that for a correlation in the .20–.30 range (which is typical of most measures in psychology) you need to average over many (e.g., hundreds of) observations to detect effects. So you can reject a null hypothesis of no relationship between the measure and behavior, but if you were to look at an individual’s test score, you would not be able to intuitively “predict” the individual’s behavior well in single cases.
Another view (my view, recently forwarded in a Psychological Inquiry paper) is that we should think of implicit bias not as an individual difference at all, but only the net effect of all stereotypes activated in an environment at the moment. That is why there are large correlations at the county or state level even though implicit bias tests have very low test-retest stability.Report
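The statistical point in the comment above can be made concrete with a small simulation (my own illustration, not from the article or the commenter): generate a measure that correlates about .25 with behavior, then compare what the score tells you about a single individual versus a large group.

```python
# Illustrative sketch: a measure correlating r ≈ .25 with behavior,
# the range the comment describes as typical in psychology.
import random
import statistics

random.seed(0)
r = 0.25          # assumed test-behavior correlation
n = 10_000        # number of simulated individuals

# Generate (score, behavior) pairs with correlation r.
scores = [random.gauss(0, 1) for _ in range(n)]
behaviors = [r * s + (1 - r**2) ** 0.5 * random.gauss(0, 1) for s in scores]

# Individual level: the score explains only r^2 ≈ 6% of the variance
# in behavior, so single-case "prediction" is barely better than chance.
print(f"Variance explained per individual: {r**2:.2%}")

# Group level: split into low- and high-scoring halves and compare
# average behavior; the modest correlation shows up clearly in aggregate.
paired = sorted(zip(scores, behaviors))
low = [b for _, b in paired[: n // 2]]
high = [b for _, b in paired[n // 2 :]]
print(f"Mean behavior, low-score half:  {statistics.mean(low):+.3f}")
print(f"Mean behavior, high-score half: {statistics.mean(high):+.3f}")
```

On this setup the two halves differ reliably in average behavior even though any one person’s score tells you very little, which is the sense in which such measures predict group outcomes rather than individual conduct.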
I have recently started co-facilitating workshops at my institution on implicit bias in hiring, promotion and tenure, after several months of training. Part of that training included an extensive introduction to the extensive literature on empirical studies of implicit bias.
In response to the column and to the conversation here, three comments:
1. Some of the most compelling research on implicit bias in matters related to employment comes not from the IAT, but from a range of other kinds of studies. In one subset, identical resumes/CVs receive significantly different responses based only on the name at the top (male/female, stereotypically European/stereotypically African-American, etc.). In another subset, analysis of word use reveals differences in how faculty describe male or female candidates in letters of recommendation. Then there is a fascinating longitudinal study of the effect of blind auditions on the composition of major symphony orchestras in the United States. I could go on.
2. What’s missing in a lot of the discussion around implicit bias is a clear sense of what makes it implicit: it involves a cognitive process of implicit association that happens prior to conscious awareness. That our brains make associations is just part of how we survive and get by in the world; the content of those associations is learned or acquired from the culture in which we develop. They can be powerful, but they are not fixed and permanent and inevitable; we tend to fall back on them most readily when we are rushed or tired or distracted. When we are paying attention, we can work around them, even – slowly – remake them. This may help to explain why the detection of implicit bias is not predictive of bias in conduct . . . because prediction (and, let’s be honest, blame) is not the point.
3. I wonder if philosophers are especially uncomfortable with implicit bias because our discipline brings with it a kind of consciousness-bias, an assumption that all cognition and all conduct must ultimately proceed from conscious awareness or conscious intention. The notion that something as morally repugnant as prejudice could proceed from an unconscious cognitive process does not sit well with inheritors of a tradition informed by the likes of Aristotle, Descartes and Kant.Report
Regarding point 1, I heard somewhere that the study where “identical resumes/CVs receive significantly different responses based only on the name at the top” was done using faculty from a Business school at a university. The worry is that Business schools are notorious for having more conservative faculty and are not representative of universities as a whole, so the study can be questioned. This was just something I heard, though.Report
There are many variants on the study. One involved responses to job postings in newspapers in major cities; another, applications for lab-manager positions in science and social-science departments. More directly responding to the implication of your point about “conservative” faculty, implicit bias does not vary all that much across categories. In versions of the study in which the only variable was a male name or a female name at the top of the CV, the responses of male and female reviewers were similar.
Implicit bias can be insidious, that way.
(I seriously recommend everyone try several variants of the IAT, simply for its heuristic value. When asked to sort items against the grain of my own implicit associations, I found myself laughing at how manifestly difficult the task was!)Report
All three points are very helpful. Regarding point three, I think there is another factor likely to exacerbate denial and neglect of unconscious bias in philosophy: even if philosophers accept that there is such a phenomenon, they will see themselves as immune from it, being as they are in the business of uncovering the hidden assumptions and biases behind our judgments. Needless to say, this can make teaching and professional practices in philosophy worse.Report
Thanks for this. Points 1 and 2 are very helpful. On point 3, though, it’s debatable whether ANY philosophers are committed to every action being explicitly reasoned. Following Socrates (on weakness of will), some (including, famously, Donald Davidson) have said that there is something strange about doing something that one explicitly knows to be wrong. But even Plato allowed (and Aristotle was explicit about this) that you might do something without any explicit reasoning . . . for example, out of emotion or “spiritedness.”Report
I was puzzling over that point in Aristotle, especially in light of his insistence that, spiritedness notwithstanding, you are still fully responsible for your own character. In the present context, that would imply that any implicit biases you may harbor are somehow your own fault (that is, blameworthy).
I fully admit that Point 3 is underdeveloped. It’s something that occurred to me while working with a group in one of my classes (philosophy for graduate students in public policy). Their project is on the opioid crisis, and I noted the apparent paucity of philosophical literature on addiction; some of the literature that does exist on the topic seems simply to ignore the possibility that addiction has a biochemical component, casting it instead as a matter of “willpower” or character or moral fiber or some such.
I also recall some recent work on the phenomenon of disgust that explicitly denied any connection of disgust with a physiological response to chemicals associated with spoilage and rot, opting instead to interpret it as a kind of aesthetic judgment.
It just seems to me that implicit association, as framed within cognitive psychology, has a similar kind of unconscious or pre-conscious character, and that philosophers’ responses might be conditioned by a similar aversion. I suspect it has some relation to the controversy over “moral luck” and “impure agency”, but have yet to pursue that lead.Report
Aristotle thought that one’s character is formed by education, which includes a component of habituation. One is responsible for one’s character, but not because it’s formed by reasoning or will power, etc. It seems to me that his model could be applied to implicit bias.Report
My experience is that philosophers in the broadly “analytic” tradition are, as you say, “especially uncomfortable” with empirical work on unconscious processing, especially where that work intimates bias or irrationality, even if it is true, as Mohan Matthen says, that many philosophers are not committed to every (full-blooded?) action being “explicitly reasoned.”
I document this assessment in my book, Talking to Our Selves (Oxford 2015); Arpaly and Kornblith are two philosophers defending similar assessments.Report
The question of responsibility for implicit bias is a difficult one.
While I don’t find thinking in terms of character especially congenial, I agree that indirect approaches are promising: one might be responsible for the dispositions eventuating in implicit bias, even if one is not responsible for particular instances — say by virtue of exculpating ignorance.
For a good place to start on ignorance and implicit bias, try Washington & Kelly.
Hermanson has some critical observations about that Washington & Kelly piece in the final section of “Implicit Bias, Stereotype Threat, and Political Correctness in Philosophy.”
Some reactions from elsewhere
IB research fails to distinguish between two poles of explanation for IB differences: 1) biased reporting of a neutral reality; and 2) neutral reporting of a biased reality. Only the first is actually “bias,” but writers often conflate them.
For example, there are a high proportion of black men who have been in prison. This is not because they are black; melanin does not affect behavior. Rather, it is largely from the second-order effects of being black: racism, police targeting, exposure to gangs, and other social evils that are imposed by others.
Applying test #1: In a utopian sense, prison membership has nothing to do with being black (melanin does not affect behavior). Therefore, with test #1, you assume a neutral reality; people who associate blackness with danger or prison membership are biased and quite likely racists.
Applying test #2: In a real-world sense, prison membership has quite a bit to do with being black. Melanin does not affect behavior… but it does affect social outcomes, due to racism. Due to those outcomes, black people are therefore more likely to have been in prison. Applying test #2, you assume a biased reality; people who associate blackness with danger or prison membership are accurately reporting reality. They’re accurate, not biased.Report
More reactions from elsewhere:
Exceptions noted, the negative responses we’ve gotten have tended to be from philosophers, with psychologists being much more sympathetic.
Interesting question why this should be so.Report