Reconsidering Implicit Bias


At the time of this post, the bibliographic philosophy database PhilPapers has 1,975,719 entries. Of these, only 74 works seem to be about “implicit bias”—subconscious bias concerning, for example, race, ethnicity, gender, disability, or sexuality. One might think, then, that the idea of implicit bias hasn’t been of much importance in philosophy. Yet, while there is not a lot of philosophical research on or making use of implicit bias, the idea has been professionally significant, playing a role in the discipline’s self-examination about its racial and gender disparities (see here, here, here, here, here, here, and here for some previous posts on these disparities). Beyond the world of philosophy, training sessions in academia and the business world regularly make use of implicit bias to address matters of workplace diversity.

You can take the implicit association test (IAT) here, adding your results to those of the over 17 million other people who’ve taken it, and learn your implicit bias score, which is supposed to correlate with your propensity for engaging in discriminatory behaviors. The creators of the IAT, Mahzarin Banaji and Anthony Greenwald, have written:

[T]he automatic White preference expressed on the Race IAT is now established as signaling discriminatory behavior. It predicts discriminatory behavior even among research participants who earnestly (and, we believe, honestly) espouse egalitarian beliefs. That last statement may sound like a self-contradiction, but it’s an empirical truth. Among research participants who describe themselves as racially egalitarian, the Race IAT has been shown, reliably and repeatedly, to predict discriminatory behavior that was observed in the research.

But what does the IAT really tell us? In an in-depth examination of implicit bias in New York Magazine (from which the above quote is pulled), author Jesse Singal writes:

it might feel safe to assume that the IAT really does measure people’s propensity to commit real-world acts of implicit bias against marginalized groups, and that it does so in a dependable, clearly understood way…

Unfortunately, none of that is true. A pile of scholarly work, some of it published in top psychology journals and most of it ignored by the media, suggests that the IAT falls far short of the quality-control standards normally expected of psychological instruments. The IAT, this research suggests, is a noisy, unreliable measure that correlates far too weakly with any real-world outcomes to be used to predict individuals’ behavior—even the test’s creators have now admitted as such.

Here are just a few of the problems with the IAT and with assessing implicit bias that Singal reports (the reliability and variance-explained points are illustrated in a short sketch after the list):

  • some of the early papers that did claim to find a link between IAT scores and discriminatory behavior had backbreaking problems that wouldn’t be discovered until much later on.
  • there doesn’t appear to be any published evidence that the race IAT has test-retest reliability that is close to acceptable for real-world evaluation. If you take the test today, and then take it again tomorrow—or even in just a few hours—there’s a solid chance you’ll get a very different result.
  • when you use meta-analyses to examine the question of whether IAT scores predict discriminatory behavior accurately enough for the test to be useful in real-world settings, the answer is: No…  One important upcoming meta-analysis… found that such scores can explain less than one percent of the variance observed in discriminatory behavior.
  • the statistical evidence is simply too lacking for the test to be used to predict individual behavior.
  • [the IAT’s advocates] don’t appear to have fully explored alternate explanations for what the IAT measures… high IAT scores may sometimes be artifacts of empathy for an out-group, and/or familiarity with negative stereotypes against that group, rather than indicating any sort of deep-seated unconscious endorsement of those associations.
  • the test’s scoring convention assumes that a score of zero represents behavioral neutrality—that someone with a score at or near zero will treat members of the in-group and out-group the same. But Blanton and his colleagues found that in those studies in which the IAT does predict discriminatory behavior, there’s a “right bias” in which a score of zero actually corresponds to bias against the in-group. This offers even more evidence that there is something wrong with the entire basic scoring scheme.
  • the IAT team adopted completely arbitrary guidelines regarding who is labeled by Project Implicit as having “slight,” “moderate,” or “strong” implicit preferences. These categories were never tethered to any real-world outcomes, and sometime around when the IAT’s architects changed the algorithm, they also changed the cutoffs, never fully publishing their reasoning.
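
The two statistical points above are easy to make concrete. Here is a minimal sketch in Python (using numpy), with entirely made-up, simulated data rather than anything from Project Implicit: test-retest reliability is just the correlation between two administrations of the same measure to the same people, and “variance explained” is the square of the correlation between scores and behavior.

```python
# Illustrative only: simulated data, not Project Implicit data or the IAT team's code.
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Hypothetical scores from the same people on two occasions, with large
# session-specific noise relative to the stable underlying attitude.
true_attitude = rng.normal(0.0, 1.0, n)
session_1 = true_attitude + rng.normal(0.0, 1.2, n)
session_2 = true_attitude + rng.normal(0.0, 1.2, n)

# Test-retest reliability: correlation between the two administrations.
test_retest_r = np.corrcoef(session_1, session_2)[0, 1]
print(f"test-retest reliability r = {test_retest_r:.2f}")

# Hypothetical behavioral outcome only weakly related to the score.
behavior = 0.08 * session_1 + rng.normal(0.0, 1.0, n)
r_score_behavior = np.corrcoef(session_1, behavior)[0, 1]
print(f"score-behavior r = {r_score_behavior:.2f}, "
      f"variance explained r^2 = {r_score_behavior**2:.3f}")
```

The arithmetic behind the second point is simple: a score-behavior correlation of roughly 0.1 corresponds to r² of about 0.01, i.e. around one percent of the variance in behavior, which is the order of magnitude Singal reports for the meta-analysis he cites.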

There is quite a bit more in Singal’s article, including some thoughts on how a test with apparently so many flaws became so popular a tool, and accounts of one IAT creator’s defensive responses to critics.

As he says, “race is a really, really complicated subject.”

(Thanks to Robert Long for bringing the New York Magazine article to my attention. See also this piece in The Chronicle of Higher Education.)

Kip Omolade, “Diovadiova Chrome Karyn III and Kitty Kash II”
