Reconsidering Implicit Bias
At the time of this post, bibliographic philosophy database PhilPapers has 1,975,719 entries. Of these, only 74 works seem to be about “implicit bias”—subconscious bias concerning, for example, race, ethnicity, gender, disability, or sexuality. One might think, then, that the idea of implicit bias hasn’t been of much importance in philosophy. Yet, while there is not a lot of philosophical research on or making use of implicit bias, the idea has been professionally significant, playing a role in the discipline’s self-examination about its racial and gender disparities (see here, here, here, here, here, here, and here for some previous posts on these disparities). Beyond the world of philosophy, training sessions in academia and the business world regularly make use of implicit bias to address matters of workplace diversity.
You can take the implicit association test (IAT) here, adding your results to those of the over 17 million other people who’ve taken it, and learn your implicit bias score, which is supposed to correlate with your propensity for engaging in discriminatory behavior. The creators of the IAT, Mahzarin Banaji and Anthony Greenwald, have written:
[T]he automatic White preference expressed on the Race IAT is now established as signaling discriminatory behavior. It predicts discriminatory behavior even among research participants who earnestly (and, we believe, honestly) espouse egalitarian beliefs. That last statement may sound like a self-contradiction, but it’s an empirical truth. Among research participants who describe themselves as racially egalitarian, the Race IAT has been shown, reliably and repeatedly, to predict discriminatory behavior that was observed in the research.
But what does the IAT really tell us? In an in-depth examination of implicit bias in New York Magazine (from which the above quote is pulled), author Jesse Singal writes:
it might feel safe to assume that the IAT really does measure people’s propensity to commit real-world acts of implicit bias against marginalized groups, and that it does so in a dependable, clearly understood way…
Unfortunately, none of that is true. A pile of scholarly work, some of it published in top psychology journals and most of it ignored by the media, suggests that the IAT falls far short of the quality-control standards normally expected of psychological instruments. The IAT, this research suggests, is a noisy, unreliable measure that correlates far too weakly with any real-world outcomes to be used to predict individuals’ behavior—even the test’s creators have now admitted as such.
Here are just a few of the problems with the IAT and assessing implicit bias that Singal reports:
- some of the early papers that did claim to find a link between IAT scores and discriminatory behavior had backbreaking problems that wouldn’t be discovered until much later on.
- there doesn’t appear to be any published evidence that the race IAT has test-retest reliability that is close to acceptable for real-world evaluation. If you take the test today, and then take it again tomorrow—or even in just a few hours—there’s a solid chance you’ll get a very different result.
- when you use meta-analyses to examine the question of whether IAT scores predict discriminatory behavior accurately enough for the test to be useful in real-world settings, the answer is: No… One important upcoming meta-analysis… found that such scores can explain less than one percent of the variance observed in discriminatory behavior.
- the statistical evidence is simply too lacking for the test to be used to predict individual behavior.
- [the IAT’s advocates] don’t appear to have fully explored alternate explanations for what the IAT measures… high IAT scores may sometimes be artifacts of empathy for an out-group, and/or familiarity with negative stereotypes against that group, rather than indicating any sort of deep-seated unconscious endorsement of those associations.
- the test’s scoring convention assumes that a score of zero represents behavioral neutrality—that someone with a score at or near zero will treat members of the in-group and out-group the same. But Blanton and his colleagues found that in those studies in which the IAT does predict discriminatory behavior, there’s a “right bias” in which a score of zero actually corresponds to bias against the in-group. This offers even more evidence that there is something wrong with the entire basic scoring scheme.
- the IAT team adopted completely arbitrary guidelines regarding who is labeled by Project Implicit as having “slight,” “moderate,” or “strong” implicit preferences. These categories were never tethered to any real-world outcomes, and sometime around when the IAT’s architects changed the algorithm, they also changed the cutoffs, never fully publishing their reasoning.
There is quite a bit more in Singal’s article, including some thoughts on how a test with apparently so many flaws became so popular a tool, and accounts of one IAT creator’s defensive responses to critics.
As he says, “race is a really, really complicated subject.”
(Thanks to Robert Long for bringing the New York Magazine article to my attention. See also this piece in The Chronicle of Higher Education.)
Thank you! I have been telling people about these problems for years to little avail, and I infer there have been others who have done the same.
I think this would be a good time for many people who like to talk about the causes of, and solutions to, society’s ills to take this as a chance to remember:
- Confirmation bias is still alive and well, even in philosophers who care a lot about social justice.
- Being good at philosophy doesn’t always make you suddenly great at assessing or interpreting empirical evidence.
- Sometimes when someone is arguing against certain parts of your theory on the causes of oppression, they’re not committing epistemic injustice.
Amen. As a philosopher, I think the really embarrassing thing is that philosophers are almost always quick to criticize and reject IQ tests, and yet so many of them fell for the implicit bias fad. In fact, IQ tests are much more psychometrically reliable and valid (flawed though they are) than the IAT, and this has been known for many years.
How much does the use to which ‘implicit bias’, as a theoretical construct, has been put in explaining gender and racial disparities in philosophy depend on this one test alone? (A genuine, not rhetorical, question.)
On the one hand, this is a cautionary tale about how we should be very cautious with taking the latest research from a soft science like psychology as gospel, just because it flatters our political preconceptions. (Assuming that there were in fact people who did that with the IAT). On the other hand, if many other studies standardly support the claims made about the scope and nature of implicit bias by people attempting to explain the disparities, it would also be bad if that work got tarred with guilt by association because what people take from this story about the IAT is ‘oh, all that implicit bias stuff has been debunked’. Though we should certainly look carefully and critically at any other research that people are relying on here.
Thanks for this. There are other tools for analysis of implicit attitudes aside from the IAT. I hope folks don’t end up endorsing the conditional “if the IAT is theoretically shaky, then all measures of implicit attitudes are theoretically shaky.”
Yes, it is often overlooked that the IAT is an imperfect *measure* of implicit bias, and certainly not the only measure.
The massive literature on behavioral discrimination suggests some bias, presumably of a disavowed kind, since explicit prejudice has decreased over the past several decades (though I wouldn’t be surprised to see a bit of an uptick in explicit bias in the US in the last 5 years or so, at least among some groups…). It is the *combination* of discrimination in lab settings (e.g., equivalent resumes judged differently because of the race or gender they signal) *and* avowed anti-bias attitudes that is excellent evidence of the presence of implicit bias.
Another especially promising measure of implicit bias is the Affect Misattribution Procedure; see here: http://onlinelibrary.wiley.com/doi/10.1111/spc3.12148/abstract
This isn’t to suggest the IAT is worthless. It may well predict *group* biases, even if it is poor at predicting *individual* biases. Also, there are other meta-analyses that show a modest relationship between IAT and individual behavior, so I’m not sure how to square those with the results Singal relies on. There are mountains of IAT data, including other meta-analyses, so I’d urge caution in reaching final conclusions. The truth is likely that the jury is still out on the IAT (but again, other measures of implicit bias are independent of the IAT).
Could you point us towards some of those other tools?
I take it resume studies would be one such, and that they are amenable to many if not all the practical uses to which the IAT is put?
Jonathan, I posted above before reading your comment. But as it turns out, my comment might address your question on Charles Lassiter’s behalf:
1. Yes, resume studies are good evidence of implicit bias (as are loads of other behavioral studies, e.g., studies of helping behavior).
2. Studies using the Affect Misattribution Procedure are also great evidence of bias and may speak more directly to the question of the nature of bias.
I so agree; and I do think I see a bit of evidence that people are making that sort of fallacious inference. Also, I’d recommend this article: Jost et al, “The existence of implicit bias is beyond reasonable doubt”, Research in Organizational Behavior 29 (2009) 39–69.
Since it may be of interest I thought I’d post a link to a series of posts that were published at the Brains blog by some of the contributors to Michael Brownstein and Jenny Saul’s two-volume collection /Implicit Bias and Philosophy/. Readers can judge for themselves whether those authors make the mistakes that Singal is diagnosing here: http://philosophyofbrains.com/category/books/brownstein-and-saul-implicit-bias-and-philosophy
(Of particular interest will be this post by Edouard Machery, which addresses directly the low predictive validity of the IAT and other such measures, and proposes a way to understand it: http://philosophyofbrains.com/2016/04/14/what-is-an-attitude.aspx. For those with access to Oxford Scholarship Online, his full chapter is available here: http://www.oxfordscholarship.com/view/10.1093/acprof:oso/9780198713241.001.0001/acprof-9780198713241-chapter-5.)
Thanks for those useful links. After looking over the main claims of the articles (caveat: NOT having read all of them) I would say yes, they do have the problems Singal worries about, assuming he has identified real problems. The issue is that the IAT is supposed to measure racial bias, whereas the critique claims that it measures nothing at all (at least nothing that is morally or politically interesting). The entire collection of essays seems predicated on the assumption that the IAT does in fact measure racial bias.
Heath, I encourage you to take another look at the articles. It’s not true that all the chapters are substantially premised on the assumption that the IAT measures racial bias. I’ll just speak for my chapter: though it does briefly report some IAT research, the arguments only assume the common phenomenon of discovering that you had an attitude that you did not know you had, and that it may have influenced your behavior. There are a host of interesting philosophical issues surrounding unconscious or uncontrollable beliefs, motivations, desires, and other attitudes, and these issues are independent of the IAT.
Singal’s article is impressive and it is good that the IAT is getting some serious scrutiny. But if the pro-IAT hype was excessive, it strikes me that so too is the current anti-IAT frenzy. I think the jury is still very much out about the IAT. Three quick points about Singal’s article specifically (drawn from my Facebook post about this):
- In places it looks like Singal is saying: “Banaji and Greenwald repeatedly said the IAT can walk on water. The IAT can’t walk on water. Therefore, the IAT is a failure.” Bad argument.
- Singal says an instrument needs a reliability of .8 or it is toast. Simply. Plain. Wrong. TONS of instruments in widespread use in psychology (Conners CPT, Tower of Hanoi) have lower reliabilities.
- Using “percent variance explained” as the sole metric of predictive validity can be deeply misleading. It is even worse when you measure predictive validity in a *single* interaction, when the construct at issue, racial bias, is presumed to operate over a multitude of interactions. These points were made decades ago by, among many others, Jacob Cohen (see Statistical Power Analysis for the Behavioral Sciences, p. 90). Robert Abelson offers a vivid illustration of the problem using baseball batting averages (“A Variance Explanation Paradox: When a Little is a Lot”). It would be well worth our time to relearn these lessons.
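Abelson’s batting-average point can be sketched numerically. The sketch below uses my own illustrative numbers (not Abelson’s or Singal’s figures): a predictor that explains well under 1% of the variance in a single binary event can still produce a large, reliable difference once events accumulate.

```python
import random

# Two hypothetical batters whose true averages differ by 50 points
# (illustrative numbers only, not from Abelson's paper).
p_a, p_b = 0.270, 0.320
p_bar = (p_a + p_b) / 2

# Squared point-biserial correlation between "which batter is up"
# and the hit/no-hit outcome of a SINGLE at-bat:
r2_single = (p_b - p_a) ** 2 / (4 * p_bar * (1 - p_bar))
print(f"variance explained per at-bat: {r2_single:.4f}")  # ~0.003, i.e. 0.3%

# Yet over a 500-at-bat season, that tiny per-event effect dominates:
# simulate many seasons and see how often the better hitter ends up ahead.
random.seed(0)
n_at_bats, n_seasons = 500, 2000
b_ahead = 0
for _ in range(n_seasons):
    hits_a = sum(random.random() < p_a for _ in range(n_at_bats))
    hits_b = sum(random.random() < p_b for _ in range(n_at_bats))
    b_ahead += hits_b > hits_a
print(f"better hitter finishes ahead in {b_ahead / n_seasons:.0%} of seasons")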
I don’t think you’re portraying Singal’s arguments fairly. The article isn’t about “walking on water” -type claims. His point is simply that the evidence doesn’t support the notion that the IAT reveals hidden discriminatory attitudes, which is important because the belief that the IAT does reveal hidden biases is the only reason why it has become famous and widely used.
Singal doesn’t say that a test is “toast” if it doesn’t have a reliability of .8. Instead he says that the adequate level of reliability depends on context and that most researchers are comfortable with a reliability of .8. He also notes that high reliability is important in tests “designed to provide important information from someone based on a single test-taking session”, which is how the IAT is often used. In any case, it’s bad that there’s little published information on the reliability of the IAT. What evidence there is suggests that the IAT’s reliability may be as low as .4, which is awful. For an illustration, if you had an IQ test with mean=100, SD=15, reliability=.4, the 95% confidence interval for your score in that test would be ±19 points, meaning that the same individual could quite plausibly be classified by the test as either mentally disabled or smarter than the average.
Until it has been established what, if anything, the IAT measures, it’s quite premature to speculate how small effects of implicit bias might accumulate into larger effects over many situations. One thing to consider that Singal briefly discussed is that “in those studies in which the IAT does predict discriminatory behavior, there’s a ‘right bias’ in which a score of zero actually corresponds to bias against the in-group.” This suggests that the reported IAT-discrimination associations, small as they are, may still be inflated.
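For what it’s worth, the IQ illustration above can be checked with the standard error of measurement, SEM = SD · √(1 − reliability). On that convention a reliability of .4 gives an SEM of about 11.6 points; the half-width of the resulting score band depends on the confidence level assumed (a 90% band is roughly the ±19 points cited, a 95% band closer to ±23). A quick sketch:

```python
import math

# Standard error of measurement: SEM = SD * sqrt(1 - reliability).
# IQ-style scale from the comment above; reliability is the assumed
# test-retest figure, not an established property of any real test.
sd, reliability = 15.0, 0.4
sem = sd * math.sqrt(1 - reliability)
print(f"SEM = {sem:.1f} points")  # 11.6

# Half-widths of the score band at two common confidence levels:
for z, level in ((1.645, "90%"), (1.960, "95%")):
    print(f"{level} band: +/-{z * sem:.0f} points")  # +/-19 and +/-23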
Mazirian, you make many good points, but let me push back. Why in the world are we treating the IAT as a diagnostic instrument, like an IQ test? It is not. It is a dependent measure in experimental research, nothing more and nothing less.
The IAT is actually quite similar in many ways to classic interference tasks such as the Stroop, Simon, and Flanker tasks. No one thinks that the magnitude of a person’s Stroop interference effect should predict with a high degree of accuracy their real-world behaviors (e.g., failure to suppress an automatic urge to eat cookies), let alone predict what they will do in a *single* observational episode. And the Stroop interference effect has quite poor test-retest reliability, way less than Singal’s .8 cutoff, which is why large n’s are often needed. It is also notable that after six decades, we aren’t totally sure what the hell the Stroop task actually measures–whatever it is, it’s surely messy. Yet the Stroop and other interference tasks are used widely, and we can learn (and have learned) much from them, often by combining them with other measures to triangulate on the phenomenon of interest.
Why hold the IAT to a standard higher than this? Or tell me how the IAT is any different from the Stroop along all the dimensions that Singal criticizes.
The reason for treating the IAT differently than say the Stroop test is that from the outset the IAT has been promoted as tapping into individuals’ unconscious prejudices as if it were a diagnostic tool. There are hundreds of studies where the IAT is used as an independent variable supposedly measuring unconscious biases that cause discriminatory behaviors. This interpretation is also clear from the investigators’ public statements, from how the Project Implicit website reports test takers’ “bias levels” (slight, moderate or strong, based on completely arbitrary cut-off points), and from how the IAT is treated in many popular books and articles and also many scholarly publications, even if there’s been some pushback against this interpretation. It’s like if the Stroop test had been marketed from the start as the Cookie Urge Test. The chief problem of the IAT is perhaps this greedy reductionism, the idea that a reaction time measure is a royal road to the unconscious.
Mazirian, I agree with everything you just said! I don’t mean to sound snarky, but I feel that your point is really exactly the one I made at the start of my first comment: Banaji and Greenwald repeatedly said the IAT can walk on water (i.e., it is a potent diagnostic instrument – a window into one’s soul) and marketed it that way. Of course the IAT can’t walk on water, as Singal usefully documents. But that should not be taken to imply the IAT is itself a failure. The IAT is just an ordinary dependent measure that joins the multitude of imperfect, semi-reliable, weakly predictive dependent measures that psychologists deploy.
Sorry for asking for a free tutorial, but what is the connection between the Stroop effect and eating cookies? I thought the Stroop effect is the one where it’s harder to understand the word “red” when it’s printed in green, etc.–how is that expected to relate to cookies?
replying to Matt Weiner below: the Stroop effect might involve inhibition of an automatic response tendency, and if inhibition is a fairly unitary phenomenon, then you might predict behaviors that reflect loss of such inhibition. It is a silly example — my point is really that the Stroop won’t predict behavior well.
“Until it has been established what, if anything, the IAT measures, it’s quite premature to speculate how small effects of implicit bias might accumulate into larger effects over many situations.”
This is /exactly/ the sort of inference I think we must strenuously resist. At most you get that it’s premature to speculate how small effects of //whatever IATs measure// might accumulate over many situations. But IB /= “what IATs measure”. And there are plenty of easy ways to argue apparently convincingly that the effects of IB can accumulate — consistent, by the way, with the overwhelming amount of testimony about what it is like to be /subjected/ to bias from those on the receiving end of it.
One example where I’m the bad guy. Recently, I pulled someone up for queue-jumping, in an unpleasant way. It struck me afterwards that my unconscious assessment of the power differential between the person I told off and myself may well have been what tipped me over to acting, rather than biting my tongue, which I most often do. I’m fairly confident that I rate extremely low on any explicit bias measure you care to throw at me. And of course I can multiply examples like this ad infinitum, which I think, if we’re honest, most white males (and many others) can. Is it implausible that /such effects/ can add up over a lifetime? Obviously not.
Implicit bias is a phenomenon; the IAT is an attempt to measure that phenomenon. One cannot move from the claim that the IAT is not a valid measure of a phenomenon to any claims about the phenomenon itself, as the original post and several subsequent comments appear to (cf. Charles Lassiter’s comment above).Report
For the record, I agree that many other forms of implicit bias have very strong evidence behind them (e.g., non-Caucasian names on job applications getting fewer callbacks, instructors with feminine names receiving lower evaluations from students even in online courses where the instructor is never seen). But I took a substantial part of the worry to be not just that the IAT is an invalid measure, but the extent to which so many of us philosophers have been *taking* the IAT to be a valid measure of implicit bias.
In my experience almost everyone confidently took the IAT to be something relevant to understanding the extent to which such biases are pervasive in society, the way they develop, their relationship to certain undesirable outcomes for certain groups, and the best means of remedying those outcomes. And they arrived at such a degree of confidence in the validity of the IAT that they often treated it as basically able to be taken for granted. Finding the IAT is invalid supports a legitimate worry: if many of us thought the IAT was valid, our equally confident beliefs about related phenomena may also not be well justified or based on an accurate understanding of the evidence (e.g., I regularly see people confidently assert that stereotype threat plays a clear explanatory role in gender achievement gaps, an inference numerous psychologists have specifically said their research does not support).
“non-Caucasian names on job applications getting lower call backs”
Is that really evidence of implicit bias? Couldn’t it instead be that employers consciously think that people with certain kinds of names are less likely to have the qualities they want in employees? Note also that the black name effect vanishes when the name used isn’t obviously associated with low socioeconomic status, even if it’s “non-Caucasian”. It seems that many people attribute to unconscious biases effects that can be explained by conscious taste-based or statistical discrimination.
Are you suggesting that we take the large number of people participating in these studies to have explicit bias? I take that to be an exceedingly unlikely hypothesis. It’s also one that’s been tested:
“It seems unlikely that the huge racial gap in callbacks demonstrated by Bertrand and Mullainathan (2004) could be explained by the low levels of explicit, self-reported prejudice captured by public opinion surveys cited by Tetlock and Mitchell (2009). However, one cannot be perfectly certain that the discriminatory behavior exhibited by the 1,300 employers in Boston and Chicago was a function of implicit rather than explicit racial biases. Fortunately, two subsequent studies provide more direct evidence that the kinds of race-based hiring biases identified by Betrand and Mullainathan are linked to implicit prejudice. First, in a follow-up study, Bertrand, Chugh, and Mullainathan (2005) found that scores on an implicit stereotyping task involving race and intelligence were correlated with students’ likelihood of selecting resumes with African American names, especially among participants who felt rushed while completing a resume selection task.
Second, Rooth (2007) conducted an illuminating field study using human resource personnel as participants. He examined whether job applicants were contacted for interviews and also administered the IAT to employment recruiters in a Swedish replication and extension of the Bertrand and Mullainathan (2004) study. Using either common Arab or Swedish male names, Rooth (2007) submitted a series of otherwise comparable applications for several different job openings in Stockholm and Gothenburg. The occupations were selected to be highly skilled or unskilled,
and they varied in the extent to which they were commonly held by Arabs (e.g., teachers, accountants, restaurant workers, and motor vehicle drivers). From a total of 1552 submitted applications, in 522 cases at least one applicant was contacted by the employer and invited for an interview. When only one applicant was contacted, 217 times this candidate were Swedish (42%) and only 66 times was he or she an Arab (13%); thus, Swedish applicants were three times more likely than Arab applicants to be offered interviews.”
Jost et al, 2009, “The existence of implicit bias is beyond reasonable doubt”.
Also, the names chosen are usually carefully selected as being markers exclusively of the tested variable, as in, for example, Steinpreis, Anders, and Ritzke’s original CV study. “The Impact of Gender on the Review of the Curricula Vitae of Job Applicants and Tenure Candidates: A National Empirical Study”. Admittedly, in that study, they explicitly say only that the names do not indicate either race or age (p. 515), but I think it’s exceedingly unlikely that it’d be an indicator of socio-economic status.
I agree with Abraham Graber. We’re looking at concerns about the IAT, not about implicit bias. None of this is any reason to doubt that bias, for example racial bias, is an important factor in human behaviour even among people with egalitarian explicit beliefs.Report
Except the IAT has been pushed as one of the supposedly most powerful instruments for detecting this bias that you confidently assert is “an important factor in human behaviour even among people with egalitarian explicit beliefs.”
If the IAT is of little predictive or explanatory value, then perhaps that in itself gives us no reason to doubt what you claim. But it also gives us little reason to believe it.Report
Fortunately, there’s PLENTY of reason to believe the claim, which by the way, is not a surprising or extraordinary one, but one that’s bang in line with the developing understanding of our mental lives over the last century or so. Again, Jost et al, 2009, “The existence of implicit bias is beyond reasonable doubt”.
Thanks for clearing up a lot of misunderstandings in the comment section, Ole. As someone who is currently working on a correspondence study on labour discrimination, and who is involved in a project on reducing implicit bias, I find your remarks very much on point.
For reconsideration of “microaggressions”, see Scott Lilienfeld’s and Jonathan Haidt’s eviscerations of the concept in the latest issue of Perspectives on Psychological Science (also featuring a piece by the concept’s inventor):
For reconsideration of “stereotype threat”, see Lee Jussim:
Worries about replication w.r.t. stereotype threat are well-known and worth discussing.
But the piece you describe as “eviscerating” micro-aggressions is a hatchet job. The authors suggest five features micro-aggressions “must” have and then argue that nothing has those features. As it turns out, those combined features are not ones that any micro-aggressions theorist would or should impute to micro-aggressions. The piece “eviscerates” a strawman.
I’m surprised to see Haidt’s piece cited here, as it seems to have no empirical content whatsoever. It’s ironic that Lilienfeld calls for an immediate stop to all microaggression training until the concept can be proved rigorously, and Haidt cheerleads for this immediate stop based on a bunch of stuff that is the opposite of rigorous.* So are we to wait for rigorous social science before we change our practices, or not?
It also doesn’t instill confidence that when I followed a random citation from Lilienfeld on the idea that emphasis on microaggressions “perpetuates a victim culture” I found this nonsense, which doesn’t even attempt to provide any empirical support but merely consists of a bunch of vituperative insults (“ridiculous,” “hand-wringing,” “clearly irrational”) directed at anyone who dares to take offense at comments like “When I look at you, I don’t see color.”
*”Might entire democracies be tipped into a state of constantly rising grievance mongering, mistrust, and demands for silencing the other side? If you think American democracy is polarized and dysfunctional in 2016, just wait until the baby boomers have aged out of leadership positions and the country is run by a millennial elite trained at our top schools, which immersed them in a microaggression program for 4 years.”
It’s Jussim’s research to which Haidt refers when he declares that denying the accuracy of stereotypes is on a par with denying evolution, climate change, and the age of the earth, isn’t it? link
Thinking it over more, this makes me actively angry. Lilienfeld criticizes the literature on microaggressions as requiring more conceptual clarity. Haidt takes this as a definitive refutation and dances on the grave of the concept. Jussim goes around loudly proclaiming that stereotypes are accurate, and Bain and Ciprian criticize this literature as requiring more conceptual clarity.* Haidt nevertheless declares the denial of stereotype accuracy to be on a par with climate-change denial and young-earth creationism.
Jamelle Bouie has been warning that “heterodox academy” types are going to try to bring race science into the mainstream. Hard to see him as wrong.
*Jussim’s response seems to me unresponsive; when he says that generic beliefs are “inherently inaccurate,” I honestly don’t think he’s understood the concept of a generic belief. He also seems to get the burden of proof wrong. Sarah-Jane Leslie says that more people are inclined to assert the generic “Muslims are terrorists” than the generic “Muslims are women,” and Jussim responds that she hasn’t proven this. But if he’s going to proclaim a robust result that stereotypes are accurate, isn’t he under the obligation to show that people don’t hold this inaccurate stereotype?
(Since I was dragging the Telegraph for misusing quotes, I should own up that Bouie does not use the words “heterodox academy”; I was intending that as scare quotes. He talks about a push for “intellectual diversity” using scare quotes of his own.)
I’m glad you indicated that the work done on implicit bias in philosophy has focused almost exclusively on gender and race. I discuss the limitations of this sort of analysis in my forthcoming book and recently posted at the Discrimination and Disadvantage blog about this issue. You can read that post here: http://philosophycommons.typepad.com/disability_and_disadvanta/2017/01/are-there-disabled-philosophers-on-the-faculty-of-your-department.html
Happy New Year,
I have now written a post at the Discrimination and Disadvantage blog entitled “Implicit Bias and Disabled Philosophers.” In the post, I excerpt the aforementioned passage from my forthcoming book. You can read the post at our blog here:
My understanding is that these effects are quite strong by comparison. I am familiar with this review http://www.sciencedirect.com/science/article/pii/S0891422213004903 but perhaps you could recommend other results on implicit disability bias and the IAT?
Thanks for your response to my comment. Would you please ask it over at the Discrimination and Disadvantage blog so that I can respond to it there? And, in doing so, would you please clarify what “these effects” refers to? It’s not clear to me whether you are referring to implicit biases or to something that I mentioned in my blog post. Thanks!
Sorry for the confusion, I was referring to your comment concerning the effect of negative implicit attitudes and disability.