Philosophers and Cognitive Bias
Should the order in which a person considers thought experiments affect her responses to them? Rationally, it seems not. Yet the “order effect” is well-confirmed. What about philosophers? We are supposed to have a kind of expertise in handling thought experiments and are known (?) for thinking clearly and rationally; certainly the content of our judgments is not susceptible to something as trivial as the order in which we are prompted to make them! Um…
We were unable to find any level of expertise at which the order effects were detectably reduced. Nor did adding a reflection condition appear to reduce the order effects…. If anything, the trend appears to be toward larger order effects with increasing expertise.
That’s Eric Schwitzgebel, reporting on a recent study he conducted with Fiery Cushman. It is but the latest piece of evidence that not just the lay folk, but also those who have long understood themselves as experts in rationality, the members of the discipline of philosophy, are no less susceptible to cognitive biases.
The significance of these kinds of findings for philosophy is, in my view, major, and has yet to be sufficiently appreciated.
In case one is wondering whether *all* cognitive biases are equally robust across levels of (philosophical) expertise, it is worth noting that education/selection in philosophy predicts a significant reduction in the likelihood of committing a certain kind of cognitive error on a certain cognitive test (Livengood, Sytsma, Feltz, Scheines, and Machery 2010). I have been able to replicate this result.
If the neoliberal borg isn’t the end of academic analytic philosophy, the nascent but ever more concrete research on cognitive biases may well lead other outsiders to see it as hopelessly outdated, as I do (a world better left behind, if you will). But of course the cognitive biases of those inside the faith matrix will largely make them resistant to such factual corrections, so round and round we go, doing, as Stanley Fish noted, what comes naturally.
Maybe I don’t understand what ‘order bias’ is, but I’m not sure I see why it’s irrational to exhibit it. When I teach ethics, I always start the students out with the plain Jane version of the trolley problem, and then gradually modify it (building up to the surgery case). I’m sure I wouldn’t get the same result if I presented the cases in a different order, and I don’t think this is due to the irrationality of my students. Evaluating a thought experiment, presumably, is about recognizing certain of its features as salient to the question at hand, others not. And surely it’s the case that we often have to be ‘primed’ to be able to recognize something as salient (this is just how the Socratic method works–think of the discussion with the slave boy in the Meno). I don’t think there’s anything irrational about this. It’s just part of what it means to be finite knowers who develop our thoughts discursively over time, rather than all at an instant.
This may be of interest: http://philpapers.org/rec/MIZTAA
“It’s just part of what it means to be finite knowers who develop our thoughts discursively over time, rather than all at an instant.”
But these aren’t people who developed their thoughts “all at an instant,” in response to the (ordered) thought experiments. Rather, they are professional philosophers — people who have already thought about the trolley problem, and presumably should have come into the experiment with opinions about it. This is what is so disturbing to me: these people should have had previously formed beliefs which they simply reported to the experimenters. But apparently that’s not what happened.
Perhaps I’m too cynical, but as I understand much of intuition-based ethical work, such as trolleyology, the process is basically this: we individually and collectively have certain basic evaluative commitments (a nicer way of saying shared moral prejudices) that are not fine-tuned to every instance or circumstance. By considering problematic examples, we expose the inconsistencies among persons in their collective application of those shared basic commitments, and we expose inconsistencies in our individual application of them. Then we fine-tune them to overcome these inconsistencies, so we will no longer have to encounter application problems that might force us to question our shared basic evaluative commitments.
In this respect, it seems to me to be the opposite of Socratic method: if Athens had just made its moral prejudices more fine grained and consistent, Socrates wouldn’t have been able to get any traction with his pernicious questioning.
The question of order is a question of figuring out which of our moral prejudices is the most basic, in terms of our attachment to it and the dependency of other prejudices upon it. We want to order examples in a way that preserves and prioritizes them, so that when we iron out contradictions we get to keep our favorite prejudices.
They have an earlier paper, too, which is really excellent. It shows not only that philosophers are susceptible to framing effects, but that the magnitude of those effects can be greater than for non-“experts”. (For the original foil, see Tim Williamson, “Philosophical Expertise and the Burden of Proof”, *Metaphilosophy* 42.3 (2011). For the reply, see Schwitzgebel and Cushman, “Expertise in Moral Reasoning”, *Mind & Language* 27.2 (2012).)
I think this again demonstrates that expertise does not translate into context-insensitivity. In fact, in other domains of expertise, such as sports or playing a musical instrument, experts show just as much context-sensitivity as non-experts, so why would philosophy be an exception? I also don’t think it’s irrational to be sensitive to context. The fact that philosophers are sensitive to context highlights the epistemic role that the method of cases plays (if things like context didn’t matter, the method of cases would be a far less effective tool for getting us to think about difficult philosophical concepts). I have a recent paper in which I challenge the assumption in some x-phi that expertise is a matter of context-insensitive responses, and suggest some other ways in which philosophers could be experts: http://www.tandfonline.com/doi/abs/10.1080/00048402.2014.967792#.VEAVC4t4q14
Helen, I am surprised to see you write of “the assumption in some x-phi that expertise is a matter of context-insensitive responses”. I don’t think that there is any such assumption, certainly not in the work I am familiar with. Rather, the concern is with a sensitivity to factors (contextual or otherwise) that we do not expect the truth in the relevant domain to be sensitive to. So, if one thinks that the order of cases actually is a determinant of what is or isn’t true about (say) the trolley cases, then it is not a problematic sensitivity. But if one thinks that the order of cases is irrelevant to what is or isn’t true about those cases, then that is a problematic sensitivity.
Compare to: we want baseball umpires who are calling balls and strikes to be _appropriately_ sensitive to various aspects of the situation in play, in particular, the location of the pitch and the height and stance of the batter — because that’s what is supposed to make a pitch a ball or a strike. But we would find them _inappropriately_ sensitive were their calls influenced by contextual factors like the color of a team’s uniforms, or whether they are looking at a 3-0 or 0-2 count. (This turns out to be a real bias! See Green & Daniels, “What Does it Take to Call a Strike?”)
I don’t know of anyone who articulates the anti-expertise arguments in terms of context-sensitivity simpliciter. If someone does — including any of my own past time-slices! — I would join you in disagreeing with them. But I think in general we have been pretty good about making explicit this issue of truth-tracking vs. non-truth-tracking factors.
Jonathan: I did not mean to suggest that x-philosophers don’t make such distinctions between appropriate and inappropriate context-sensitivity, but it is unclear to me whether (1) one can make a clear-cut distinction between what is appropriate and inappropriate. For instance, is the order of presentation of scenarios always epistemically irrelevant, or ought it to be? I did not address (1) in my paper and I’d need to think more about it, but it seems to me that humans often make moral decisions based on earlier experiences, so it is unsurprising that when we make mock-decisions about pushing or switching, this influences our later decisions of situations that are analogous. To the extent that it doesn’t seem weird or inappropriate in everyday situations to let past events matter, it seems that the order of presentation can be epistemically relevant for judging the permissibility of particular actions.
(2) Even if one can make such a clear-cut distinction between appropriate and inappropriate context-sensitivity, experts seem to continue to be influenced by inappropriate contextual factors (see the example you provide; there are others).
What the x-phi literature seems to demonstrate is that whatever philosophical expertise is, it isn’t context-insensitivity (broadly or narrowly construed). This is another motivation to operationalize expertise in philosophy differently, or to look for other respects in which philosophical expertise might be apparent.
These results are interesting, of course, but I’m curious about what exactly the view of philosophical method is that these (and other x-phi) results are supposed to refute. I am by no means an expert in the area, so this is a genuine question.
I can imagine a view that goes something like this. Intuitions about particular cases stand to philosophical theorizing roughly as empirical observations stand to scientific theorizing. Empirical observations give us data about particular instances of a phenomenon we care about, and then we seek a theory that explains these data and meets various other constraints. So, analogously, in philosophy intuitive judgments about particular cases would be taken as data, to be explained by subsequent theory.
Such a view really is in trouble if intuitive judgments about cases are fickle and unreliable. But is this a view that anyone has really held?
To me it looks rather like a straw man: it certainly does not correspond to what appeals to “the natural light” seem to do in rationalists like Descartes (or Plato, for that matter); and it doesn’t seem to capture contemporary “reflective equilibrium”-type views either (it makes no sense to revise your empirical observations in light of subsequent theory, and yet that’s part of what reflective equilibrium is supposed to be about). Am I missing something?