Josh Knobe holds appointments in Yale’s Department of Philosophy and its Cognitive Science program. He has an office in the Psychology Department and works with both philosophy and psychology students. In a recent interview, he remarks on the cultural differences between the disciplines of philosophy and psychology:
It has been fascinating to experience these two quite different cultures up close. The two disciplines differ in numerous ways; and I think that each of them has a lot to learn from the other. I’ll focus here on just one difference that strikes me as especially important.
Within philosophy, there is an almost absurd value placed on intelligence. Just imagine what might happen if a philosophy department were faced with a choice between (a) a job candidate who has consistently made valuable contributions in research and teaching and (b) a candidate who has not made any valuable contributions in either of these domains but who is universally believed to be extraordinarily smart. In such a case, I fear that many philosophy departments would actually choose the latter candidate.
In psychology, it is exactly the opposite. When people are trying to decide whether to hire a given candidate, the question is never, “How smart is she?” Instead, the question is always, “What has she actually discovered?” If you haven’t contributed anything of value, there is basically no chance at all that you will be hired just for having a high I.Q.
This cultural difference results in a quite radical difference in the atmosphere that one finds in graduate education. Philosophy students experience constant anxiety about whether they are smart enough. Psychology students also experience a lot of anxiety, but it is about a completely different topic. They have this ever-present sense that they absolutely must find some way to make a concrete contribution to the field.
This aspect of our disciplinary culture really serves to shape the character of our everyday interactions. In philosophy, there is so much concern about whether the thing that one is about to say is smart or not smart. As a result, philosophers often self-censor. They feel unable to actually engage with the philosophical question at hand because they are too busy thinking instead about what the things they say will reveal about their intellectual abilities. In psychology, the situation tends to be quite different. The most common concern is not whether the thing that one is about to say is smart or not smart but rather whether it will actually help to make progress on the project.
I think that this is one area in which the culture of philosophy could potentially do with some improvement, but to be honest, I don’t have any very good ideas about precisely how we might go about changing things. Of course, the simplest thing that we could do would be to call on individual philosophers to change their behavior. We could say, “When you are working on a philosophical question, don’t think about whether what you are saying is smart, or whether it is creative, or whether it shows a mastery of the existing literature. Just think about the philosophical question itself and try to make real progress on it.” Yet, it seems that there is something a bit glib about this response. Given the way things are set up at present, it would take extraordinary courage for an individual philosopher just to spontaneously stop caring about looking smart and start focusing only on making genuine intellectual progress. If we are going to make this sort of individual change possible, it will presumably require some larger change in the incentive structures that govern our discipline as a whole.
The whole interview, conducted by Shelley Tremain as part of her Dialogues on Disability series at the Discrimination and Disadvantage blog, is worth reading in full. It includes some details on Knobe’s work habits, as well as some data about experimental philosophy: it turns out that the stereotypical view of experimental philosophy as being about critiquing analytic philosophers’ deployment of intuitions is “a gross mischaracterization… [and that] only 1.1% of the studies aimed to show that appeals to intuition are in some way unreliable.”