Philosophical Conflicts of Interest


As the discussion of funding in philosophy and its disclosure continues, it might be worth considering some related questions, prompted by this tweet from John Christmann, a graduate student in philosophy at the University of Colorado, Boulder:

This is a good example of what could be called a “philosophical conflict of interest.”

Here’s an attempt to get clearer on the concept:

A philosopher may have a philosophical conflict of interest when the philosopher is (A) engaged in philosophical activity (B) in regard to some specific philosophical thesis or set of ideas; and, because of something having to do with this specific thesis or set of ideas, (C) that activity is reasonably construed as being in the interests of the philosopher (excluding the interest one has in coming to hold true beliefs).

(For some clarification of this conception of philosophical conflicts of interest and some reasons in favor of it, see Note 1 at the bottom of this post.)

If this is our conception of philosophical conflicts of interest, what, if anything, should we do about them? Declare them?

There seems to be a consensus in some parts of academia (and perhaps emerging in philosophy) that potential conflicts of interest that involve some prospects for financial or professional benefit should be declared. So, if an academic’s work is funded by a grant, it seems the balance of considerations speaks in favor of the academic disclosing in that work the name of the granting institution.

But what about the more personal conflicts of interest—the ones that arise from the personal beliefs of the researcher about the question, or from one’s political commitments and activities? Are they to be declared? And how close a connection does there need to be between the idea under discussion and the putative benefits to the philosopher?

Christmann’s example—should those arguing against anti-natalism declare that they are parents, if they are?—is a good one to work with. On the flip side, we could ask, should those arguing for anti-natalism declare that they are infertile, if they are?

Here are some others:

Should those arguing for free-will libertarianism declare that they are Christians, if they are?
(And should hard determinists disclose that they have criminal pasts, if they do?)
Should those arguing for certain forms of material egalitarianism declare that they volunteered for the Sanders campaign, if they did?
(And should those arguing against egalitarianism declare that they had a younger sibling they were always forced to share their stuff with including their favorite X-Wing Fighter toy which their sibling broke and never replaced, if they did?)

The more examples of this sort we come up with—and you’re welcome to contribute your own in the comments—the more ridiculous the idea of declaring these more personal conflicts of interest sounds, right? Why?

It’s not that these psychological and otherwise more personal interests are less common or less motivating than financial interests. Rather, at least part of the explanation is that the source of such potential conflicts may concern very intimate details about one’s personal life, and considerations of privacy (and practicality) speak against norms demanding disclosure of them. We tend to believe that we shouldn’t have to divulge certain details about our personal lives to engage in our professional activities.

Some might object to calls for disclosure on different grounds: philosophical work should be judged strictly on the basis of the soundness of its arguments, assessment of which does not involve knowing who produced the arguments, let alone which biases or inducements they might have had. (Note that this speaks as much against the need for financial disclosures as it does against personal ones.)

Another possible ground for objecting to calls for disclosure is that they are insulting or disrespectful; they psychologize philosophers, treating their work as the effect of psychological causes distinct from the reasons they themselves adduce for their views.

I feel the pull of these three objections—the first, in particular. However, I do think we need to be careful about the latter two.

Yes, let’s focus primarily on the arguments themselves. But let’s also acknowledge that, as consumers and critics of philosophy, we are not omniscient. We are not ideal assessors of the soundness of arguments. Rather, we routinely rely on various cues “external” to arguments to help us assess whether we should accept them (or even pay attention to them in the first place).

Yes, let’s resist psychologizing each other, but let’s not ignore psychological realities. We should not pretend that philosophers are special people immune to various irrelevant psychological forces (we may not even be better than the average person in this regard).

So what should we do? I don’t think a call for personal disclosures is practicable, nor, on balance, desirable. But that doesn’t leave us with nothing. As authors, as usual, we should be on guard against our own biases. Here’s one possibly helpful heuristic: if it would be good for you if some thesis were true, your default position should be that it’s false.

As readers, it’s trickier. Noticing philosophical conflicts of interest, even when not disclosed, may help point us towards gaps, blind spots, or other problems in the philosophical work we’re reading. But intellectual responsibility and interpersonal respect should prevent us from overly cynical readings of each other’s work, or from rejecting a thesis solely on the grounds that the person arguing for it may have a conflict of interest.

Discussion welcome, as usual.


NOTE 1:

In the post I suggest the following way of understanding a philosophical conflict of interest:

A philosopher may have a philosophical conflict of interest when the philosopher is (A) engaged in philosophical activity (B) in regard to some specific philosophical thesis or set of ideas; and, because of something having to do with this specific thesis or set of ideas, (C) that activity is reasonably construed as being in the interests of the philosopher (excluding the interest one has in coming to hold true beliefs).

Below is some explanation of what I mean by this and some considerations in its favor.

For (A), I have in mind things like arguing for (or against) some thesis, or raising (or rebutting) an objection to an argument, or framing a philosophical problem or question a certain way, or treating evidence a certain way, etc.

I include (B) so that the concept doesn’t yield the judgment that philosophers have a conflict of interest whenever they’re doing any philosophy because, since it’s good to do philosophy, it’s always (at least in one way) in philosophers’ interests to do philosophy. Same goes for substituting for “it’s good to do philosophy” something more practical, like, “it’s good for you to do your job.” (B) also is meant to exclude the judgment that philosophers have a conflict of interest in, say, arguing for an idea they believe because it is good for people to have good arguments for their ideas. I take it that the worry behind the idea of a philosophical conflict of interest is about possible motivated reasoning (conscious or not) for or against a specific philosophical thesis. A philosopher’s desire that some thesis be true, or the benefits that would accrue to the philosopher if some thesis were true, are typically irrelevant to whether the thesis is true, so we should be on the lookout for the undue influence of desire and benefit on the activities philosophers engage in to determine its truth.

Now let’s turn to (C). By “interests of the philosopher” I have in mind various ways in which a person doing philosophy may benefit from doing work in regard to some specific thesis.

One category of benefit is psychological. It may turn out that my life makes more sense to me, or that I am happier, or feel more justified in my choices, or less disturbed (say, by cognitive dissonance), if some thesis is true. I take the example Christmann gives in his tweet about the parent arguing against anti-natalism to be an example of this. Arguing for political theses to which one is antecedently committed may provide the same or related benefits. There is also the more familiar material benefit. Perhaps by defending some position I increase my chance of winning a prize with a large monetary award for which I would otherwise be ineligible.

But not all benefits are suspect. Suppose it is good for me to have true beliefs. Suppose, further, that some specific thesis, T, is true. And finally, suppose that by arguing for T, I would come to believe T. Arguing for T, then, would be in my interests. But we wouldn’t want to say that this itself represents a philosophical conflict of interest.

So I think we would have to understand the relevant “interests” in (C) to be those that are distinct from the interest one has in coming to hold true beliefs. And this specification doesn’t seem arbitrary, as the interest one has in coming to hold true beliefs is sufficiently close to what we take the proper aim of philosophical activity to be.

Further, I refer to the activity being “reasonably construed” as being in the interests of the philosopher. This is because the philosophers themselves need not be consciously aware of benefits for them to contribute to a conflict of interest.

Taken together, does this capture the idea? 

35 Comments
Alfred MacDonald
6 years ago

people who are physically unattractive and sexually timid should declare conflict of interest in arguing against the concept of objectification

Ben Davies
Reply to  Alfred MacDonald
6 years ago

So weird, I was just about to say the same about those who are in favour of objectification.

Max DuBoff
6 years ago

The first example, given by Christmann, is quite good: our hypothetical philosopher is revealing that he has specifically acted contrary to his own argument. This isn’t a conflict of interest so much as an issue of integrity.

But your subsequent examples strike me as deeply flawed: anti-natalism as a philosophical position should in no way be tied to infertility (disclosure: I’m an anti-natalist and, for all I know, fertile). And besides, one can be a card-carrying anti-natalist and also a parent via adoption.

Further, Christians need not be libertarians to avoid hypocrisy; just look at Calvinism, or Leibniz, or many other examples of Christian hard or soft determinists.

I could go on and explain why the other examples are flawed, but the root of the problem seems to be that many of the potential “conflicts” you raise are actually only notable if they directly impact the position argued for or if the philosopher holds hypocritical views. Thus, a hard determinist should reveal that she has committed a crime if that fact is part of her argument for hard determinism (and if she is comfortable doing so). For the sake of honesty, a Christian libertarian should mention his convictions if they are part of his argument for libertarianism (e.g. if he decides that soft determinism could potentially be true but that, because of the truth of certain doctrine, there is free will and only libertarian free will could suffice).

But the hypothetical Sanders volunteer who argues for material egalitarianism has no conflict of interest; she actually has a rather consistent set of ideas! We should place value on living our lives according to our respective philosophies, and indeed we should give others the benefit of the doubt and assume that they follow their own beliefs.

TL;DR: Akrasia is the only conflict of interest here.

Max DuBoff
Reply to  Justin Weinberg
6 years ago

Thank you for the detailed reply, Justin. I love the site, but this is my first time commenting, and it’s nice to have the opportunity for some constructive discussion.

In the anti-natalism case, then, I’d say (as I maintained above for the hard determinist case) that the anti-natalist should disclose his fertility issue if it factors into his actual argument (and if he’s comfortable sharing it; I do identify with the privacy concern you mention in the article). Such a disclosure seems like it’d be important for academic integrity and for allowing others to evaluate the argument on its merits. In my initial reply, however, I assumed that the anti-natalist’s argument does not itself rely on his infertility; if his infertility prompts him to come up with a sound argument that otherwise has nothing to do with his situation, I would not consider the case a conflict of interest. I don’t deny your point that we can and do use external cues, but this sort of personal experience doesn’t seem particularly relevant when evaluating the argument.

With regard to your broader point about my assumptions, I think that, for a conflict of interest to exist, there needs to be the presumption that the interest will interfere with the thesis. In most cases in philosophy where only (or mainly) ideas are at stake, I don’t think that criterion is met. Grant money complicates matters, though, in large part because people think that money from a group like Templeton (as the recent discussion on here illustrated) at least might bias a philosopher (as money from the sugar lobby certainly might bias a biologist). Ironically, however, our human propensity for akrasia (and people’s willingness to argue for one idea and then live differently) actually seems to reduce our suspicion that others might have conflicts of interest.

Ultimately, as you say, we’re not omniscient, but I’m not sure we have another good criterion besides arguments themselves. In most areas of philosophy there’s also not too much of an issue with manipulating data, a common problem in scientific situations with lots of grant money on the line. And as a general principle, without data to the contrary, I’m inclined to believe that philosophers are intellectually honest and generally put forth legitimate arguments regardless of their motivations.

Peter
Reply to  Max DuBoff
6 years ago

There has been a misunderstanding in this and several other comments. The example is of someone arguing *against* anti-natalism (so, *in favor* of the permissibility of having children).

Rick
6 years ago

People preferring not to be murdered should declare conflicts of interest if they endorse moral realism.
People who raise the demandingness objection to consequentialism should declare conflicts of interest if they’re hoping to stay in academic philosophy.
People who argue that metaphysical talk is meaningless should declare conflicts of interest if they hated Sunday school.

And really, I’m not sure the proposed heuristic is a very useful one. One reason is that the “good for” metric seems hard to figure out in a lot of ways. The other is that most philosophical arguments ultimately boil down to some kind of intuitive conflict (does Mary learn something when she sees red, or not? does suffering matter morally, or not?), and it seems that our preexisting views—while they should be examined—are a reasonable source of intuitions.

Heath White
6 years ago

Surely the most obvious “conflict of interest” is that any philosopher arguing for P has a professional interest in P becoming widely accepted or at least widely discussed, especially on the basis of his or her argument for it.

In general, there is too little at stake in professional philosophy publication to worry about declaring conflicts of interest. If you work for a think tank, with obvious immediate policy implications or large amounts of money at stake, things are different.

agradstudent
6 years ago

I think there are some very tangible “conflicts of interest” when it comes down to material benefits. Examples:

– “I argue against affirmative action [conflict: I’m an untenured white male]”

– “I argue for affirmative action [conflict: I’m an untenured woman of color]”.

– “I argue for the importance of the philosophical question X [conflict: my entire career is staked on X, all my publications and expertise are exploring various shades of X]”. In The Other Blog, just a few days ago, there was a discussion about such X (e.g. X=Grounding, X=Formal Epistemology).

Should such conflicts be *declared*, though? I don’t know.

Ryan Muldoon
6 years ago

This is, on balance, a pretty bad idea. In part because it suggests that we should understand the “final” outputs of our work as a single paper, rather than the broader discussion across multiple pieces. Also, if you’ve got some particular reason for how you came to your view (and I’m sure that you do), you may or may not have accurately introspected about it. No doubt if your particular history biases you in a direction of certain arguments, it’s also going to do some work in how you understand your own bias. Such reporting would be a massively noisy signal.

Instead, it sounds like a much better idea to work to ensure that the broader philosophical community is composed of people with lots of different backgrounds and perspectives. What is hard to do individually is much easier to do socially. If one’s argument relies on a wonky set of premises, someone differently situated can easily point that out in a reply. If they look good, that person can build on that argument.

More worrying to me is that there’s an assumption in these recent discussions that there’s some neutral position that we should aim to adopt, and it’s only insidious sources of bias that prevent us from getting there. This is just false. There are plenty of good reasons for why people will understand (philosophical) problems in different ways, want to use different tools, and come to different conclusions. For a more detailed argument of this sort, see “Disagreement behind the veil of ignorance” (https://link.springer.com/article/10.1007/s11098-013-0225-4) or for a much longer treatment, _Social Contract Theory for a Diverse World: Beyond Tolerance_ (https://www.routledge.com/Social-Contract-Theory-for-a-Diverse-World-Beyond-Tolerance/Muldoon/p/book/9781138681361).

Joshua Reagan
6 years ago

The Christian/libertarian example isn’t apt. If one takes libertarianism to be part of Christian doctrine, then it would simply be the case that Christianity entails libertarianism. Surely when arguing for X one need not declare other beliefs that bear on X as a conflict of interest. That’s like a logician saying, “Given: (A -> (B -> B)) and A, I accept (B -> B). Full disclosure though, I also accept (B -> B) because it’s a logical truth.”

Offering an argument for X shouldn’t be taken to exclude the possibility that one has other epistemically relevant reasons for accepting X.

Joshua Reagan
Reply to  Justin Weinberg
6 years ago

You’ve missed the point of my response. My point doesn’t just hold of some X that entails some Y. It holds just as much of some X that provides evidence of some Y. It is important to distinguish epistemically significant reasons for accepting Y from epistemically irrelevant reasons.

Unknown Philosopher
6 years ago

As Max DuBoff pointed out in the second comment, the original example involves no conflict of interest–not even an apparent one! It’s a simpler issue of integrity, much like Peter Singer’s spending gobs of money to provide care to his aging mother.

Here’s what I take to be a salient example: philosophers who engage in forms of public philosophy should disclose that they do when they argue that engaging in public philosophy should be rewarded. No hidden commentary on anyone in particular here–in fact, my general sense is in the vast majority of the relevant cases, the individuals do so declare.

--bill
6 years ago

A good book on how funding (especially defense funding) has affected philosophy is “How the Cold War Transformed Philosophy of Science: To the Icy Slopes of Logic” by George A. Reisch. This might help ground some of these discussions.

Maja Sidzinska
6 years ago

One complication is that people, philosophers included, are susceptible to becoming attached to/attracted to the arguments or ideas they have put forth on the very basis of having put them forth. I think in cog-sci this is called “post-purchase rationalization” and it is a form of cognitive bias. If this is true, or to the extent it is true in any given case, then we may all have conflicts of interest regarding any argument or idea we put forth.

akreider
6 years ago

I don’t fully understand this worry:

“Yes, let’s focus primarily on the arguments themselves. But let’s also acknowledge that, as consumers and critics of philosophy, we are not omniscient. We are not ideal assessors of the soundness of arguments. Rather, we routinely rely on various cues “external” to arguments to help us assess whether we should accept them (or even pay attention to them in the first place).”

Of course, we aren’t omniscient identifiers of soundness (or other good-making features of arguments). But the only way I could see for an external cue to be useful in the way you describe is if we are to accept (or adjust the weight we give to) premises or conclusions based on the authority of the author. This seems almost never the case. Perhaps it might matter with regard to “this is my experience” sorts of claims. But these are mostly uninteresting philosophically. At the least, we’d want to know how representative such experiences are. An interesting claim should itself be argued for, be defended by citation, or be identified as a brute intuition against which the reader should test her own intuitions.

While it’s also true that we have limited time and energy with which to determine which arguments are likely worth examining, surely this isn’t the case for, say, journal editors. They can look at the quality of the argument, and in fact are tasked with doing so.

The downside here seems much greater – increasing the temptation of circumstantial ad hominem.

nicholesuomi
6 years ago

The first (humorous) one that came to mind as I scrolled down: “Potential conflict of interest: I argue for moral error theory, and I’m a jerk.”

I have heard (serious, philosophical) arguments made that ethicists skew realist because it’s better for their continued employment as ethicists. If ethics talk turns out to be hooey, then that’s rather bad for someone whose project is figuring out what’s good and evil. So, the problems that plague every positive moral theory must not be fatal. (Of course, I’m not saying this argument defeats moral realism and similar positions that are good for ethicists. If those positions are wrong, it does appear to be an explanation of their prominence anyway.)

Perhaps that line of reasoning works in aggregate elsewhere. E.g. “While the majority consensus on X is P, I assert ~P and explain the popularity of P by the majority’s commitment to Q.” Where Q might be some other belief (e.g. Christians who take libertarianism to be critical to the escape from the Problem of Evil, P=libertarianism, Q=Christianity as such) or some other interest. The ethicist argument in the previous paragraph has some commonalities with research done with financial backing from an institution with an agenda. “If I don’t say something right-wing, I don’t get paid” is awfully similar to “If I don’t say something in support of morality, I don’t get paid.” (I don’t mean to pick on ethicists, either. Metaphysics has had its non-cognitivists, but the overwhelming view among metaphysicians appears to be that metaphysical claims are meaningful.)

Whether these are useful at the individual level is much less clear. But, the presence of any individual person or paper also doesn’t seem to have so much weight. That some high proportion of some group agrees on something can have some weight on its own. That one or two people think something can be dealt with entirely at the argument level.

David Wallace
6 years ago

I have a slightly different (though compatible) thesis to Justin as to why financial disclosures are a special case.

Yes, probably people are being pulled in various directions by extraneous factors: popularity of a view, identity of their supervisor, external factors like money or children. But we know that – not necessarily in every specific case, but in aggregate. The case for transparency in funding (to pick up themes from the other discussion) isn’t really about its (hypothetically) pernicious effect on a given person; it’s about its (again, hypothetically) systemic and hidden influences on the debate in aggregate. If a given person who advocates a theistic solution to fine-tuning is funded by the Templeton foundation, I don’t care. If everyone working on fine-tuning has Templeton funding, or if most Templeton-funded people think fine-tuning has a theistic explanation and most non-Templeton-funded people think otherwise, that’s worth knowing.

(Actually I think Templeton’s funding in this particular area is basically beneficial. But if so, transparency (which I think basically does exist here) will demonstrate it.)

Robert Gressis
6 years ago

I’d be interested in seeing crossover between philosophical positions and the big-five personality traits.

Joe Rachiele
6 years ago

On a related issue, I sometimes wonder whether social scientists researching a political question should declare their political leanings.

It seems like it could be helpful to have this info when reading a meta-analysis. Suppose I read a literature review that gives different authors’ results for how gun ownership is associated with homicides. Suppose I then find out that 30 of the authors lean left and one leans right. I would then guess that the evidence in this lit review overestimates the effect of gun ownership on homicides.

Or is that too naive?

Joshua Reagan
Reply to  Joe Rachiele
6 years ago

“social scientists researching a political question should declare their political leanings”

This would be almost completely superfluous. Survey after survey shows that the social sciences are utterly dominated by those with left-wing views. One study, picked almost at random, shows registered Democrats outnumbering Republicans at a ratio of 17.4:1 in top US psychology departments. (https://www.insidehighered.com/news/2016/10/03/voter-registration-data-show-democrats-outnumber-republicans-among-social-scientists) This isn’t a perfect measure, because there are independents on both the left and right who don’t align with either major party, but there are so many studies on this that the basic trend is clear.

Joe Rachiele
Reply to  Joshua Reagan
6 years ago

I don’t see how knowing a *discipline’s* ratio of Dems to Republicans makes knowing the political leanings of individual authors almost superfluous. If I discover John Lott was a Republican before his research on gun ownership and violence, I’d tentatively suggest I should become more confident that his research is biased rightward.

I also can’t infer the ratios of Dems to Republicans in well-defined psych literatures just from knowing this ratio for psychology as a whole. There could be a lot of heterogeneity between different psych literatures, with some much higher than 17.4 to 1 and some much lower. How information about the political leanings of authors affects how we interpret a meta-analysis may vary depending on how skewed this ratio is.

Joshua Reagan
Reply to  Joe Rachiele
6 years ago

Your response seems to boil down to: “Defeasible evidence isn’t evidence.” Obviously finer-grained analysis can reveal exceptions. Who would argue otherwise? But let’s not pretend not to know that the overwhelming majority of social science research is done by people with left-wing views.

David Wallace
Reply to  Joshua Reagan
6 years ago

But Joe Rachiele’s objection was to your statement that it would be “almost superfluous” to get the data on the political leanings of a given social-science paper, not to the weaker claim that absent that data, the default assumption is that they’re probably left-leaning.

Joshua Reagan
Reply to  David Wallace
6 years ago

If I’d simply said “superfluous” his objection would be apt. The “almost” is there because the judgment I’m talking about is defeasible. The strong language is there because 17.4:1 is absurdly lopsided. I fail to see any inaccuracy in my initial comment.

Aspasia
6 years ago

Solution! Here’s a covering caveat: The author of this book/paper/essay likely believes its conclusion. Therefore, this paper is likely to give reasons in favor of that conclusion. Assess the evidence with this fact in mind? I don’t get how this is even an issue. Of course, the papers that are written are most likely to be written by people who believe the conclusions (though sadly that’s not always true in philosophy), and yes, they will believe those conclusions for reasons. How is this remotely close to being a conflict of interest issue? Having beliefs is having a conflict of interest?

X
6 years ago

I think about this question when I debate animal rights with philosophers who eat meat. My sense is, yes, the conflict of interest makes the discourse motivated.

Alastair Norcross
Reply to  X
6 years ago

X, I’m surprised it took so long for this example to come up. When I read the post (a few minutes ago), that was the first example that came to mind. I have met philosophers who have cheerfully admitted that their liking for meat has driven their assessment of arguments on the other side, their tendency to find some pro-meat arguments plausible, and even their unwillingness to expose themselves either to arguments against meat-eating or to empirical evidence about the treatment of animals. I’m never sure whether I should think more highly of these honest philosophers than of their more numerous colleagues, who have somehow convinced themselves that the case for meat-eating is philosophically respectable, without recognizing the massive dose of self-interest infecting that judgment. There is, of course, the possibility for the same thing on the other side, but I think it’s far less likely. Most philosophers I know who are vegan or vegetarian on ethical grounds grew up eating and enjoying meat, and only reluctantly accepted the arguments against it. I spent many years desperately trying to defend the consumption of (an ever-diminishing range of) animal products on ethical grounds.

Alan White
Reply to  Alastair Norcross
6 years ago

I also first thought of carnivore-ethics as an example as I read this, though a complicated one. I have long taught Singer-style arguments very forcefully as a reason to advocate for (at least) treating neural-complex non-humans as within the sphere of morality. These arguments I think are cogent, and have influenced my own omnivorous trends to avoid meats in cases that I think were taken from sources of utterly exploitative cruelty. But I cannot embrace full removal from eating meat, and treat it as a defect in my character. I also thus use this as an opportunity to teach students about weakness-of-will, and, even stronger, the possibility of teaching principles even though the one teaching is an admitted hypocrite. I always admit to this in this case, and offer an example of a dear friend and colleague who was convinced early in her career by Singer, reformed to veganism as a consequence, and to her dying day advocated for animals. There are after all many in this world who profess hard-lived beliefs but who ultimately do not live up to them–but at least I can acknowledge my own hypocrisy, and point to an example better to follow than me.

Nicolas Delon
Reply to  Alastair Norcross
6 years ago

Needless to say I agree with Alastair, but I can hear people objecting:

“Wait a minute… these vegan philosophers act like they’re so unbiased and impartial, but some of them are also *advocates*! They go to protests and stuff, hand out vegan leaflets, contribute to animal charities… They’re motivated by their quasi-religious beliefs to argue for their diet. It’s the new orthodoxy.”

Anyone who has taught animal ethics knows there will at least be some suspicion that they might be trying to do advocacy rather than proper philosophical instruction. Of course the objector is forgetting that many such advocates are motivated by arguments or facts or stories (or a combination) in the first place, and that they’re willing and equipped to counter the meat-eating arguments. As for teaching, they’re simply overlooking the fact that we also review arguments by famous omnivores, whom I’ll refrain from naming here, and most students, by themselves, come to the conclusion that they’re terrible arguments.

I’ve had to include my links to animal organizations in a disclosure of interests before contributing to a report on animal consciousness — which is totally fair. I have yet to see the backlash, but when it comes, rest assured that my concern for animals will be taken as evidence of my bias. This is just anecdotal evidence, but I’ve heard similar things so many times (at least in France) that I believe it’s probably quite common.

So, even if Alastair is probably right that the possibility of conflict of interest is less likely on the animal-friendly side, I wouldn’t be surprised if we were disproportionately more likely to be accused of such conflict than those on the meat-eating side.

krell_154
6 years ago

I think this would be an awful idea. Claims in philosophical works should be assessed on the basis of the arguments presented in their favor, not the identity of the author. The whole broader notion of somehow involving one’s personality in one’s academic work threatens to turn that academic work into activism.