“Most scholarship is… not going to live forever. Is it therefore not worth doing?”
Writer B.D. McClay was prompted to ask the question in the above headline by remarks from Jason Stanley (Yale), who on Twitter said, “I would regard myself as an abject failure if people are still not reading my philosophical work in 200 years. I have zero intention of being just another Ivy League professor whose work lasts as long as they are alive.”
Stanley is not the only philosopher who has as an aim and standard for their work that it have an influence well into the distant future. (Stanley might have talked about his work being read in 200 years, but he probably didn’t mean just 200 years—presumably he’d be upset if his work lasted 200 years but then was completely forgotten a day after that.) I recall one established philosopher telling a group of graduate students at a workshop, “I am not writing for today; I am writing for posterity,” and others in various conversations over the years taking as their goal to have their writings talked about through the ages.

The desire to leave a mark on posterity, to seek immortality through one’s works, can be a powerful one. Its danger is the theme of fiction from ancient myths up through today’s literature. Here’s a character summing it up in a rant to his son in Steve Toltz’s marvelous 2008 novel, A Fraction of the Whole:
“Humans are unique in this world in that, as opposed to all other animals, they have developed a consciousness so advanced that it has one awful byproduct: they are the only creatures aware of their own mortality. This truth is so terrifying that from a very early age humans bury it deep in their unconscious, and this has turned people into red-blooded machines, fleshy factories that manufacture meaning. The meaning they feel becomes channeled into their immortality projects—such as their children, or their gods, or their artistic works, or their businesses, or their nations—that they believe will outlive them… The irony of their immortality projects is that while they have been designed by the unconscious to fool the person into a sense of specialness and into a bid for everlasting life, the manner in which they fret about their immortality projects is the very thing that kills them… This is my warning to you… So what do you think?”
“I have no idea what you just said.”
One needn’t understand the lethality of the desire for immortality literally; it may just be that pursuing its satisfaction involves sacrificing the actual goods of your life for the merely possible goods of your work’s afterlife.
Whether or not, as the storytellers insist, the quest for immortality is congenitally ironic, there remains the question: is “being important in the distant future” a standard to which we should hold our work, our projects, or ourselves?
I don’t think it is. On this, I find myself largely in agreement with Brooke Alan Trisel, who takes up the matter in his thoughtful “Human Extinction and the Value of Our Efforts” (2004). Trisel writes:
The problem in allowing an unrealizable desire, such as immortality, to become part of a standard for judging whether our efforts are worthwhile or important is that it predetermines that we will fail to achieve the standard. Furthermore, it can lead us to lose sight of or discount all of the other things that matter to us besides fulfilling this one desire.
Since there is no way to satisfy the desire for quasi-immortality, one may fall into a state of despair, as did Tolstoy. Furthermore, because the desire may be concealed in the standard, the person may be unable to pinpoint the source of the despair and, consequently, may be unable to figure out how to overcome it. The person may believe that he or she has a new perspective on life that suddenly revealed that human endeavors are and have always been futile, when, in fact, the only thing that changed was that this person increased the standard that he or she had previously used to judge significance. Therefore, it is crucial to recognize when an unrealizable desire, such as the desire to have our works appreciated forever, has infected our standards and, when it has done so, to purge it from these standards. The original standard that we used to judge significance was likely realistic and inspiring before it became corrupted with the desire to achieve quasi-immortality.
Suppose that there is a god who created humanity and who told us that our efforts would be “significant” only if we create works that will last forever. Suppose also that humanity will not last forever and that we live in a universe that will not likely last forever. Thus, there is a clear, “objective” standard for judging whether our efforts are significant. If this were the standard handed down to us by this god, would we try to achieve the standard, or would we reject, as I believe, the standard on grounds that it is unreasonable, assuming that we were not compelled by this god to try to achieve the standard? Ironically, we are free to choose a reasonable standard to judge what is significant, yet some people unwittingly adopt, or impose upon themselves, a standard that they would reject if it had been imposed upon them by an external entity.
Though Trisel writes about the impossibility of immortality, the points are almost as compelling when read as being about the unlikelihood of being thought important in the distant future. I recommend the whole essay, an ungated version of which is available here.
Instead of a direct longing for immortality (or distant impact), one might think that the standard we should hold our writing to is not that it be read through the ages, but rather that it have some other qualities that, as it turns out, make it more likely to be read through the ages. One might hope that one’s work is wise, for instance, and think that if it is wise, it will be discussed for generations to come. If that’s the case, it seems it would be better to focus on and articulate which qualities we have in mind, rather than on our impact on posterity. One reason for this is that there are less desirable qualities that might contribute to a work’s longevity, such as its being maddeningly unclear, or especially evil. Another is that we may wish to avoid holding ourselves to standards the meeting of which is largely out of our control; I can’t dictate what future generations do, but maybe I can make what I do good in some way—and isn’t that enough?
There is a constellation of issues here about which I’m sure there’s a variety of opinion. Discussion welcome.
(Note: this is not a discussion about Jason Stanley or his work. Comments about him will be deleted.)
I think there’s something a bit like a regress problem for the view that something is only valuable if it lasts forever (this goes both for worries about literal mortality, and about the mortality of our work). Basically, if lasting n days wouldn’t be valuable, and lasting n + 1 days wouldn’t be valuable, and lasting n + 2 days wouldn’t be valuable…isn’t it mysterious if value appears for the first time when n is infinite? What grounds all that value?
In general, we like to think that properties of (countable) infinities should match properties of the limits of finite n as n approaches infinity. So the value of lasting forever should be something like the value on the first day, plus the value on the second, plus the value on the third, etc. But then lasting forever can only be valuable if lasting finite amounts of time is valuable too. That’s certainly too simple, but I still think it gives the flavor of how the view that something is only valuable if it lasts forever is really pretty weird.
Whether it’s intellectual work, or just life, I think the natural view is that at least a good deal of the value associated with them involves stuff that can be realized over a short interval of time. If somebody reads something you wrote and achieves some genuine insight or understanding as a result, that’s valuable, even if that only happens finitely many times. Likewise for other experiences, e.g., playing with your kids, that can only happen finitely many times in a finite life.
One of the ways in which Camus characterizes the absurd is the mismatch between the foolish desire for eternity and the fact that we are not eternal. It is like a madman running at an army of tanks with a kitchen knife in his hand.
Beauvoir’s first philosophical essay Pyrrhus and Cineas (1944) is quite amazing on this question. She argues that if we want to have meaningful projects and lives, we must have free others facing us, to take up our projects as points of departure for their own projects. But this is always a wildly risky endeavor, and true failure is possible. She writes, “I build a house for the men of tomorrow; perhaps they will shelter themselves there, but it could also get in the way of their future constructions. Maybe they will put up with it; maybe they will demolish it; maybe they will live in it, and it will collapse upon them.” (PC, 109).
She captures our situation as utterly vulnerable to the meaning others make of us (this is the heart of my own work on existential vulnerability). Even this kind of meaning or success, when achieved, is fragile; the conclusion of the essay includes this image: “Our freedoms support each other like the stones in an arch, but in an arch that no pillars support.” Beauvoir describes our deep need for others and then uses it as the basis for our ethical responsibilities: we have to work for others’ freedoms to ensure people are there to take us up and “conserve” us, as she puts it, or “clothe” us with meaning. She also tells us that the projects we undertake in part determine the audiences we need. Some of our projects will be successful if they reach out only to our domestic spheres, some call for a particular professional public, some appeal to future generations.
Perhaps the more ambitious the project, the more likely failure will come (200 years is quite a stretch for most of us). But, Beauvoir emphasizes, we must act first and foremost. As she puts it: “In order for men to be able to give me a place in the world, I must first make a world spring up around me where men have their place; I must love, want, and do.” Crucially, she emphasizes that the success of my project depends on my being able to recognize others’ freedom and the importance of their judgments.
And of course, Beauvoir leaves it up to us whether we choose to make the potential unknown people of the future our criterion for success, or whether we decide the real companions of here and now are valuable peers. She even contemplates the difficulties of others taking us up in ways that are contrary to our aims—a concern that might haunt us as much as or more than the hope of future acclaim.
I think that I do my best work, such as it is, when I find a question that I’m interested in and that I think I have something to say about. I don’t think that worrying about whether what I do will seem important down through the ages would be conducive to my making the greatest contribution that I’m able to make. Sometimes I might have a couple of potential projects where it makes sense to choose between them based on which would be of interest to the most other people. But if I were aiming at being read centuries from now I’d probably try to take on topics that I’m not really well equipped to handle and write a lot of nonsense about them. Does that mean that the work I produce isn’t really worth writing? A strong case could be made. But while it’s corny, I think of images like a slow dripping of water on a rock that eventually dramatically splits. Each drop of water may not have contributed much, but that doesn’t mean that collectively they weren’t important in preparing the way for something significant.
So two points: 1. It’s interesting how anyone who follows this standard simply won’t do certain kinds of philosophy. A little reflection shows that it is incredibly unlikely that works in applied ethics or any branch of the philosophy of science will survive this standard no matter how good, since the day-to-day practical problems of people in 2222 almost certainly won’t be ours. And I very much doubt that any branch of science will look like our science or that our philosophy of science will be of much interest to them. (Who in the world reads Kant’s “Metaphysical Foundations of Natural Science” or Aristotle’s “Meteorology,” after all? I’m gonna guess plenty of scholars of those guys haven’t.) It’s also unlikely, though a teeny bit less so, that works in political philosophy will meet this standard. But of course that’s not to say those works don’t have value. They might even have more value than something people are still reading in 200 years. After all, we’re still reading Aristotle’s defense of natural slavery more than 2000 years later, but that hardly proves its value. 2. As an experiment I started looking at old issues of Phil Review going back in ten-year increments and seeing if I was familiar with the articles and names, starting at 50 years. 1972 holds up stunningly well in this regard; it has justly famous essays by Foot and Adams. 1962 is less solid, and by the time you hit ’52 I recognize only one name (Rescher), ’42 none, and ’32 one (Hook). It’s also worth noting that in 1932 Phil Review seems to be all Hegel all the time, which is a powerful testament against the 200-year standard, since it shows how much the philosophical agenda is likely to change over time. Moreover, I’ve published on Hegel, and I only recognized Hook’s name. It’s not just that Hegel is marginal in 2022 while he was central in 1932, but that 1932 work on Hegel hasn’t even held up enough to be read by people working on him.
Looking through the entire year of 1922 I saw only two names that I recognized: Lovejoy and Dewey. Of those two, only Dewey is still seriously read by philosophers, and he isn’t central to our canon (which I think is unfortunate, but that’s an issue for another time). I couldn’t find a single name I was familiar with in 1923, much less anyone I’d actually read. All of which is to say that if you have the Stanley standard, then the past strongly suggests that only a good bit of self-deception keeps you in the philosophy game. You might be read in 200 years, but then again I might win the lottery. I still max out my retirement match and pay off my credit card every month, though.
Your suspicions about the drift of the discipline in the middle of the 20th century, exemplified in the topical shift that Phil Review underwent at that time, are supported by Joel Katzav and Krist Vaesen’s “On the Emergence of American Analytic Philosophy”. From the abstract:
Katzav later broadened the scope to include JPhil and Mind, focusing on the years 1925-1969. A perhaps sobering assessment for those with “zero intention of being just another Ivy League professor whose work lasts as long as they are alive,” although it may be that Ivy League professors are less bothered by the way these things work.
Joel Katzav, “Analytic philosophy, 1925–69: emergence, management and nature” British Journal for the History of Philosophy 26 (6):1197-1221 (2018)
Joel Katzav & Krist Vaesen, “On the emergence of American analytic philosophy” British Journal for the History of Philosophy 25 (4):772-798 (2017)
Thanks for these! I remembered Justin mentioning them, I think, but I’d forgotten who wrote them. I wonder how much professionalization and the desire for a method also played causal roles, though. Louis Menand makes a good case in his “The Free World” that professionalization in the academy was bad for pluralism in a lot of other fields, like literary criticism, since part of professionalization is having a method. He doesn’t focus too much on analytic philosophy, but the story he tells of what happened in other fields seems like it’s also a very plausible account of how philosophy got where it was in the late ’60s and early ’70s.
Hi Sam. I’m inclined to think professionalism and method are in play, and for the reasons you indicate (on behalf of Menand). But I’m not sure it’s very important, and part of the issue is deciding just what counts as professional philosophy and its method. It’s here that the identification of things like citation networks and journal capture can do some work. Does, say, W.V.O. Quine’s “Two Dogmas of Empiricism” represent a more professional or methodologically sound piece of writing than E.G. Spaulding’s “The Realistic Aspects of Royce’s Logic”, two essays that appeared in Phil Review 35 years apart? After all, they each are written in the interest of addressing the perceived shortcomings of a then-dominant school in Anglo-American philosophy: Quine by way of criticism of the commitments he thinks motivate much of the post-logical-positivist work in the philosophies of language and mind, and the latter by defending a version of logical pluralism on the basis of a critique of Royce’s logical monism.
These are of course only two essays, chosen almost at random, but given the shape of American philosophy in the 1930s and ’40s, I suspect that the methodological concerns that accompany the drive for professionalism don’t play much role in explaining the phenomena that Katzav and Vaesen document. But that’s just a suspicion, and I’m open to learning otherwise.
One clarification: Menand doesn’t seem to mean a sound method but just that one has a consistent method. He explains a lot of the attraction of structuralism in that it represented an explicit method. He doesn’t seem to think structuralism was a sound method in any sense of that term. I certainly don’t mean a sound method either, and I’m not sure that even a sound method necessarily speaks in favor of philosophical work. Gilbert Ryle is to my mind a third-rate philosopher at best; his only good idea, the knowing-how/knowing-that distinction, is pretty clearly a bowdlerization of ideas he cribbed from Heidegger. And yet Ryle has a method in a way that far superior philosophers like William James or Heidegger don’t.
Hi Sam, thanks for this helpful reply. It strikes at the importance of settling just what counts as professionalism and method in philosophy. I’ll try to riff a bit on it. For what it’s worth, I’m not convinced that method or professionalism can do much to explain why philosophers like Heidegger and James fare worse than Ryle when it comes to who is seen as part of the canon in most departments today, and I think that citation networks and journal capture played an outsized role in the way Anglo-American philosophy developed in the 20th century (I’m of the mind that Carnap is a different story, and a case could be made — and has been; see Soames’ two-volume work — that developments in modal logic in particular are important for the sociology of 20th century Anglo-American philosophy). Let me explain.
Setting aside his work in psychology, James is a tricky case, as his philosophical writings are more playful and open than Heidegger’s. But Heidegger has a method, it seems to me, even if it isn’t one that philosophers of a Rylean bent are liable to easily employ. And John McDermott used to say that the pragmatists and the existentialists were talking about the same thing, but using different language: for the pragmatists, one tried to make sure one wasn’t just existing, but also living; for the existentialists, one tried to make sure one wasn’t just living, but also existing.
Perhaps living and existing aren’t the kinds of issues early analytic philosophers tended to dwell on, and it’s true that the methods of existentialism and pragmatism get a bad rap in some quarters, for it’s not always easy to cut through the jargon. Take Dewey, for instance. Rorty remarks that he uses the word “experience” as an incantatory device to blur all distinctions, and Oliver Wendell Holmes, Jr. said that Dewey wrote as “God would have spoken had He been inarticulate but keenly desirous to tell you how it was”. And yet, here’s Dewey in Experience and Nature, published 31 years before Wilfrid Sellars’ Empiricism and the Philosophy of Mind, explicitly endorsing two key theses of the latter work: psychological nominalism (the idea that language use is a condition on the exercise of discursive cognition), and a rejection of mythical givenness (roughly, the idea that sensory states can function as an epistemic foundation for knowledge):
Owing to the influence of Sellars, psychological nominalism and the rejection of mythical givenness would become points of orientation for many philosophers working from the middle of the 20th century. It’s a shame there isn’t more awareness of the rich pluralistic background that informs the development of what came to be called “analytic philosophy”. Perhaps what we need are philosophers who are more interested in familiarizing themselves with the last two hundred years’ worth of work than in worrying over whether they will have readers two hundred years in the future.
Surely the reason people would want to read our work, and not just the fact that they would, matters a great deal. People probably will be reading Mein Kampf 200 years after it was written. The idea, then, must be that one hopes that one’s work will in some way be valued by future people. But Mein Kampf will be valued by historians and others interested in various darker dimensions of the human condition, and presumably this is not what Hitler hoped for, either. It seems to me that the desire to be read well into the future, then, must ultimately reduce to the desire to be well thought of by future generations. Personally, I’d need to meet them first before concluding whether I want them to like me or my work.
Upon a bit of reflection, I think maybe a more defensible desire would be the desire that one’s work make the world (whether in terms of the development of philosophy or in general) in 200 years better, irrespective of whether the work is read, or whether one is remembered. The latter don’t seem important at all, though oftentimes the two (being read and remembered, on the one hand, and having a positive impact, on the other) are conjoined, for obvious but morally unimportant reasons.
Yes, this was my thought on seeing the Stanley quote. If I can write something that shapes what people 10 years from now are writing, in ways that shape what people 10 years from then are writing, and so on, then I can have quite a large impact 200 years from now, even if no one is aware that I am the one having the impact.
This was my thought as well.
Thank you for this discussion, Justin. I agree with your general argument but want to add another angle. I find it very problematic if philosophers measure the success of their work by their posthumous impact. This wish expresses, in my opinion, the drive to dominate the philosophical field as much as possible. Stanley’s entire statement sounds very Nietzschean to me: a combination of will to power and self-absorbed hyperbole (the combination that makes many teenagers love Nietzsche). I believe that such attitudes have devastating consequences for our profession. They set much too high a standard for what is seen as valuable work and hence contribute to the crazy competitiveness of our field. We should work together, even if this means that *my* contribution to the field is not as visible as it should be, or vanishes completely. In some sense, Stanley’s statement summarizes very well the elitist Ivy League mindset that makes our field so narrowly focused on certain kinds of contributions by a few philosophers but cripples the creativity of many others (as Helen de Cruz shows convincingly on Philosophers’ Cocoon). Philosophy should be a collaborative effort and not an egotistic battle for influence. This is what Twitter is for, and Elon Musk is the perfect incarnation of such an attitude.
I also struggle to see how this very common elitist obsession with hierarchy and one’s place in it fits with any sort of political or moral view that isn’t hard right, much less with the leftier-than-thou public politics of many of those who hold such values. And I certainly don’t have only Stanley in mind here, or even primarily him.
There are different kinds of hierarchies. It doesn’t follow that because someone denounces one kind (economic) that they should denounce another (reputational).Report
Hear, hear! It is also a delusion to think that you can have control over what future generations will value. I don’t see how you could guarantee living on philosophically. By having very high standards and/or working extra hard? But aren’t we all doing that? Without original ideas (possibly combined with an engaging writing style) such an ambition is futile – even then, many will be forgotten.
Few of us make individual contributions that will be remembered in 200 years. All of us can make contributions to collective projects that will matter in 200 years.
We can all do things to ensure that our planet stays physically habitable for human beings. We can all do things to promote liberty and democratic government and to prevent the spread of authoritarianism. As philosophers, we can all do things to promote a vibrant future for our intellectual discipline.
To work towards contributing to a goal far beyond one’s own lifetime seems to me to be a very worthwhile aspiration as well as motivation. But that goal need not be made dependent on people still reading our own works in some distant future. I would like to suggest what strikes me, at least, as a very satisfying option: that we should view our own efforts as akin to adding one stone to an edifice that constitutes the cooperative effort of countless individuals and generations.
Most humans are not going to live forever, nor will they even be remembered for more than a few generations after their death. Is it (life) therefore not worth doing?
This is essentially the same question.
Future generations will call this “Lin’s Obscurity Argument for Antinatalism.”
Yesss. My legacy is now secured for the next 200 years.
No Patrick! Skynet will use your argument as justification for the first, third, and eighth purging of biological humans. I’ve just returned from that timeline; delete your post before it’s too late!
See, this is exactly why I’m anti-time-travel.
Marcus Aurelius (who was born yesterday in 121 A.D., btw) wrote his Meditations as a private diary and not for public consumption. What a loss that would’ve been, if he had been guided by this principle.
These kinds of ambitions (or egos) make the discipline (and the world) so crowded and suffocating…
Here’s a different view of how one might regard one’s work, courtesy of Jerry Fodor, from the preface to A Theory of Content and Other Essays: “…I don’t write for posterity, I don’t feel bad about changing my mind in public. Posterity, no doubt, will have problems of its own; I am glad to settle for a slightly better story to tell than the one I had last week.”
A group of students once had the great fortune of Kurt Vonnegut visiting their high school. He gave them the following assignment:
Here’s an assignment for tonight, and I hope Ms. Lockwood will flunk you if you don’t do it: Write a six line poem, about anything, but rhymed. No fair tennis without a net. Make it as good as you possibly can. But don’t tell anybody what you’re doing. Don’t show it or recite it to anybody, not even your girlfriend or parents or whatever, or Ms. Lockwood. OK?
Tear it up into teeny-weeny pieces, and discard them into widely separated trash recepticals [sic]. You will find that you have already been gloriously rewarded for your poem. You have experienced becoming, learned a lot more about what’s inside you, and you have made your soul grow.
“I would regard myself as an abject failure if people are still not reading my philosophical work in 200 years. I have zero intention of being just another Ivy League professor whose work lasts as long as they are alive.” My first reaction: Wow! Second reaction: Sad. Third reaction: To each their own. Fourth reaction: Good luck!
Perhaps there is an argument against presentism and for eternalism/growing block (in the metaphysics of time sense) here: if only present things exist, then only present things can have value, and so the only way a work can retain value is if it remains in the ever-moving present (i.e., if it is remembered in posterity). However, this conclusion seems absurd, because (a) nothing will be remembered forever (assume that means death-of-the-sun-forever) and hence any work will eventually have no value, and because (b) being remembered surely has a component of randomness, and the value of a work feels like it should not. From this: if we want any work to really have value, the past must continue to exist, and the value of by-now-long-forgotten works must continue to exist with it.
In 200 years, historians of Early Post-Millennial philosophy, whether human or AI bots, will surely be referencing DailyNous and Justin Weinberg’s invaluable contributions.
The nice thing about setting a criterion by which your entire life’s work will be judged long after you are dead is that you get to live your life thinking you achieved it, and whether or not you ever do will be literally nothing to you.
Since we’re giving hot takes on Stanley’s self-induced twitter ratio-ing, I think that we should all ask ourselves whether or not we would think that our lives would somehow be better if our ideas survive for hundreds of years in an appreciable (as opposed to execrable) way. Speaking for myself, I think my life is better off if people read and appreciate my work in 200 years. That really would be nifty. If I weren’t already dead, I would think that that was awesome and I would also think that all of my philosophy friends would probably think the same. You don’t need to think, or even aim, to be the next Kant or Rawls to think that it would be a good thing if your work, and you by extension, were still being positively talked about, engaged with, disagreed with, etc., long past your death.
Having said that, I also understand those of you who find something sad, tragic, or even pathetic about a life whose purpose is structured *around* that goal. There are so many goods in life that a life structured around a good that’s impossible to ever personally experience seems like folly (what Epicurus might call a “vain and empty” desire). There’s something to this too. We may even be better at achieving that end if we don’t aim for it.
And having said THAT, I still find it funny when someone critiquing Stanley quotes a famous philosopher as a part of that critique. There’s a situational irony there that’s hopefully not lost.
The aim for one’s work to last for a long time is not a bad one.
Few of us want to be so forgettable that we are forgotten once we leave the room. If the desire to be remembered when out of sight is legitimate, then so is the desire to be remembered when not alive. And if it is good to be remembered past one’s lifetime for X years, it is better to be remembered for X+1 years.
So, the desire to be remembered forever is legitimate, but perhaps it’s not the be all and end all of one’s work. But, perhaps a related aim comes close: the aim to produce work that is worthy of being remembered forever.
One thing that makes a piece of original research worthy of being remembered forever is that it gets things right and does so in a way that can be demonstrated to others. But that, as philosophers, is what we should be aiming at anyway.
It occurs to me that this is really a powerfully odd thing for Stanley to have said and I wonder if he’d endorse it on reflection. Consider this argument that I offer tongue only barely in cheek:
If Stanley’s recent work on fascism and propaganda is still read in 200 years, that would mean that the people of 2222 face not just problems with authoritarian governments and misinformation but exactly the problems we do with those things. But that would mean the works failed entirely as interventions in the public debate, since they had no effect. So, given their intentions, one would have to conclude they’ve failed. But if they’re not being read in 200 years, then by the Stanley standard they are failures.
My serious point here is this: there seem to be a lot of things one might want to do with philosophical writing, and many of them seem worthwhile. One of those things is to change a current policy or a set of circumstances one finds unjust or otherwise regrettable. But if one succeeds in doing that, the work itself may not be read, because the circumstances have changed such that it’s no longer a problem. For instance, I’ve got a paper coming out soon arguing that the dominant method of interrogation used by American law enforcement, the Reid-Inbau method, is immoral because the entire method requires police to lie to suspects and witnesses. I really hope no one will find that paper the least bit interesting 30 or even 20 years from now, because I hope police won’t be using the Reid-Inbau method any more. Don’t get me wrong, I’m not delusional enough to think my paper will end it, but if I didn’t want it to end and didn’t think the paper might play some small role in that, I wouldn’t have wasted my time.
It could still be of historical or sociological interest!
Why? We still read Plato’s thoughts about the different ways of organising states, irrespective of how they fared as ‘interventions in the public debate’, because we find much of interest along the way.
I have no idea what line you take on the Reid-Inbau method (of which I’ve never heard, and which I’m intrigued by and am now going to look up), but presumably considerations of truth-telling vs lying, and the ethics of doing bad things to achieve positive outcomes, will still be relevant to people in 200 years’ time. If there are any such people.
If you’re still looking for stuff on the Reid method, this isn’t a bad place to start: https://www.newyorker.com/magazine/2013/12/09/the-interview-7
For the record, the Reid method is problematic for soooo many reasons, the biggest one being that it induces false confessions, as this New Yorker article demonstrates. I take a stronger line, though, and say that the mere fact that it requires police to lie to the citizens they are supposed to serve makes it wrong, over and above the very real worries about false confessions and convictions.
I’m reminded of the quote from Woody Allen: “I don’t want to achieve immortality through my work, I want to achieve immortality by not dying.”
Speaking of Woody Allen, I was reminded of a scene from Annie Hall.
Alvy (as a child): The universe is expanding.
Doctor: The universe is expanding?
Alvy: Well, the universe is everything, and if it’s expanding, someday it will break apart and that would be the end of everything!
Mother: What is that your business? He stopped doing his homework.
Alvy: What’s the point?
So my best bet is to write something good enough, or outlandish enough, to be cited by great philosopher X: literally yet another footnote to Plato.
What I find puzzling is the phrase, “my philosophical work”. Given determinism, in what sense can I call any philosophical work “mine”? It is not as though any idea truly originates in me out of nothing by a kind of mystical genius — my thoughts and ideas and philosophical work are the result of various influences outside of me, philosophical and social and random, mixing together and being sorted out by a particular brain. I call some work “my own” in virtue of its having passed through my brain as a kind of intermediary, perhaps. The same ideas (good and bad) recur over and over again; our own “unique” insights can be found expressed by those who lived long before us or who never read or cited us (and whom we never read or bothered to cite), just in slightly different terms. So what right have I to claim ownership over this work, to hold it to be original, or to think of it as a vehicle for my own immortality?
That said, I don’t find it puzzling to hope that my work will survive in 200 years. I hope that some true things survive in 200 years, and I hope that some of what I write is true, and consequently I hope that some of what I write will survive in 200 years. I presume this is what Jason Stanley meant also — an Ivy League professor naturally has to worry that their work is read and respected only because of their present status, not because of its enduring value or truth, and they’d prefer to know they have written something true and of enduring value. I respect Stanley’s desire for that reason. But this has nothing to do with the work being one person’s as opposed to another’s. I would not want the false things I write to survive, any more than I’d fail to want the true things that others have written to survive.
I assume what most people are pursuing in philosophical work is to try to get a few of their thoughts right without getting too many of their thoughts wrong. The underlying motivation is largely hedonistic, simply for the delicious pleasure of gaining understanding. Some of the motivation is altruistic, in the satisfaction of sharing a bit of understanding through teaching or publishing, the way in which one might be motivated to share a restaurant review on Yelp. There is also perhaps some reciprocal obligation to teach or write a bit, to give to the future what one has gained by reading the past. But the judgment of the future, and our rational interest in it, matters purely to the extent it is taken as recognition of having produced something of value.
I suspect philosophy as a discipline would be greatly improved by requiring all publications to be anonymous. Reviewers would then waste less time reading something composed only to bolster a C.V., and more time could be spent collaboratively improving the quality of the work itself — which might make it endure longer.
It’s possible, isn’t it, that, two hundred years from now, no one among us will be remembered for any of our philosophical efforts? (And yes, I do mean ‘no one’, including the most currently celebrated and professionally rewarded, difficult though that might be for some to imagine.) Is that a depressing thought, even for some? I hope not. Is it a humbling thought, at least for some? I hope so. Is it a thought intended to be critical of our individual or collective efforts as philosophers? Definitely not.
It’s possible, but I’d find it hard to believe that no currently active philosopher is read or remembered in 200 years. Is there any decade or generation in the last thousand years of which we can say, “we don’t know if any philosophers were writing at this time”?
But then again we probably don’t have a great idea what things will be like in 200 years. The whole place could be up in smoke by then.
Depends on where you look.
For me, there’s nothing intrinsically wrong with being driven by a goal that is unlikely to be achieved. Consider a backgammon player in a board state where she is unlikely to win even given optimal play (let’s ignore the doubling cube, and assume that the game gets played out). Her “job” is to play optimally, maximising her slim chance of victory.
Note that the quality of her play will be compromised if she succumbs to despair. But she will also suffer more generally if she foolishly believes herself likely to win (because she will lose side bets on the outcome of the game). Thus the ideal player will somehow hold true to the driving aim of winning, and at the same time acknowledge its improbability in a way that does not discourage her from optimal play.
Such an attitude is psychologically difficult for humans (for good reasons, I think, relating to the need to allocate resources efficiently). Stanley’s quip seems problematic to me at the conative level mostly because the phrase “abject failure” suggests a propensity to despair if the desired outcome is judged unlikely (which I think it should be!), rather than because of his expressing an ambition that is probably unattainable. (I do agree with others that, at the content level, the expressed ambition is a little on the narcissistic side).
I want my work to be the reason there isn’t a 200 years from now and so far things are going according to plan.
PS: It’d be cool if my work is still read 300 years from now, but let’s be clear: That won’t help me buy more guitars and tube amps *now*. So it’s not what’s most important.
History repeats itself. One generation may reject certain values. The next generation will resurrect them. Death and finality will claim all titles, your family, and then you. Temporality is the basis of life and thought, but we don’t have to resurrect Plato or Buddha or Rama. We can strive for what they sought and be free from the angst of dying and finality.
To philosophize is to learn to die. To publish, on the other hand…
I do not disagree wholly with what Weinberg says on the matter of caring for the posterity of one’s work, but I would like to expand on some ideas and bring in others. It may be true that pursuing the satisfaction of one’s work “may involve sacrificing the actual goods of your life for the merely possible goods of your work’s afterlife,” but it may also be true that the pursuit of “greatness” is among the actual goods of one’s life.
“There remains the question: is ‘being important in the distant future’ a standard to which we should hold our work, our projects, or ourselves?” I do not believe that this is a question one needs to worry about, because the standard is elusive. Either this aim is “an unrealizable desire” or it is not. We know that ‘some’ works are considered by many to be “great” and are read well after 200 years. So if the goal of, say, Homer, was to create works that would last for countless years, then it was certainly realized, but not by Homer.
Now this does not mean that it is an unworthy aim, although it is not a goal that can be realized ‘by the author.’ So how, if it does constitute one, does one escape such a dilemma? I believe that if one does wish to create something “great,” the aim should be to create work that they themselves believe to be great. This of course may be a pursuit that cannot be realized by those who are ‘never’ satisfied with their work, but perhaps that is precisely the guide by which one creates “greatness” in their time or for future generations?
But greatness is not necessarily aimed for; it can come by accident – one might stumble upon it, perhaps never realizing it. So, in this way, I agree with Weinberg that it is “better to focus on and articulate which qualities we have in mind,” and I would like to emphasize the ‘we,’ because that is what matters most: what we think about the quality of our own work.
If part of the view being expressed in the “200 years” remark is that we should write on topics of enduring importance rather than whatever the trendy yet trivial topics of the day happen to be, then I’d agree with that aspect of it.
However, the idea that anyone would regard themselves as an “abject failure” if their work is not being read in 200 years is both laughable and sad to me. Such comments often do little more than reveal an absurdly over-inflated ego: so few people will produce work that will be read in 200 years that the idea that you are not only likely to be one of them but that you will be an “abject failure” if you are not is just ridiculously arrogant. This kind of puffery should be roundly mocked in satires of academia, not taken seriously as a standard for evaluating the worth of one’s work.
When people make these sorts of remarks, they remind me of Casaubon’s arrogance in Middlemarch. We might instead take a different lesson from that book: that “the growing good of the world is partly dependent on unhistoric acts” performed by those who now “rest in unvisited tombs.”
For my part, I am happy to perform unhistoric acts, like caring for my students and helping them see the joy of intellectual life. As long as I can do those kinds of things, whether or not my work is read in 200 years—or if I am even remembered at all by those alive at that time—is not important to me. If I let misplaced and grasping ambition for intellectual fame get in the way of that, then I would regard myself as an abject failure.
Who reads Newton’s or Darwin’s works nowadays (except for some historians of science)? Impact has nothing to do with being read.
Those things work a bit differently in science than in philosophy.
I am one who centers their philosophical work around teaching more than scholarship (and my scholarship, such as it is, is about teaching). I know my work as a teacher won’t be known 200 years from now, nor do I care whether it is. I seek to have an effect on the students I teach. To change their view of the world and themselves. To have an effect now, this semester, and, I hope, for years to come. I am elated when they remember me 10 or 15 or 20 years past graduation, so that is a bit of what Stanley is talking about: feeling that the value of my work consists in its longevity. If I think about how the world might be different 200 years from now, perhaps it is in some sort of chain of cause and effect: the students I have inspired will inspire others, and so on. I guess I hope, vainly, that the world will be different 200 years from now because I taught a couple thousand students during my teaching career, but I don’t really care if people remember me by name. Indeed, as Marcus Aurelius and other Stoics have said, the promise of being forgotten by time takes a lot of the burden off our present work. And maybe this “butterfly effect” point works for scholarship, too. Even if Stanley isn’t read in 200 years, maybe a more realistic goal is for Stanley and the rest of us to have influenced, directly or indirectly, what the future philosophers are writing in the 23rd century.
It seems that those overly focused on or obsessed with the survival of their work (or, indeed, themselves) haven’t yet understood or embraced two key insights prevalent in Eastern thought (and elsewhere):
(a) अणिच्च (Anicca) or the impermanence of all things
(b) You have the right to your actions but not to the fruits of your actions. And attachment to presumed or imagined consequences is often a cause of and recipe for suffering (in this instance a sense of “abject failure”).
It’s pretty clear that an inflated ego and a disproportionate sense of self-importance are at work in the case that generated this discussion.
Intrinsic values last forever because they are things that are valuable in themselves…. If our work isn’t intrinsically valuable, then the extrinsic value will only last as long as people find our work useful. I think most of our work is fully in the latter camp, and I’m okay with that.
I don’t see a need for long-lasting meaning of my own existence. Meaning in the existential sense, I think, exists only in the way that features of the quantum world exist, to the best of our current knowledge. They are real at the microscopic level, and it is difficult to explain how they have larger significance. That doesn’t mean that the quantum is insignificant. Perhaps it gets “smoothed” or “averaged” out in the macroscopic, so it disappears.
If someone reads THIS comment and finds it enlightening or interesting, then my work has influence and has meaning. That might in turn affect them in some way…. Maybe it spurs them to publish an article, or influences an idea that they have on some other topic. Does that matter when the heat death of the universe arrives? Of course not, but that doesn’t mean that it lacks meaning. And that is enough for me.
What is success? You can be successful at many things. You can be successful at counting blades of grass. Is that success? It isn’t success.
Thank you for such an interesting topic. I study philosophy in early 15th-century Prague, and am reminded of the most brilliant philosopher of the period, Stanislav of Znojmo, who not only understood John Wyclif, but usefully added to his logic and metaphysics. He was regarded as the brightest of the bright in his time, and eventually was compelled to renounce his earlier position and attack the thought of the younger generation of followers of Jan Hus. He is now remembered almost exclusively for that in the field of late medieval Central European philosophy. The rest of the important philosophers of his day, Štěpán Páleč, Ondřej z Brodu, Adalbert Ranconis, Prokop z Plzně, are barely remembered today. And this was a time when differences about philosophy led to real violence.
I’ll add another odd incongruity that I’m sensing from what I’ll just take to be the “dominant” response to Stanley’s quotes about the value of being remembered. The dominant response here appears to me to be that Stanley is plainly wrong to want to be remembered in 200 years and that the desire is a symptom of some kind of narcissistic tendency. However, how do we square this with the great push in the last 20 years TO REMEMBER figures in the history of philosophy?
Last week, Justin posted “Escaping the Feedback Loop” which contained within it the following argument:
“Why, then, do so few philosophers read Ruth Barcan Marcus? Why do they read Martha Kneale only in the context of her joint book with her husband (Kneale and Kneale, Development of Logic), and know Alice Ambrose and Margaret Macdonald only for their editorial work on Wittgenstein’s writings (Ambrose, Wittgenstein’s Lectures)? Why have most of us never even heard the names of the other women featured in this Special Issue, Christine Ladd-Franklin, Olga Plümacher, Dorothy Wrinch, or Ayda Ignez Arruda?”
These appeals seem to CONFIRM Stanley’s position that being remembered has intrinsic value, that it pays respect or honor to long dead authors to continue to not only read their work but to remember their names EVEN IF THEY DIDN’T HAVE AN IMPACT in the history of the discipline. I’ll admit that I find it hard to square the desire *to remember* in these cases with the strongly negative response to Stanley’s stated desire to *be remembered.* Either there’s intrinsic value to being remembered or there isn’t. My own view is that there is such a value, that one’s life is improved by being remembered positively for one’s work, and that it’s not wrong to have such a desire.
We can definitely argue that the value of being remembered need not (or should not) be the primary (or even an important) factor in one’s own life, but I also think that it’s important to think about how to apply the principle of charity to Stanley’s claim, especially when we contrast it with arguments for escaping the feedback loop of history, for resurrecting forgotten philosophers, etc.
I think there is a straightforward way of understanding the drive to recover historical philosophers in terms of present benefits, rather than as a kind of kindness to the dead themselves. Some people believe that true beliefs about important things are intrinsically valuable, and so there may be intrinsic value in knowing more about these forgotten figures. Perhaps more saliently, bringing knowledge of the contributions of historical women into more common circulation may improve the gender climate of the discipline as it is currently practiced.
Bracketing that, though, I am not sure that it is so weird to endorse an asymmetry between what people ought to give, and what people ought to want to be given. For instance, in the case of love, I would say that it is an admirable trait to love everyone around you, but that it is not an admirable trait at all to need to be loved by everyone around you. The former is generous of spirit, whereas the latter is needy and vain. There are also commonsense differences attached to exactly how such a desire might be formulated. To hope to be loved, and even to aim to be loved, may be unproblematic whereas to need to be loved is less so. The language of “abject failure” here suggests an unlovely formulation of the desire to be recognized. But, of course, an unpleasant one-off remark doesn’t need to be more than that.
The desire to be remembered is one thing. The thought that one is an “abject failure” if one is not remembered is another.
It seems kind of depressing for someone’s sense of worth to be dependent upon what people who won’t exist for over a century think of one.
Why does my work need to be read in 200 years for me to leave a legacy, though? There are a lot of ways to leave a philosophical legacy that have nothing to do with my work being read in the future. If we’re talking about legacy, the only one I can think of that’s really that worthwhile is that you influence people, from students to colleagues, who then go on to influence other people.
Sure, your name is no longer as bound up with your legacy, but it doesn’t really matter. It’s still a legacy even if your name isn’t being sung to the mountains.
Never mind being read (or not being read) in 200 years’ time. For the vast majority of the philosophical profession, the problem is that their work will go largely unread in the here and now. The following is an edited version of my response to ‘Fabio’s’ article, ‘Confessions of an Ex-Philosopher’, originally posted on Leiter in 2020.
… Fabio may be trying to infect others with his own unhappiness [‘If you are happy, keep doing what you do (are you happy though?)’], but he does suggest at least *one* genuine cause for unhappiness in the life of a professional philosopher, though he does it in a rather nasty way:
I can tell you: most of what you do is supremely useless work. The overwhelming majority of your papers and your book reviews — invariably published in niche journals — will be read maybe by a dozen people … You are not propelling forward the intellectual development of the human species. You’re wasting your life and energies droning away on projects that interest maybe a handful of people worldwide, and that very often have no intrinsic value, if not that of contributing to the reproduction of an abhorrent and rapacious system of academic publishing.
There is a real problem with Philosophy as a profession, suggested though not fully developed by Fabio, namely that it is a winner-takes-all economy as defined by Frank and Cook’s book The Winner-Take-All Society (1995). A fortunate few are attended to, but most people’s papers go largely unread. The rewards in terms of fame and attention are distributed in a concave-sided pyramid with a very wide shallow base and a narrow lofty steeple. Most papers are hardly read or cited at all, whilst a lucky few have tens or even hundreds of citations. This means that the vast majority of professional philosophers will never achieve their heart’s desire, which is to make *and to be recognised as making* a significant contribution to the (international) philosophical debate. This is not (in the main) due to the moral failings of famous philosophers who wantonly disregard the young and struggling, but to scholastic acts between consenting adults, scholastic acts of which the young and struggling are equally guilty. As I say, what most of us want, in our heart of hearts, is to be, and to be recognised as being, gifted philosophers who make a major contribution to the discipline. But this is an ambition that most of us will never achieve. Why? Well, precisely *because* of this shared ambition (as well as institutional incentives), books and articles pour off the presses at a rate that no single individual can possibly keep up with. The attention that we can pay to others is finite. Ditto the attention that others can pay to us. There just isn’t world enough and time to read more than a fraction of what might be relevant and even good. (And the problem is compounded if you happen to have wide interests.) So when it comes to what we buy with the currency of attention, we all have to be viciously selective, especially as reading a philosophy paper, and still more a book, represents a sizeable investment in terms of time and effort. Nobody wants to waste their time on a dud.
We are all like the buyer in the airport bookstore who wants a good book to entertain herself on a long-haul flight. She does not want to be bored on the flight across the Pacific (since that would make a bad thing worse), so she plumps for the latest Kay Scarpetta novel from Patricia Cornwell. And this is not because she thinks that there aren’t better books in the same genre, but because she does not have time to do a search and because she knows that Patricia Cornwell is a reliable purveyor of the right kind of literary product. As it is with thrillers, so it is with Philosophy. A few top names get the lion’s share of the attention and citations, whilst a great many people not very much less talented (or maybe even more so) languish in relative obscurity. We tend to read papers by names that we recognize or which have been recommended to us by people that we trust or admire, because then we can be reasonably confident that we won’t be wasting our time. Furthermore, there is a tendency to read the paper that everybody is talking about, even if you privately think that there are better papers in the offing, simply because everybody is talking about it and you don’t want to sound like an ignoramus.
All this does not mean that you are in direct competition with your colleague down the corridor. On the contrary, her fame can contribute to yours and vice versa if you cite her in your successful paper and she cites you in hers. The competition is rather at the collective level. If somebody somewhere is reading one of your papers then somebody somewhere else is not getting read. When it comes to attention and fame, philosophy at large is a limited-sum game.
Thus it is more or less inevitable that most of us won’t get what we really want. Philosophy as a profession is an unhappiness machine, unless a) you get lucky and make your way into the spire, or b) you give up your heart’s desire. We are collectively being slapped down by the back of the invisible hand.
I would regard myself as an abject failure if I regarded myself as an abject failure because people weren’t reading my work in 200 years.
I can tell that you’re a solid dude with a good head on his shoulders, and the way you phrased your comment was really clever. A lot of people in philosophy could learn from your example, and you should start a substack.