Influential Ideas in an AI Era


A philosopher often praised for the accessibility of his writing would, when asked about it (he often took part in advice sessions for younger academics), say that he was not writing for today, but for the future.

What he meant was that rather than clogging his writing with technicalities, keeping it focused on narrow disputes, hiding its ideas behind jargon, burying it in caveats, and bending it around every far-fetched-but-logically-possible objection—things which are normal, and not always objectionable, in the typical academic article or monograph, where one is communicating with one’s contemporary colleagues—he was trying to write about matters that people outside of our particular professional moment would find important, and in a way that they could understand.

The desire or hope that one’s ideas outlast one’s era is not uncommon among academics.

Though he was pilloried for putting it in such extreme terms, when another philosopher said on social media,

I would regard myself as an abject failure if people are still not reading my philosophical work in 200 years. I have zero intention of being just another Ivy League professor whose work lasts as long as they are alive,

he was expressing that common attitude.

[image made with ChatGPT and Photoshop]

So how can one make one’s ideas last?

Carve them into stone, obviously. Stone carvings will survive time and its disasters much better than paper, magnetic tape, plastic discs, hard drives, etc.

If we ask the question in a less cataclysmic register, the answer might be different. It might be something like what Tyler Cowen (George Mason University), writing at Bloomberg (reproduced at Marginal Revolution), says:

If you wish to achieve some kind of intellectual immortality, writing for the AIs is probably your best chance. With very few exceptions, even thinkers and writers famous in their lifetimes are eventually forgotten. But not by the AIs. If you want your grandchildren or great-grandchildren to know what you thought about a topic, the AIs can give them a pretty good idea. After all, the AIs will have digested much of your corpus and built a model of how you think. Your descendants, or maybe future fans, won’t have to page through a lot of dusty old books to get an inkling of your ideas.

It may be optimistic to think that the AI systems of the future will be able to properly credit you for your ideas, or that, even if able to, they will bother to in much of their communication. But it’s not (just?) recognition you’re after, right? It’s that your ideas are worth preserving, even if they’re not attached to you.

But how well will AI actually preserve your ideas? How well or how often will it be called upon to explicitly communicate them? That is unclear.

What seems clearer is that popular AI systems are going to “converse” with more people than any one individual ever could. It seems likely that their utterances will, at least in the aggregate, be very influential. So perhaps the best one can hope for is to influence, with one’s ideas, what the AI systems will say.

That’s what Craig Warmke, a philosopher at Northern Illinois University, said on X.com recently:

Since 2023 or so, one thing that’s motivated my coauthors and me—having influence over LLMs. If you think your writing latches onto reality, why wouldn’t you want the LLMs to be trained on your writing for the greater good? And, still, a significant contingent of academics would prefer frontier models not to be trained on their research because AI companies don’t pay them for it. It’s hard for me to understand the mentality of an academic who would prefer a $100 check over influence in the model weights of widely used LLMs.

One might wonder what determines how much influence (if any) one’s ideas will have on what an AI says. This would seem to be a function of how much other material the LLMs are trained on, what the training and use processes reward the LLM for, the type and extent of “censorship” that corporations and governments build into or impose on their AI systems, and so on. I don’t feel sufficiently informed to even guess about these matters, so it is difficult to estimate one’s likely degree of influence.

I suppose one reply is that if you don’t allow LLMs to train on your writing, the degree to which LLMs enhance your influence is likely to be lower than if you do. How likely? I have no idea.

Discussion welcome.

Comments
Marc Champagne
1 month ago

Compared with talk about “progress,” at least this talk about (what Ernest Becker called) “immortality projects” confesses with candor the REAL drive(s) behind this AI enthusiasm.

We may now see it for what it is: a proxy for religion.

Bilingual
Reply to Marc Champagne
1 month ago

Complete with its own fanatical acolytes

naive skeptic
Reply to Bilingual
1 month ago

Count me as a misotheist, then. I think AI exists, will become increasingly powerful, and should be opposed.

Bilingual
Reply to naive skeptic
1 month ago

Yes. Let’s tear down the idols.

Ibrahim Malik
1 month ago

I would hope that the future’s thinkers will not be so influenced by LLM prompts. Rather than immortalizing my ideas, I think LLMs would bastardize them.

Then again, when we recreate the music of ancient cultures, despite our having no access to the exact instruments, lingual phonetics, or notes of the time, one could argue that those cultures continue to exert their influence. But the process of (re)discovering an ancient civilisation, decoding its language, unearthing its artifacts, and wishing to reimagine its music is phenomenally different from an algorithm’s eating of your ideas; ideas understood not even as a text, but as statistical weights.

Bilingual
1 month ago

Immortality granted by AI is, as the French say, a leurre (a lure, an illusion). If my options are really limited to (a) languishing in obscurity or (b) having my intellectual output hijacked by some soulless, corporate-manufactured simulacrum of the human mind, then I’ll proudly choose the life of the lens-grinder. With every last particle of my power, I will strive to prevent the tech billionaire class from using my thoughts to power their planet-poisoning toys. Apologies if I sound melodramatic, but I’ve become quite sick of being bludgeoned over the head by AI apologia from people in this profession who seem to think it a foregone, near-fatalistic conclusion that AI is destined to wrap its tendrils around every aspect of academic life. It is not. We can refuse that eventuality, if we decide to care enough.

Ethan Brown
Reply to Bilingual
1 month ago

Even this article’s framing as “the AI era” is part of the culture of fatalism.

AGT
1 month ago

I don’t want my ideas to last; I just want to write them down.

Andrew Richmond
1 month ago

The amount of influence you have on LLM outputs is going to be minuscule unless users ask about you specifically or about a field in which you’re already extremely well-known. If I want to compete with Tyler Cowen for influence on the future of economic thought, I’m not going to succeed by just putting my text in an LLM’s training data alongside his far more cited and widely-discussed text. He’s influenced LLM outputs because he’s influenced a massive number of *people,* who cite him, talk about him, debate his ideas, etc. That far larger body of work — not the meager bit of text he himself has added to the LLM’s training data, or anything he’s done to write “for the AIs” — is the reason that he and his work influence model outputs.

I’m not taking a stand on the $100 paycheck, but influence over LLM outputs seems to be one of those things you achieve by aiming for something else: to convince humans that your ideas are correct and important, since it’s not your writing but the combined writing of those other humans and the large-scale reception of your work that could, maybe, make you and your ideas salient to an LLM.

Benedict
1 month ago

I’m noticing a trend on DailyNous of articles aimed at legitimizing the use of LLMs. I’m fairly neutral on LLMs themselves, but in this case surely the argument is not convincing and amounts to nothing more than (even more) unfounded hype.
LLMs are only a few years old; they are a new technology whose ultimate social role and utility are still very much undecided. They have not proven their value for scholarship yet (that they have is the implicit, question-begging assumption behind the post), and so it is not at all clear that making your work accessible to LLMs is actually a reliable strategy for having it survive long-term. (Setting aside, as other commenters point out, how valuable it is to aim at such long-term influence in the first place.)
Further, given LLMs’ noted track record of distorting ideas, let alone outright plagiarizing them, there are other reasons to worry about their usefulness in making one’s work last.

A. Hilbert
Reply to Benedict
1 month ago

I think they have proved their value for scholarship, actually – at the very least, in expediting certain otherwise more time-consuming parts of one’s workflow/pipeline, including literature searches and summary lit reviews with links (NotebookLM, for example).

Gorm
1 month ago

FYI – why wait? You can ask the various AIs what they know about your philosophical work or views. In fact, what they told a family member of mine was not so bad.

Louis F. Cooper
1 month ago

Cowen writes (as quoted in the OP): “With very few exceptions, even thinkers and writers famous in their lifetimes are eventually forgotten.”

A lot hangs here on how one defines the vague terms “very few” and “eventually.” One could say with equal validity that the work of quite a lot of writers survives their lifetimes, especially since historians of ideas (intellectual historians, if you prefer) need to write about thinkers, some or many of whom are no longer alive, assuming those historians want to publish and compete for academic jobs, which they presumably do.

As for Cowen himself, he has a widely read blog, writes for other publications, has (I think) a podcast, and probably (though I can’t quantify this) has a fair amount of influence in his discipline. That should be enough for him. I don’t really understand why people worry about whether they’re going to be read in 200 years. Some of us, though not Tyler Cowen, are worried about whether more than two people — I mean humans, not scavenging AIs — are going to read the blog post we put up this morning. (Btw, I don’t blog under my full name, so don’t bother searching.)

EWS
1 month ago

What? This isn’t high school or politics. The goal is not to win a centuries-delayed popularity contest. The goal is to say true things. It also just so happens that saying true things is a good strategy to have people repeat what you say.

Kenny Easwaran
Reply to EWS
1 month ago

I’m not even sure if the goal is to *say* true things, so much as to get other people around you more hooked on accurate and useful things. Sometimes, you saying the true thing is good to get other people to recognize it as true and follow it. But sometimes, you saying the thing that is usefully wrong is what helps other people understand *why* it is wrong, and then in correcting it, get other people onto the truth.

Manny
1 month ago

Justin, are you getting kickbacks from the AI companies or something?

Manny
Reply to Justin Weinberg
1 month ago

All the while supporting the narrative that AI is super important and, like it or not, central to the future and thus we always have to be talking and thinking about it …

“Bad” publicity often isn’t bad publicity.

Esteban du Plantier
Reply to Manny
1 month ago

I’m as anti-AI as the next person, and I agree with a lot of where I think you’re coming from (though I don’t agree that Justin/DN is particularly pro-AI), but I don’t know how one could say that AI isn’t super important right now. Setting aside the bigger picture stuff (which is a lot to set aside), the future (which I am also skeptical about), and so on, and just focusing on our own discipline right now: AI fundamentally changed teaching by making it useless in most contexts to assign out-of-class writing. Something that we did for decades and decades as core to our job basically vanished overnight. Regardless of what you think about it (for the most part, I think it sucks), how is that not significant? At least at this moment, for us philosophers (at least those of us who teach)?

Kenny Easwaran
Reply to Manny
1 month ago

Many philosophers have thought for decades that AI is super important to think about as a way to understand core features of what it is to be human. There’s a reason that a Google search for ‘“minds and machines” class’ turns up classes at a dozen universities that have, in most cases, been running for decades.

Against AI
Reply to Justin Weinberg
1 month ago

Ah, maybe, then, you could consider not posting every half-baked comment or article on AI.

Jr Phil Prof
1 month ago

I admit that I don’t feel much anxiety over the idea that maybe my own work won’t be read by many people in 200 years. But other than that, I’m just confused by something here. If AI is predicted to become basically the only medium through which serious people do any serious thinking, and in that world what AI is likely to produce depends heavily on what we’re allowing it to train on right now, then I’d honestly let the world die without my brilliant ideas in it. Like another poster said here, it’s only a foregone conclusion that people won’t read actual books written by actual minds if we give in to this stupid thing. And if we do give in to it, I seriously doubt there will be much left of real philosophical thinking anyway. Pumping our ideas into it isn’t making it more of an actual thinker. If that un-thinking thing is all we’re left with, then philosophy is dead.

Richard Y Chappell
1 month ago

I’m personally very happy for LLMs to train on my writing, for the same reason that I’m happy for people to read it: maybe (if I’m doing my job at all well) it will help some people to think better and more clearly than they otherwise would!

(Maybe unlikely in any given instance, but the odds improve with audience numbers. Some degree of optimism may be necessary to motivate one to engage in public philosophy efforts to begin with. As I see it, the possibility of LLM-distilled influence provides an additional avenue through which such efforts might bear fruit.)

Even if you’re unhappy that LLMs exist, or hope to coordinate some form of social or political opposition to them (whether in specific use-cases or more generally), it seems like you could still have good reason to prefer them to be better- rather than worse-trained in the meantime. Like how you can support billionaire philanthropy even if you would prefer circumstances in which wealth was more equally distributed to begin with. The general principle: even if you’d prefer for something not to exist, better for it to exist in a better form than in a worse one, while it exists at all.

So, in a way I share Warmke’s bafflement. But on the other hand, I’m not entirely surprised, because I was already a bit baffled by how most academics don’t seem interested in blogging (etc.) about their ideas. I guess there’s just a lot of variation in how much different academics care about the spread of their ideas.

Keith Douglas
Reply to Richard Y Chappell
1 month ago

How do you square this with the moral certainty that they will eventually misrepresent your work in extreme ways (sometimes missing your negations, for example)? With no way to directly correct them, either?

Richard Y Chappell
Reply to Keith Douglas
1 month ago

I think people misrepresent my work worse, tbh! Still, I think the overall effect is better than if I hadn’t put my work out there at all. So I’m not too worried about it. (The badness of some mistakes or misrepresentations is outweighed by the positive value of all the accurate-enough representations. Overweighting bads — to the point of neglecting positive value — is a common normative error worth guarding against, IMO. It surely doesn’t warrant lexical priority.)

Jaded Luddite
Reply to Richard Y Chappell
1 month ago

I don’t think the comparison to billionaire philanthropy is apt, for few people think that giving money to (most) charities is bad in itself, even if it would be better if the social system were such that philanthropy weren’t necessary, or if philanthropy weren’t under the control of the people who do in fact control it, or if it could be better used elsewhere, etc.

A better comparison might be to, say, Prohibition agents in the ’20s: if I can’t get rid of them entirely, I at least want them to do their jobs as badly as possible and be reviled and distrusted by everyone around them.

Knibbe
1 month ago

Was Stanley being sarcastic or sincere? I didn’t see the post this references when it originally came out; I assume the former, but it’s hard for me to tell given the context. (And while I doubt he was being sincere, there is a joke waiting to be made about people “still not” reading his stuff in 200 years, meaning they aren’t reading it now!)

Justin Fisher
1 month ago

I don’t get all the haters here. I think that, starting about 30 years ago, it made obvious sense for academics to try to ensure that, as much as feasible, their work was freely available online and search-engine indexed, as that’d help it reach a broader audience and be less likely to languish in obscurity. For the past few years, it’s been equally clear that another path to a broader audience and non-obscurity is to get your work into the hands of LLMs that can help pass it on. Sure, billionaire tech-bros will take a tiny cut from all these uses of information technology, but the distastefulness of that is a ridiculous reason for us to stop using it! And sure, this probably foregoes a slight theoretical chance that you might have profited more off of it, but if you went into Philosophy to profit, I’ve got some sad news for you…

Patrick Lin
Reply to Justin Fisher
1 month ago

The problem you may be missing is that AI is killing off search engines, so this isn’t just “another path” but may very well be the main or only path for finding information in the future. So, that enriches and concentrates a ton of power in the hands of morally questionable people and companies.

Search-engine companies are at least more democratic (e.g., they consider the # of backlinks), which offers some safeguards for relevance and quality. They also don’t misrepresent your work or confidently tell you fabrications. They’re also more transparent (e.g., they flag a search result that’s sponsored) and allow someone to find your content organically, whereas it’s far from clear that AI companies (who already have plans to slip in sponsored content) will be this transparent.

For instance: if Elon Musk doesn’t like your kind of research, he could tune Grok to never promote it, and you might not ever know. Or if OpenAI becomes so dependent on government contracts (see the recent DoD fiasco that also involves Anthropic) that it effectively cannot refuse government demands, and one of those demands is to de-platform your work, you might also never know.

So, there are good reasons to think that pandering to AI won’t give you access to “a broader audience and non-obscurity.” LLMs are not search engines and can actually prevent you from finding what you need online.

David Wallace
Reply to Patrick Lin
1 month ago

There is nothing technological that prevents the designers of search engines from doing most of these things; indeed, I am much more skeptical than you seem to be that they in fact refrain from doing it. Search is not at all transparent and long ago moved on from passive analysis of link structure, not least because search itself rendered large collections of links much less useful.

Patrick Lin
Reply to David Wallace
1 month ago

I assume you mean search-engine companies can also manipulate results, not that they would want to deliberately make their product worse with fabrications and fewer safeguards.

Yes, search results can be manipulated. As I said above, there are reasons to think search-engine results (which retrieve info) are more trustworthy than LLM results (which generate info).

On top of that, there’s been 30 years of experience with search engines, and none or very few have created those problems or biases. Search-engine companies also don’t need massive data centers like LLM companies do, so there are lower barriers to entry for search, i.e., more diverse competition to distribute the risk.

Search engines aren’t perfect or as transparent, un-manipulable, etc., as they could be, but they are more transparent, etc., than LLMs. And there’s great reason to distrust the major LLM providers, even Anthropic, which, don’t forget, works with Palantir.

There’s really no comparison between search engines and LLMs when it comes to trust and transparency. But, sure, they’re both digital tech that can be compromised.

Kenny Easwaran
Reply to Patrick Lin
1 month ago

You should read Jessie Munton’s paper on the epistemic evaluation of search engines: https://philpapers.org/rec/MUNAMH

Also, you should try out some contemporary LLMs, all of which include search engines of their own, and use those retrieved results in generating their own results. (I do find that they don’t do this often enough, but it’s not hard to remind them that they should, and they can often give you something much easier to learn from than a sprawling Wikipedia article and two articles in non-English languages.)

Bilingual
Reply to Kenny Easwaran
1 month ago

Ah, so AI usage is justified because it lets one skip the annoying parts of learning.
I suppose my belief that certain ideas call for slow, careful, painstaking consideration is nothing but an antiquated prejudice. I see now that raw efficiency is the fundamental value to strive for. I should probably outsource all that other silliness to Anthropic!

Kenny Easwaran
Reply to Patrick Lin
1 month ago

“Search-engine companies are at least more democratic (e.g., they consider the # of backlinks), which offers some safeguards for relevance and quality. They also don’t misrepresent your work or confidently tell you fabrications. They’re also more transparent (e.g., they flag a search result that’s sponsored) and allow someone to find your content organically, whereas it’s far from clear that AI companies (who already have plans to slip in sponsored content) will be this transparent.”

I don’t believe that any of this is correct as a comparative claim. I think that all you are saying right now is that Google is (so far) a better-behaved corporation than OpenAI or xAI, and I expect that to whatever extent that is true, it will be reflected in comparisons between Gemini and ChatGPT or Grok.

The one bit of this that might be meaningful is that it’s much harder for a search engine, which just presents a page of links, to misrepresent work or confidently present fabrications. But if you’ve paid any attention at all to the internet over the past 25 years, you will have noticed that search results sometimes do effectively do just that – they just don’t do it in their own voice, because search engines (unlike LLMs) don’t have a voice.

Bilingual
Reply to Kenny Easwaran
1 month ago

Interesting. It sounds, then, like we should expand the range of technologies whose usage we consider illicit to include any such problematically tendentious or manipulable programs. Indeed, perhaps any piece of tech that can be conclusively shown to impair or distort our ability to seek and love truth ought to be stigmatized in equal measure.

Patrick Lin
Reply to Kenny Easwaran
1 month ago

What are your reasons for saying this?

“I don’t believe that any of this is correct as a comparative claim.”

E.g., how exactly does a search engine (as distinct from some rando’s website) misrepresent your work or confidently tell you fabrications? Search engines just point you to other sites that are returned as results.

No, I’m not just saying that Google is better behaved than OpenAI, etc. Maybe it happens to be now, but it dropped its “Don’t be evil” motto a while back and is no longer the idealistic company it started as.

What I’m also saying is that search engines retrieve info, which is more straightforward, more transparent, and therefore more trustworthy than LLMs. They’re not responsible for the accuracy of the info that lives on someone else’s site.

In contrast, LLMs generate info in a neat package for you, which may or may not be accurate, is usually presented confidently as accurate, and is not at all transparent, i.e., can be more susceptible to manipulation, like what this guy did. All this weighs against LLMs’ trustworthiness.

Regardless of trust, they’re two very different services: one is search, the other is chat.

Even if LLMs can search the web, in packaging things up neatly in a chat, they add yet another filter between us and what we’re looking for. That strongly seems like a bad thing, if we care about accuracy and a wider set of results returned. As the old TV line goes, “Just the facts, ma’am.”

Anyway, I’d say the world would be worse if search were degraded, which it is by virtue of having so much of its traffic diverted by LLMs. If you agree with that, then we’re not really disagreeing on the big picture.

Bilingual
Reply to Justin Fisher
1 month ago

Speaking for myself: the fundamental issue is not garnished wages, or some such nonsense. I find the proliferation of AI disturbing because of a basic intuition that my ability to think is intrinsically valuable. My intellectual life, my pursuit of wisdom, is one of, if not the most authentic expression of my humanity. I am therefore repulsed by technologies which would presume to do my thinking for me, to handle the work of learning, of exegesis, of critical and creative thought on my behalf. That is an affront to the reflective mind, and indeed the wound burns doubly hot when inflicted by people who call themselves professional lovers-of-wisdom.

And if you think AI-induced stultification is not already happening at alarming levels, I invite you to interact with a random sampling of undergraduates for any amount of time.

Jaded Luddite
1 month ago

It’s not the case that I don’t want LLMs to be trained on my writing because I won’t get paid royalties. Rather, I don’t want LLMs to be trained on my writing because I think my writing is good, and I want to do everything I can to make LLMs worse rather than better, so that people trust them less and their role in society diminishes.

puzzled
1 month ago

One additional facet is the intellectual property system. Things like when works enter the public domain and which educational uses are permitted under copyright make a difference to the circulation of ideas (including within higher education). A major question is whether the response to AI is doing more harm than good by bolstering the case of groups that ultimately want to weaken the public domain and fair use/dealing. See the law article “The AI Copyright Trap” by Carys Craig for this line of thought: https://digitalcommons.osgoode.yorku.ca/all_papers/391/. A lot of this particular issue boils down to whether AI access to publications constitutes copying or something else.

Sebastian Ruiz
27 days ago

It has always been so interesting how AI systems are able to process a body of work and replicate, if not improve upon, the given entry. They tend to produce a level of quality that some may consider beyond their own original skill set. I believe that if AI systems can’t properly credit original thinkers, ideas will fade faster. As the post notes about the permanence of ideas, making a mark equivalent to carving them into stone or printed text is crucial for their longevity. The topic of influence is also important.