Against “Throwaway Culture” in Philosophy


Throwaway culture refers to a culture in which the consumption and production of many goods is based on the practice of discarding them after just one or a few uses.

Is there an analog of throwaway culture in the realm of ideas?

Alexander Douglas (St. Andrews) thinks so. In a post at his blog/newsletter, he notes the observations of historian Peter Burke (Cambridge) about this:

Old editions of encyclopaedias, for instance, are pulped; the paper is rarely recycled, and likewise the knowledge contained. Even digital editions are often deleted to clear storage space or sent to some inaccessible archive. Sometimes newer editions reuse or update older entries, but just as often new entries are written from scratch, with little or no consultation of the past.

To prove that there is a real loss of knowledge involved in this waste, Burke documented how certain entries shrunk over time—the Encyclopaedia Britannica entries on King Charles I, Charles V, Raphael, Cicero, Goethe, Luther, and Plato, for example, grew much shorter over time, reflecting a declining public interest in Christianity and classical culture. The knowledge contained in the longer entries might end up being as lost as the Serapeum of Alexandria. Unlike any educated Victorian, you might have to look up what that is, but once you do you’ll see the point: public interest is fickle, and a time will likely come when we wish we hadn’t thrown away so much knowledge on then-unfashionable topics.

What about in philosophy? Douglas writes:

In philosophy, I think our throwaway culture results in a lot of wasted time and effort. Large research grants are awarded to researchers who use a priori methods, or consultation of only the most recent sources, to come the long way around to ideas, arguments, theories, and insights that are abundantly documented and discussed in historical texts. Vast sums are then paid to predatory publishing houses to digitally print these redundant results. It is like watching a government spend billions on new buildings, expanding cities into ecologically fragile hinterlands, while existing buildings in the centre of the city are abandoned to dereliction rather than being restored and updated at a fraction of the cost…

What’s an example of this phenomenon?

In the early days of analytic philosophy, a lot of time and effort went into “discovering” the various consequence relations that might hold between propositions besides material implication. A mountain of literature was produced—shiny new buildings on the edge of town—while Buridan’s Treatise on Consequences sat unread. Almost nobody thought to check what might be salvageable in this or hundreds of other medieval works on the topic. Since most philosophy degrees tended to divide the history of philosophy into “Ancient” and “Modern”, with the intervening millennium and a half unmentioned, the whole medieval tradition lay as unseen as an underground landfill.

I suspect some readers have other examples in mind.

Throwaway culture in philosophy is “self-defeating,” Douglas says.

What does anyone write philosophy for, if not to have it used and developed in the future? What is the point of writing it at all if the cultural norms of academia guarantee that in fifty years time the most anyone will say about your work is the box-ticking footnote, “(Smith 2025) is a pioneering study of the topic”, and in a hundred years time nobody will know it at all?

Douglas thinks that the lesson here is that the discipline should “cultivate a more respectful attitude towards past ideas.” Though he doesn’t put it this way, it seems the idea is that the history of philosophy, far from a backwards-looking drag on progress in philosophy, is instead crucial for making sure such progress is not merely apparent.

You can read the whole post here.

35 Comments
Kenny Easwaran
9 months ago

Obviously, never reading anything old and always trying to re-derive all of one's results is going to result in a lot of wasted effort.

But it’s also true that trying to read everything that might be relevant, and to see whether any of it might anticipate what you are doing before writing any of it, is also going to result in a lot of wasted effort. This is one widely-recognized failure mode for grad students at the dissertation phase – spending too much time reading and not enough time writing.

Even ignoring the “wasted effort” angle, and just looking at the quality of the new production, there’s again a reason to avoid both extremes. If you don’t read enough of the past work, you might fall into the same problems that were diagnosed in the past and miss some important innovative responses that might be useful. But if you read too much of the past work before writing anything, you might get stuck in the paradigms they were working in, and get caught up in responses to issues that were once thought to matter but don’t actually matter. It’s always valuable for people to write down some of their own ideas for thinking about a question before reading existing literature, talking to anyone else, or consulting an AI – even if most of these first thoughts are worthless, this is the stage at which radically different ideas are most likely to enter the literature.

I think most of this is uncontroversial – people should read some, but not too much, of the existing literature, before publishing their work. The question is just whether there’s too much of an imbalance one way or the other.

A slightly more subtle point is that it’s probably best for there to be many people who work at different points on this spectrum – there is no one ideal balance such that everyone should strive for it. (Maybe there’s an ideal balance that makes the individual most likely to hit on the truth, but the community will benefit more from diversity on this spectrum.)

Thinking about the example of the (non)influence of Medieval logic on the 20th century analytic tradition – is it really worse that these connections were discovered later, after related ideas had been independently invented, rather than 20th century figures having felt like they were pushing old ideas in somewhat new directions?

Another Philosopher
Reply to  Kenny Easwaran
9 months ago

I also don’t think it is an example of throw-away culture if people are ignorant of what they are supposedly throwing away—was it really reasonable to expect early analytic philosophers to know the contents of hundreds of medieval works written in an idiom and style unfamiliar to them?

An adjunct
9 months ago

adjuncts = throw-away colleagues

Benj
9 months ago

Anyone know what he has in mind by this:

“In the early days of analytic philosophy, a lot of time and effort went into “discovering” the various consequence relations that might hold between propositions besides material implication … Buridan’s Treatise on Consequences sat unread”

Is the claim that Bochvar, Lesniewski, Lukasiewicz, Gödel, Kleene, Anderson, Belnap, Dunn etc could have rocketed straight to more interesting stuff if instead of wallowing around in foundations they’d only bothered to find everything they “contributed” in Buridan?

Charles Pigden
Reply to  Benj
9 months ago

I think that what he means is that early analytic philosophers (probably NOT including the Poles) were unduly fixated on the material conditional and didn’t have a very good concept of the consequence relation until the advent of Tarski. People like C. I. Lewis who complained about this were sometimes duplicating the work of medieval logicians such as Buridan and could perhaps have done with a helping hand from the past. As for the early analytic philosophers themselves, if they had been less ignorant of the past they would have been less likely to make these mistakes. These don’t seem like surprising or controversial claims to me. But then many of the philosophical logicians that I tend to read are soaked in the history of the subject.

Charles Pigden
Reply to  Benj
9 months ago

Bobzien’s research, which is cited below by Nicholas Denyer, casts a new light on these issues. If she is right – and her paper seems pretty convincing – Frege’s invention of the propositional calculus was due to his reviving, rethinking and mathematicizing the propositional logic of the Stoics, and this might not have happened had he not read Prantl, a usefully pedantic historian of logic who does not rate a mention in Beiser’s history of 19th Century German Philosophy. It was because Frege was interested in the history of logic that the propositional calculus was invented in the first place. This was probably not true of Russell’s reinvention of the propositional calculus, in so far as this was arrived at independently of Frege. (Russell says that he did not really understand the Begriffsschrift until he had independently discovered ‘most of what it contained’, though we can’t rule out some unconscious influence as his initial reading of the Begriffsschrift fructified in his brain.) So can an acquaintance with the history of logic be useful or even vital to the working logician? Answer: yes.

Charles Pigden
9 months ago

I am in broad agreement with the OP’s critique of throwaway culture but I don’t think that his chief example is an apt illustration of his thesis. 

It may be that analytic philosophers from the pre-war period such as Russell, Frege and Carnap were unfamiliar with their medieval predecessors, but this is emphatically NOT true of the leading philosophical logicians after WWII. Prior and Geach had a keen interest in the History of Logic, George Hughes and Max Cresswell wrote extensively about Medieval Logic, and paraconsistent logicians such as Richard Sylvan/Routley and Graham Priest have written plenty about the history of logic, Graham having an interest not only in European Medieval Logic but also in the logic of the Indian subcontinent. One of the brightest stars in philosophical logic today – Catarina Dutilh Novaes, winner of the Lakatos prize – started out as a historian of Medieval Logic. Even in the pre-war period, I suspect that Polish analytic philosophers will have been reasonably well acquainted with Medieval logicians.

But though the philosophical logicians of the early analytic period, Russell, Carnap and (perhaps) Frege, were (when young) indeed fairly ignorant of Medieval Logic, this is NOT because they were in thrall to a throwaway culture and ignorant or contemptuous of the History of Philosophy (Russell in particular being well-read in this area, as in many others). The reason is that they were raised on the wrong histories, specifically the often Kantian histories which celebrated the Early Modern Period as a break with a benighted Aristotelian past. Early Modern Philosophers were generally contemptuous of formal logic because they thought that the conclusion of a logically valid argument is contained in the premises and that therefore logic can teach us nothing new. Hence the need for a Novum Organum which could take us not from old truths to the unrecognised implications of those old truths, but from old truths to new truths via some ampliative method of enquiry. Indeed the Enlightenment was a Dark Age for formal logic, since its thinkers tended to smother and/or forget nearly everything that had been done in logic since Aristotle. This attitude, I suspect, was carried over into 19th Century Histories of Philosophy. Russell, Carnap and Frege were ignorant of Medieval Logic not because they were contemptuous or ignorant of the History of Philosophy but because the histories on which they were raised (histories mostly written in the 19th Century) were ignorant and contemptuous of part of that history. It is even a bit misleading to characterise the Early Modern Philosophers as having a throwaway culture. They were not generally in favour of throwing away the past – rather there were very specific relics of that past – relics that they took to be obstacles to progress – that they wanted to throw away. Russell himself came to appreciate that the early modern philosophical revolution involved loss as well as gain.
The following comes from an appreciative tribute to Ockham (1949): ‘And so, in the 16th and 17th centuries, the full flood of humanism in science overwhelmed not only the subservience to authority which marred the scholastic philosophy, but also subtlety and care in delicate distinction for which at [that] moment the world had no use.’

That is, so to speak, a pan-Enlightenment point. But with Russell, Frege and Carnap I suspect that there was something else at work. I think it is significant that they all hail from Protestant parts of Europe. Protestant culture (and post-Protestant culture) was at least mildly hostile to Catholic culture and its products, one of which was Medieval Logic. Here is Hume on the medieval universities: ‘Yet such was the ardour for study at this time, [the 1360s] that Speed in his Chronicle informs us, there were then 30,000 students in the university of Oxford alone. What was the occupation of all these young men? To learn very bad Latin, and still worse Logic.’

Thus the early analytic philosophers’ ignorance of Medieval philosophy was due to an Enlightenment-inspired ignorance and perhaps a Protestant-inspired ignorance, but not, directly, to a throwaway culture. 

Charles Pigden
Reply to  Charles Pigden
9 months ago

This Enlightenment hostility to formal logic as the product of a Catholic culture persisted into the Twentieth Century. Witness this letter from Neurath to Carnap (1943):

‘I am really depressed to see here all the Aristotelian metaphysics in full glint and glamour, bewitching my dear friend Carnap. As often, a formalistic drapery and hangings seduce logically-minded people as you are very much … It is really stimulating to see how the Roman Catholic Scholasticism finds its way into our logical studies which have been devoted to empiricism … [Brentano and the Polish philosophers] begot now Tarski etc and they are God fathers of OUR Carnap too; in this way Thomas Aquinas enters from another door Chicago [where Carnap was working at the time].’ (Neurath to Carnap 15/1/43, quoted in Mancosu 2008: 196)

The oddities of Neurath’s style are due to the fact that he was writing a trans-Atlantic letter to Carnap in his good-but-not-perfect English in order to avoid problems with the censor.

Nicholas Denyer
9 months ago

See “Frege Plagiarized the Stoics” by Susanne Bobzien for Frege’s debts to some of his predecessors. https://philarchive.org/archive/BOBFPT

Charles Pigden
Reply to  Nicholas Denyer
9 months ago

This is really interesting and I thank Nicholas for the pointer. If Frege borrowed extensively from Stoic logic, as Bobzien seems to prove, then even if he HAD been aware of Medieval Logic it might not have been of much use to him, since, in the main, the Medieval Logicians rejected lekta, thoughts (in the Fregean sense) or propositions – the supposed entities that are expressed by what people say or think – in favour of propositiones, the things that people actually say or think. This raises an interesting question. Given that he borrowed extensively from the Stoics (as described by Prantl), why didn’t he acknowledge the fact? In this he differs markedly from Russell, who was generous, indeed over-generous, in acknowledging his intellectual debts.

Sara
9 months ago

Talk of philosophy in terms of “wasted effort” and what is basically a framework that values maximising efficiency seems silly to me. Is the point of philosophy to deliver groundbreaking, never-before-seen empirical results? Should we calculate the ROI on investments in philosophy too? That would be a bleak affair. I guess that, as always, the contributions to this discussion will boil down to what one thinks is the purpose of philosophy.

Dick
9 months ago

Perhaps, in the future, historians will explain how, at some point, philosophers stopped learning Latin and reading what had been written before, and instead began trying to become “pioneers” in some sub-sub-sub-discipline—because without being pioneers and publishing a disproportionate number of useless papers, they had no chance of finding a job or research funds.

Matt L
9 months ago

I like and have learned a lot from reading all over the history of philosophy, but at the same time, I have lots of sympathy with Collingwood’s remark that “The history of political theory is not the history of different answers to one and the same question, but the history of a problem more or less constantly changing, whose solution was changing with it.” Insofar as that’s right for some area of philosophy (I don’t think it’s 100% right for any area, nor 100% wrong for any) then it seems to me to put a limit on the validity of the claim made here. Of course, it’s not that we can’t learn from the past, but it would also be a mistake to assume that people in the past are trying to answer the same questions that vex us, and if they are not, if we are not to be “mere scholars”, we can sometimes learn more by, if not forgetting, then at least not focusing on the past.

Charles Pigden
Reply to  Matt L
9 months ago

I would say that the history of political philosophy, and the history of philosophy in any given domain, consists of different answers to a set of historically related questions that are not as dissimilar as Collingwood tends to make out. Philosophers of the past have different starting points and often face different problems, but once you make the effort to understand them, what they are doing often makes sense, even if it is sometimes rather repellent. It is not very often that I experience that sense of alienness that others obviously feel when confronted with philosophers of other times. Indeed I am often struck by the way that people, often from widely separated cultures, when faced with similar problems make similar moves. Perhaps this is why I find that reading dead philosophers (both the great and the not-so-great) is often intellectually stimulating.

To use Collingwood’s own analogy, the difference between (say) Sextus Empiricus and Descartes in skeptic mode is a lot less than the difference between a trireme and a seventeenth century ship of the line.

Shay A Logan
9 months ago

“A mountain of literature was produced—shiny new buildings on the edge of town—while Buridan’s Treatise on Consequences sat unread.”

The fact that a wheel was reinvented does not mean we did something inefficient. It is sometimes the case that reinventing a wheel is a silly thing to do. But I don’t think Buridan (or medieval logic more generally) is such a case.

I’m going to grant (because I think it’s actually true) that there are bits of logic that we put lots of effort into learning that we could have learned by instead reading Buridan. It does not follow from this that we went about learning these bits of logic in an inefficient way. To judge whether it was inefficient, we have to evaluate the overall costs of the two ways of coming to the knowledge that we could have found in Buridan that are on the table.

Consider the costs *for me*, then. I’m a logician. Let’s suppose (again, because I think it’s likely true) that there are logical things I could learn from Buridan. To learn them, I’d first have to learn to read medieval latin. I’d also probably have to develop archival skills and translation skills and whatnot. Alternatively, I could use the skills I’ve already got and some elbow grease to get there on my own.

In almost all cases, the second way is likely (again, for me) to be more efficient. The years and years it would take me to be able to (really, seriously) read Buridan are almost certainly better spent just doing the logic. Are there exceptions? Probably! But I’d expect that they’re pretty rare because I can get *a lot* done in the amount of time it would take me to learn to read Buridan.

The point I’m trying to make is that learning to read Buridan in a serious way is very very expensive and this cost has to be included when making claims like the one quoted at the top.

(It’s also worth noting that the skills involved in (re)inventing and the skills involved in discovering lend themselves to extending things in very different ways. If I abandon being a logician and spend the next half-decade plus learning to be a historian and *then* encounter some neat thing in Buridan, my instincts about what to do with it are going to be very different (note: not inferior, just different) than what I’ll do with it if I get there by plugging away with my logician’s toolkit. So it seems likely to me that both having folks remembering and having folks reinventing is going to quite often be a good idea.)

Shay Allen Logan
Reply to  Shay A Logan
9 months ago

I see I’ve reinvented some of Kenny’s points here. Not sure if this is one of those cases where it was a good thing.

Preston Stovall
Reply to  Shay A Logan
9 months ago

Consider the costs *for me*, then. I’m a logician. Let’s suppose (again, because I think it’s likely true) that there are logical things I could learn from Buridan. To learn them, I’d first have to learn to read medieval latin. I’d also probably have to develop archival skills and translation skills and whatnot. Alternatively, I could use the skills I’ve already got and some elbow grease to get there on my own.

The point I’m trying to make is that learning to read Buridan in a serious way is very very expensive and this cost has to be included when making claims like the one quoted at the top.

Agreed with what you say about the two poles of remembering and reinventing styles of research, and that there’s a trade-off between being a historian of logic and being a logician. But one can “read Buridan in a serious way” without reading him in Latin, so the trade-off isn’t exclusive, no?

Apologies for the self-insert, but I learned a lot from reading Paul Vincent Spade’s online book on medieval logic when I was a graduate student, and then by reading widely but leisurely in primary sources and the secondary literature in the history of logic, and with few exceptions in anything other than English. My research has been substantially informed (I pick that phrase carefully, though tongue-in-cheek) by a reading across this literature, and today I publish on topics in model theory and proof theory — which, I maintain, are the precisified analogues to theories of denotational and connotational semantic content that stretch back at least to medieval Supposition Theory. I’d say what I’m doing counts as reading seriously in this literature, and in a way that adds breadth and depth (to use terms associated with Aristotle’s discussion of these two kinds of meaning) to my work. There’s also a sense in which I’m avoiding re-inventing a wheel with this reading of the history of logic, but I’ll set that aside.

So while there is no doubt a cost associated with devoting time to studying the history of logic, measured against what one might otherwise gain from sticking to the current and recent literature, there are ways of reading the history of logic that contribute to the current literature without requiring (e.g.) that one learn Latin.

Finally, for whatever it’s worth, I don’t share Douglas’s assessment that the history of philosophy was so neglected by early analytic philosophers (with a few exceptions), or that it’s so neglected now — and as Benj and Charles Pigden point out above, it’s certainly not true of early analytic philosophers in central Europe, or of the generation that produced C.I. Lewis in the U.S.

Shay A Logan
Reply to  Preston Stovall
9 months ago

Ok, fair. Maybe complaining specifically about learning Latin is taking things too far.

Still, I’d imagine you’re willing to grant that the ability to read this sort of thing in a way that’s fruitful is a pretty non-trivial thing to learn. Even translated, medieval scholarship is tough stuff. Learning to read it seriously requires a fairly large amount of work.

Again, I don’t mean any of this to suggest that reading the medievals ought not be done or is a silly thing for logicians to do. I just chafe at the claim that since we could have found thus and such by reading (e.g.) medievals, reinventing it was somehow wasted effort. Putting absolutely everything else aside, that only follows if the work reinvention asks of us is, in sum, greater than the work finding it in the medievals asks of us and I think very often that’s just not the case. Sometimes it just really is more cognitively efficient to reinvent things.

Preston Stovall
Reply to  Shay A Logan
9 months ago

Agreed that learning to read the medievals on logic is a non-trivial thing, no matter the language, and no doubt it’s sometimes more efficient to reinvent what they may have already discovered. But of course contemporary work can often require similar effort, particularly when one is starting out; I spent the better part of a month one summer at the start of my dissertation just reading David Lewis’s Counterfactuals, which is as short and concise a discussion of the possible-worlds interpretation of counterfactual conditionals as one could want. It was tough going, and I had to put other things aside for a while, but doing so both facilitated my understanding of the topic in a way that was much more efficient than attempting to work it out myself, and helped me get a better sense of the contemporary landscape surrounding so-called “intensional semantics”, where “intension” has become co-extensional with “function from world to extension”.

At the same time, while I was reading across the history of logic (again, mostly in English) I was beginning to appreciate how historically parochial this notion of an intension is, how it compares to pre-Carnapian notions of connotation, and how much the scholarly landscape on these topics has been shaped by the way possible-world semantics came together (particularly at UCLA in the late 1960s) and spread across philosophy, linguistics, and related areas in cognitive science over the next half century. There were costs to this reading, of course, just as there were costs to spending so much time on Lewis’s work. But it was no less important in my coming to whatever understanding I’ve had of these things. My sense is that, at least in philosophical logic and semantics, there’s a lot we have to learn from paying more attention to the history of the field.

Now of course in the contemporary case we anticipate that the scholarly literature surrounding a subject of research will, for the expert, be known well enough that one wouldn’t be so foolish as to go reinventing what’s already out there. A philosopher or formal semanticist interested in understanding counterfactuals today, but unaware of the large literature on this topic, wouldn’t be doing serious work if they set off to blaze their own trail. And that is because, no matter what area one works on, the expert is expected to be familiar enough with the surrounding literature as to know where to turn to bring oneself up to speed. And there is so much work being done at the forefront of logical research today that most people have quite productive careers without needing to have any but the most basic familiarity with the history of the subject.

Furthermore, as you say, it’s often simply more efficient to work out how some notion is or can be situated in the contemporary landscape, without regard for different historical traditions on the topic. Doing that historical work well requires immersing oneself in radically different conceptual frameworks, and that carries opportunity costs. Perhaps it is enough to have this kind of knowledge distributed across the field, with some people generally familiar with the history of logic by knowing at least the broad outlines of how it developed, what the choice points were for different research paradigms, and how those choices shaped subsequent research. And acquiring that familiarity doesn’t seem much more difficult than becoming familiar with a topic like possible-worlds interpretations of the counterfactual conditional, it seems to me.

At any rate, I’m basically thinking out loud now so apologies if this isn’t addressing what you meant to be saying, about which I think I’m in agreement.

Kenny Easwaran
Reply to  Shay A Logan
9 months ago

Mathematical work is an interesting case. Most mathematicians I talk to agree that “reading” a mathematical proof usually consists in sitting there mystified at some of the symbols the person has put on the page, going to a chalkboard and thinking about how you would go about trying to prove this, getting a start, and then looking back at the page and seeing that now you can understand that what the person wrote is exactly what you came up with. “Reading” and “reinventing” often aren’t all that separate in mathematics!

Of course, then you notice, “why does that person have a division by 2 in this formula?”, and realize you’re double-counting things, so reading really does speed your way through the reinvention. But you often can’t get going without some of your own reinvention.

When it comes to medieval logic in particular though (and many other areas of mathematics that have multiple applications) sometimes there’s value in a relatively naive reinvention, where you come to what amounts to the same subject in a different application. Because of the different source, you’re responding to different objections, and comparing it to different alternatives. When someone finally realizes the connection between the two, they can often make more progress because of the two independent inventions than someone who just discovers the old one and applies it to the new one without thinking through the issues that came naturally in their context.

Steve
9 months ago

There is some fascinating discussion in the comments here about the (alleged) inefficiencies of philosophy. One thing people haven’t picked up on, though, which is also interesting, is Burke’s comments on lost knowledge (such as the shorter entries on various people in the Encyclopaedia). I was initially convinced this was a bad thing; even if we think it’s not necessarily a bad thing to reinvent the wheel, it seems a good idea to keep the old wheels around. But, on reflection, I’m less certain; plausibly, there’s an upper limit on how much information an encyclopaedia (or any repository) can contain. So, it could be that the loss of, say, Plato-related information is a side effect of including other information, say on the medieval Indian tradition, given a total word limit. In that case, maybe the ‘loss’ would be outweighed by the ‘gain’. But I’m not certain what to think here, and have an awful feeling I’m probably retreading some Enlightenment era debate without realising, which would, in context, be rather ironic…

Kenny Easwaran
Reply to  Steve
9 months ago

People make the claim that Wikipedia has no word-count limit. But it does enforce a “notability” requirement on entries (even if at a lower bar than paper encyclopedias).

At least since the start of the Google era, it’s become clear that there’s a problem of not being able to find relevant information inside a huge collection, distinct from the problem of the information not being there, with similar effects, but opposite causes.

Steve
Reply to  Kenny Easwaran
9 months ago

Yes, I think that’s right. I mean, it’s worth thinking a bit more about Burke’s concern about shortened entries in the Encyclopaedia Britannica. In principle, there’s no reason why the entry on Plato (or Charles V, or whoever) couldn’t be shorter in the most recent edition, but the information still available somewhere in an online repository. In fact, I’d bet a lot of money that’s the case. So, strictly, his concern can’t be that the information is lost, but that it is lost from a particularly easy-to-access or high-profile location. But, given resource constraints, that’s a different sort of concern from just “losing” the information. I’m not saying his concern is wrong, but I do think it’s really interesting to think about this difference; at least in principle, it seems to me that there is a really large difference between, say, burning a pile of books and just placing them in difficult-to-access storage; on the other hand, I guess I can see how the two may seem functionally identical.

Marketeer
9 months ago

Of course, philosophy sometimes reinvents wheels in the pursuit of novelty in frustrating ways. At the same time, I want to note that there is value in working through a topic for oneself to one’s own satisfaction without wading through everything that has been or might have been said on the matter. Yes, this can lead to “discovering” insights that are in fact quite old. And yes, this can mean that your work will not speak to future generations, who are off thinking things through for themselves from within their own interpretative frameworks and problematics. But that’s OK. For me, much of the value and point of philosophy is to quell my own worries to the best of my abilities by thinking things through for myself — not to speak to the future nor necessarily even to speak from atop the shoulders of the giants of the past.

Marketeer
9 months ago

With apologies for double-commenting, I also want to note that I’ve yet again been deceived by a post title that pointed (at least, so I thought) in quite a different direction from that taken by the piece itself.

While reflecting on whether philosophers ought to be more historically informed and deferential to the past is valuable, I expected to read a different discussion entirely based on the title. When I hear “throwaway culture in philosophy,” I think of the stacks of papers that junior folks and grad students are now expected to publish in order to secure employment, many of which — and by no means excepting my own — are only marginally valuable (if that) contributions to debates several layers deep in the dialectic of an issue within a sub-sub-sub field, advancing views the authors themselves may not even find convincing, interesting, or valuable. In a field where we all know what “minimal publishable unit” means, this seems like a far more fitting issue to discuss in terms of throwaway culture.

Hugo Heagren
Reply to  Marketeer
9 months ago

A possibly diverting alternative viewpoint:

I, a graduate student, have recently published two papers which would probably count as:

only marginally valuable (if that) contributions to debates several layers deep in the dialectic of an issue within a sub-sub-sub field, advancing views the authors themselves may not even find convincing, interesting, or valuable

At the beginning of my grad career I was aware of this problem (in a slightly far-off way), in that I wanted to change the world and write Big Important Books, but was told that if I wanted a job I should start by publishing manageable papers as quickly as possible. I quickly realised that this is easiest far down inside a sub-field (both my papers are objections to others’ views).

Then I published this and this.

Looking back, I actually think these two papers are much more worthwhile than I-two-years-ago or you(-now) might think. Two things happened:

  1. In actually publishing them, I came to realise just how hard it is to write even a relatively straightforward objection paper. Getting them both from good but rough ideas to publishable, and from there to published, definitely made me a better writer. True, doing the same thing five more times probably wouldn’t have the same marginal effect, but these two definitely made a real difference.
  2. In researching them, I read many such ‘marginal’ papers, and they were useful. Not everything I read made it into the bibliography, but the minutiae of work in these areas often helped me sharpen the questions I was asking.

Sure, neither of my papers will end up in a collection of seminal early-21st-century philosophy. But: w.r.t. 1, we could take the value of papers like this to be as a sort of training exercise for grads and early-careers. You have to learn somehow, why not like this? W.r.t. 2, these papers are worthwhile, just not in a huge paradigm-shifting way. Sometimes you don’t need a big, sweeping idea, you need something really specific, and the marginal papers are good at that. I fully expect that the papers I published will be of the same (small but meaningful) use to others in the field.

Food for thought, I guess.

Charles Pigden
Reply to  Hugo Heagren
9 months ago

The problem with this argument, Hugo, is that although it provides a good reason for writing the kind of paper under discussion, it doesn’t give a reason for anyone else to either read it or publish it.

Charles Pigden
Reply to  Marketeer
9 months ago

Okay, turning to the topic of papers that deserve to be thrown away, the phenomenon that Marketeer describes is obviously undesirable. So the question is: what are the incentive structures and/or internalised norms that drive this kind of intellectual overproduction? Here are some suggestions.

1) Graduate students and recent PhDs have to maximise their publications, preferably in leading journals, if they are to have any hope of getting a job. They must publish or perish. 

2) People think that the best way to do this is to produce tight little papers in their narrowly defined research niches, papers which are ‘only marginally valuable (if that) contributions to debates several layers deep in the dialectic on an issue within a sub-sub-sub field, advancing views the authors themselves may not even find convincing, interesting, or valuable’.

3) This is the kind of paper that people at (North American) graduate schools are often taught – or at least encouraged – to write.

4) The widespread belief that the best way to maximise one’s publications is to write this kind of paper is in fact true, since these are the papers most likely to get accepted.

Thesis 1) is almost certainly correct. Furthermore, it isn’t clear that we can do anything much about it. The alternative to shortlisting people on the basis of their publications is to shortlist them on the basis of imponderables such as promise or potential, which will often boil down to the prestige of the candidates’ institutions or letter-writers. This would make the thing even more obnoxiously elitist than it is already, with a classist bias in favour of those who can afford to get into the top schools. HOWEVER, a possible way to fix or at least minimise the problem would be to adopt a convention whereby job candidates were ranked on the basis of the abstracts of their top two papers. This would reduce the incentive to privilege quantity – even prestige-of-venue-adjusted quantity – over quality. Search committees would then sift through candidates on the basis of two abstracts each, which would make the shortlisting process reasonably manageable even with a couple of hundred submissions. This would tend to privilege non-boring and non-trivial papers, since these would be the ones that would catch the search committees’ eyes. But this convention would have to be adopted universally to achieve the intended result. [Please note: I have changed my mind about this, having previously been in favour of venue-adjusted and time-from-the-PhD-adjusted quantity as the chief criterion for search committees when trying to arrive at a shortlist.]

Somewhat to my surprise, thesis 2) is also correct. When I point out that I have had a reasonably successful career publishing on a wide range of topics outside my then-AOSs, what I get told is that that was then and this is now, and that times have changed. Young philosophers nowadays couldn’t get away with the kind of adventurous paper that I managed to publish all those years ago. This widespread response does not prove that thesis 4) is correct and that turning out tight, dull little papers is the best way to get published, but it does show that this claim is widely believed.

Notice that factors 1) and 2) when taken together are sufficient to generate the phenomenon. If people rightly think that they must publish or perish, and if they believe that turning out dull little throwaway papers is the best way to get published, then they will turn out dull little throwaway papers.  

This is all the more likely if 3) is correct and this is the kind of paper people are taught to turn out. And from what I am reading on the blogs, 3) is indeed correct. There are no doubt exceptions, but clearly many young philosophers feel that narrow, niche-confined papers are the sorts of papers that they are supposed to write. Now, I have long suspected that the North American system of taught PhDs sometimes has an infantilising effect, encouraging a culture of timidity and subservience. Narrow, unadventurous little papers are a likely consequence. Thus a radical solution might be to abolish the taught PhD in favour of less time-consuming thesis-only doctorates, which tend to foster a more independent turn of mind. However, I have argued for this before on Daily Nous (see Why Do Philosophy PhD Programs Even Exist?) and have been met with a deafening silence, so I assume that this is not a goer. Thus the only remedy that I can think of is to somehow change the research culture in the delinquent departments. However, this is unlikely to happen if 4) itself is true.

Is 4) true? Perhaps. I have sometimes recommended papers on the grounds that they were competent and reasonably well-written pieces well up to the usual standards of the distinguished journal in question, but in making my recommendation I was led to doubt the standards. As I put it once in a referee’s report: ‘My worry is that if this is a paradigm case of a good philosophy paper, then there is something radically wrong with the paradigm.’ How to fix this? Well, one solution might be for referees and editors to adopt the convention that it is okay to reject a paper simply because it is boring or, more specifically, because it is ‘an only marginally valuable (if that) contribution to a debate several layers deep in the dialectic on an issue within a sub-sub-sub field, advancing views the author themself may not even find convincing, interesting, or valuable’. A criterion for rejection might be that the reviewer would not have bothered to read it if they had not been obliged to do so in their capacity as a referee. If such a criterion were applied, there would, I suspect, be a massive uptick in rejections. However, if it were combined with the convention for search committees suggested above, it would incentivise young philosophers to write fewer but more interesting papers.

So accepting that Marketeer is right about the problem, this is my tentative solution. 

A) Search committees generally should adopt the convention of ranking candidates on the basis of the abstracts of their two top papers, ignoring or downplaying the quantity of publications, even if this is adjusted for prestige-of-venue and time-out-from-the-PhD.
 
B) Editors and reviewers should feel free to reject dull papers simply because they are dull. 

Preston Stovall
Reply to  Charles Pigden
9 months ago

I’d love to read a philosophy journal where this was weighted heavily by editors and reviewers:

A criterion for rejection might be that the reviewer would not have bothered to read it if they had not been obliged to do so in their capacity as a referee. If such criterion were applied, there would, I suspect, be a massive uptick in rejections. 

Marketeer
Reply to  Preston Stovall
9 months ago

Likewise. I think there’s a real appetite for that…among readers. The problem would be getting people to submit their work to such a journal for all the predictable market-related, tenure-related, and prestige-related reasons.

Michael Kremer
9 months ago

I am all for the value of studying the history of philosophy. But one should not forget the actual historical conditions under which philosophy until recently was produced.

Thus, there is something strangely anachronistic in the argument about the early analytic logicians neglecting Buridan. The argument assumes that Buridan’s text was available at the time to be studied by those logicians… but was it?

As far as I can tell it wasn’t translated into English until the 1980s. Furthermore, the first critical edition of the Latin text appeared in 1976. The introduction to that edition (in French) makes clear how difficult it was even in the 1950s to access the medieval logical tradition. In particular, the only texts of Buridan’s treatise available at that time were little-known codices found in libraries in Florence, Liège, and the Vatican, and a 1495 incunabulum (early printed edition) which also appears to be rare (WorldCat shows 14 libraries that hold it worldwide, but many of these turn out to be links to a digitized version, which of course did not exist in the early 20th century).
You can read the introduction here: https://books.google.ca/books?id=Q3V2nQEACAAJ

Alexander Douglas seems to be assuming that Russell, Whitehead, Carnap, or C I Lewis could have just gone to their local libraries and read Buridan. That does not seem to have been the case.

Charles Pigden
Reply to  Michael Kremer
9 months ago

Quite so. Late 20th- and early 21st-century logicians, who are not themselves historians, can easily acquire the degree of historical literacy that they need to know where to look for ideas and inspiration because of pioneering histories of logic such as Bochenski’s A History of Formal Logic and Kneale and Kneale’s The Development of Logic. (This echoes Preston Stovall’s point that nowadays a logician does not need to devote all that much effort to getting up to speed with Buridan or indeed many another.) But these pioneering histories were not published in English until 1961 and 1962 respectively. And it is not clear that there was anything very useful in German either. Prantl’s bizarrely hostile chapter on the Stoics may have inspired Frege, but Bochenski is damning about the overall quality of his history.

‘All his comments on these logicians are so conditioned by the prejudices we have enumerated, are written too with such ignorance of the problems of logic, that he cannot be credited with any scientific value. Prantl starts from Kant’s assertion, believing as he does that whatever came after Aristotle was only a corruption of Aristotle’s thought. To be formal in logic, is in his view to be unscientific. Further, his interpretations, even of Aristotle, instead of being based on the texts, rely only on the standpoint of the decadent ‘modern’ logic. Accordingly, for example, Aristotelian syllogisms are misinterpreted in the sense of Ockham, every formula of propositional logic is explained in the logic of terms, investigation of objects other than syllogistic characterized as ‘rank luxuriance’, and so of course not one genuine problem of formal logic is mentioned. While this attitude by itself makes the work wholly unscientific and, except as a collection of texts, worthless, these characteristics are aggravated by a real hatred of all that Prantl, owing to his logical bias, considers incorrect. And this hatred is extended from the teachings to the teachers. Conspicuous among its victims are the thinkers of the Megarian, Stoic and Scholastic traditions. Ridicule, and even common abuse, is heaped on them by reason of just those passages where they develop manifestly important and fruitful doctrines of formal logic.’

Alex Douglas
8 months ago

I wrote that post very quickly and could have thought it through more. It’s probably the worst post on my Substack. The example about consequence relations was off-the-cuff; I didn’t think the specific example mattered so much.

The lesson, I guess, is that I should never underestimate the capacity of academic philosophers to find the least interesting point to obsess over.

Aaron Garrett
Reply to  Alex Douglas
8 months ago

You’ve learned a valuable lesson!

Alex Douglas
Reply to  Aaron Garrett
8 months ago

Indeed, and I’m always grateful for knowledge!