Progress & New Ideas in Philosophy

“In what state do we find the research produced in academic analytic philosophy?… Things are better than they’ve been for eighty years or so.”

[“Tartan Ribbon”. Photo by Thomas Sutton, method by James Clerk Maxwell]

That’s Richard Pettigrew, professor of philosophy at the University of Bristol, pushing back against pessimism about philosophy (see here for example, or here) in a recent post at his blog.

Why think, as he does, that philosophical research today is of “very high average quality”? He says:

Partly, that’s because analytic philosophers are more often drawing on, incorporating, and showing proper respect towards the insights of traditions with which they’ve less often interacted in the past, taking their arguments and frameworks into account in the way they’ve long taken scientific findings and arguments and frameworks into account: traditions such as classic Chinese philosophy, critical theory, black feminist thought, and many more. Partly it’s because those who got their doctorates in the past decade or so have brought a slew of new questions to the table. And partly, I would say, it’s because the work done using the methodology the discipline has always used and on the questions it’s always tackled is currently extremely well done.

Pettigrew has some interesting defenses of some of the aspects of philosophical research we tend to hear a lot of complaints about, such as referee-proofed papers and epicyclical papers (“the sort of paper that takes a previous attempt to answer a question, notes that it doesn’t quite work, perhaps because it fails to cover a particular case that’s offered as a counterexample, and then proposes a new answer that is very closely related but incorporates some small amendment that allows it to account for that case”).

Of particular value in his post, though, were his reflections on what, in the areas in which he's an expert, strike him as philosophical progress and philosophical novelty.

Consider this an invitation to share your thoughts about progress or novelty in your areas of specialization.

Pettigrew writes:

Progress: is philosophy making any? We certainly aren’t establishing many unconditional facts with certainty, as we might think mathematics is, nor even with high credence, as you might think biology or ecology or linguistics is. But then it’s hard to see how we ever could. What premise that could ground a philosophical argument attracts widespread assent? There are no undisputed foundational premises in our discipline. And yet I think we are making profound and substantial progress in understanding issues.

Take the area I know best, which is epistemic utility theory or accuracy-first epistemology. The idea is that the sole fundamental source of epistemic value is accuracy or proximity to truth, and all norms that govern what we should believe can be derived from this. In the last twenty five years we’ve come to understand a huge amount about which norms might be established in this way and which might not, and that in turn has shed light on these norms, because one comes to look differently at a norm of belief when one sees that it can’t be shown to further the goal of accuracy.

Or take another area about which I’m beginning to learn: welfare ethics. Building on Harsanyi’s groundbreaking theorem, and proceeding by adding what we might call epicycles, we’ve gained remarkable insight into what principles about group decision-making under uncertainty one must reject if one wishes to reject utilitarianism. We now know in much greater detail the costs incurred by egalitarianism and prioritarianism, for instance.

In neither of these cases—accuracy-first epistemology or welfare ethics—have we established any unconditional facts with any degree of certainty. It is always open to people to reject veritism and it is open to them to reject the principles that give utilitarianism, and it’s hard to see how we could ever show they are irrational for doing so. In the former case, though, we’ve learned what we get if we accept it, and in the latter we’ve learned what we must be prepared to sacrifice if we reject utilitarianism. And this is progress. It’s substantial progress, I think. It’s the sort of progress we can make on these questions.

Finally, novelty: is there enough? Perhaps different areas have different amounts, but in epistemology, I’d say there’s a great deal, much of it generated by the very people who are typically taken to be under pressure from the publication system to keep ploughing the same furrows as their predecessors, namely, people who completed their doctorates in the past ten years and don’t yet have tenure, if they’re in the US system. It’s awkward to name names because it sounds like you think those you don’t list aren’t innovative, so let me be clear that this is entirely off the top of my head and reflects more what I’ve been working on myself recently than any judgment about people I don’t list. But let me mention Julia Staffel’s work on degrees of irrationality and transitional attitudes, Rima Basu’s work on the moral dimension of beliefs, Georgi Gardiner’s work on attention, Jane Friedman’s work on inquiry, Kevin Dorst’s work on rationalising apparent irrationalities, Amia Srinivasan’s work on identifying political implications of what seem like purely theoretical epistemological positions, and almost anything Rachel Fraser writes, but particularly her work on narrative testimony. Many of these are opening up whole new research programmes. I can’t think of another ten year period since the 1940s in which epistemology welcomed so many new avenues for research.

Pettigrew’s whole piece is here.

What are some examples of progress and new ideas in the areas of philosophy you work in?


David Wallace
28 days ago

In philosophy of physics (with the caveat that this is focused on the areas I’m active in, and obviously builds in my biases):

– philosophy of quantum mechanics remains very focused on the measurement problem, as it has been since the 1990s. The debate between interpretations reached a bit of an impasse around ten years ago but is starting to move again, with a lot of focus on the methodological and philosophy-of-science presuppositions behind various approaches. It’s become increasingly clear that the different approaches to solving the measurement problem are associated with very different takes as to what it is to interpret a scientific theory and what scientific realism even is – the divorce between philosophy of the specific sciences and general philosophy of science is being reversed in contemporary philosophy of quantum mechanics. Scientific realism is less unquestioned than it used to be and people are more willing to explore ways to preserve the quantum formalism, even at the cost of realism. And of course – here I indulge my own preferences – the Everett (many worlds) theory has moved from being reflexively dismissed as science fiction, to being taken very seriously as a contender for how to understand physics.

– philosophy of spacetime has largely merged with philosophy of symmetry. The subject has become a lot more pluralistic as to what mathematical formulations of a theory are optimal, and the hole argument (which was quiescent in the 2000s) has returned as a major area of attention since Jim Weatherall’s intervention. The dynamical approach to spacetime structure advocated for many years by Harvey Brown has become much more directly engaged with by people in the field, especially since Eleanor Knox’s spacetime functionalism.

– philosophy of statistical mechanics, which is really only thirty years old as a serious area of specialization in philosophy of physics, has established probability and the arrow of time as its central organizing problems (along with inter-theoretic reduction, which has a longer pedigree). The (so-called) Boltzmannian approach became dominant in the early 2000s but is arguably falling from grace, with more interest in the (so-called) Gibbsian alternative, and more willingness to question that whole Boltzmann/Gibbs dichotomy, both as history and as an organizing principle for the field.

– philosophy of quantum field theory, which is scarcely twenty-five years old, has broadened a lot from its earlier focus on algebraic quantum field theory. There is still excellent work in philosophy of algebraic quantum field theory but people (including algebraic fans) are more aware of its limitations and there is more attention to the sort of quantum field theory that mainstream physics uses, and to the relations between quantum field theory and statistical mechanics/condensed matter physics. The history of quantum field theory is a burgeoning area in HPS.

– philosophy of astrophysics/cosmology and philosophy of quantum gravity have established themselves as substantive sub-fields.

– more generally, when I entered the field twenty-plus years ago I’d say that it was (a) very committed to the centrality of mathematical rigor, and (b) to a quite large extent the philosophy of *dissident* physics. For better or worse (I’d say: better) the field has weakened in both (a) and (b) and is much more in communication with physics proper than it used to be.

These are good times for a young subject. Philosophy of physics is making lots of progress and spawning fruitful conversations with metaphysicians, epistemologists, and physicists.

27 days ago

Whether it counts as progress I do not know, but there is a lot of interesting work being done on scientific practice. Of course we have people like Hasok Chang, Sabina Leonelli, Rachel Ankeny, and Hanne Andersen to thank. They were instrumental in getting the Society for Philosophy of Science in Practice going. The area I am most familiar with is the vast (interesting) literature on scientific collaboration and authorship. Hanne has investigated the effects of inter-disciplinary research in science on authorship practices (and the normative implications). Others, like Quill Kukla, et al. have examined very large “team” projects, where authors are widely distributed geographically, and the challenges this raises for securing our knowledge claims. Josh Habgood-Coat has made a very novel proposal about re-conceptualizing scientific authorship along the lines of movie credits, where every contributor is acknowledged. Haixin Dang is doing interesting work on collective authorship. Liam Kofi-Bright and Remco Heesen have raised concerns about refereeing in science. I have been doing work on retractions in science, in collaboration with Line Andersen. Importantly, we are all trying to come to grips with the many normative issues that affect scientists in their pursuit of knowledge. Clearly, some progress has been made.

Preston Stovall
21 days ago

I was hoping there would be more examples given here. My experience confirms Pettigrew’s assessment: the literature is filled with top-notch research today, and it’s an exciting time to be reading philosophy. I’ll mention two areas I’m familiar with that are breaking new ground and paving the way for further work: one at the boundary of cognitive science and the philosophies of language and mind; the other in proof-theoretic semantics.

Philosophers and cognitive scientists today are drawing together a synthetic account of human language use, human norm psychology and our ability to frame and follow rules, human capacities for higher-order and shared mental states, and the construction of common points of view and cumulative cultural development. If we focus only on book-length discussions from the last ten years, in the scientific literature this includes: accounts of the development of natural language on the basis of theories about human sociality, human emotion, and higher-order thinking in Dor (The Instruction of Imagination) and Jablonka and Lamb (Evolution in Four Dimensions); Mercier and Sperber’s theory that human reason is a cognitive module selected for not in the interest of individual ratiocination, but on the basis of its role in public disputation over the reasons we have (The Enigma of Reason); and Michael Tomasello’s explanation of the evolution and ontogeny of human cognition on the basis of the evolution and development of more sophisticated kinds of shared intentionality (Becoming Human).

Turning to the philosophies of mind and language, and again considering just books published in the last decade, this scientific research is being drawn on by figures like Ladislav Koreň (Practices of Reason), Mark Okrent (Nature and Normativity), Jaroslav Peregrin (Normative Species), Joseph Rouse (Social Practices as Biological Niche Construction), Stephen Turner (Cognitive Science and the Social) and myself (The Single-Minded Animal); in The Language Animal, Charles Taylor argues that a social-practice account of human cognition along these lines develops out of 17th, 18th and 19th-century British and German philosophy, and makes its way into 20th and 21st century philosophy in the work of Wilfrid Sellars and Robert Brandom.

Additionally, research in proof-theoretic semantics over the last decade has begun to consolidate around a research paradigm that offers a stark alternative to the model-theoretic, representational semantic paradigm that coalesced around work taking place in philosophy and linguistics at UCLA in the late 1960s and early 1970s, where at the time you had Rudolph Carnap, Alonzo Church, Keith Donnellan, Donald Kalish, Hans Kamp, David Kaplan, David Lewis, Richard Montague, and Barbara Partee. Montague’s use of Church’s type theory and the lambda calculus allowed him to deploy Carnap’s intensional semantics for interpreting natural language sentences. This program caught on in places like MIT, Harvard, and Princeton in the 1970s and 1980s, and the apparent need for so-called “hyperintensional” theories of meaning was investigated in the 1990s and 2000s; today this paradigm remains dominant across the philosophies of language and mind, linguistics, and related areas in the philosophy of perception and cognitive science.

What proof-theoretic semantics illustrates, however, is that much of this work, and the very need for “hyperintensions”, is an artifact of the notion of “intension” that Carnap introduced when he adopted Leibniz’s term as an Ersatz for Frege’s notion of sense, understood in terms of functions from state descriptions (worlds) to extensions. See my “Essence as a Modality” for an indication of how blinkered some of the contemporary debate has been by what we might call the UCLA-MIT approach toward linguistic meaning. Nissim Francez’s Proof-Theoretic Semantics shows that a Montagovian approach toward linguistic meaning can be founded on proof theory, and he illustrates how to carry out such a program for a range of classical and non-classical logics, and for a fragment of natural language. Jaroslav Peregrin’s Inferentialism offers an overview of a program of this sort, and Ulf Hlobil and Robert Brandom’s Reasons for Logic, Logic for Reasons will be another source of insight into this tradition. I would also recommend essays by Derek Baker and Jack Woods, Matej Drobňák, Luca Incurvati and Julian Schlöder, Bartosz Kaluziński, Jon Litland, and Ryan Simonelli. It’s an exciting time to be working on foundational issues in the theory of meaning.

Peter Jones
21 days ago

I feel that what Pettigrew calls progress is profoundly unimpressive, more like fiddling around at the margins. Philosophy in the academic world is a sorry affair. It seems most of those working in the field have given up on solving any important problems. It’s an odd thing that these problems don’t show up in the Perennial philosophy, but it seems most philosophers would rather bang their heads against a brick wall and wonder why they cannot make any progress than study it.