The Demand for “AI & Philosophy” Hires & Expertise — and Its Precedents


Over 20 jobs advertised this season at PhilJobs: Jobs for Philosophers list philosophy related to artificial intelligence (AI) among their desired areas of specialization or competence.

[image made using DALL-E]

There are questions about AI relevant to many subfields of philosophy—ethics, philosophy of science, philosophy of technology, philosophy of mind, philosophy of art, etc. The topic is a hot one in the broader culture owing to the development and popularization of large language models (LLMs) like ChatGPT and other machine-learning-based products and services. Administrators want departments to ride the topicality of the subject in pursuit of enrollments and research dollars. And private industry and government agencies are increasing their funding for AI-related research.

So it’s no surprise that we’d see an increase in the number of philosophy positions to be filled that have something to do with AI. But are there enough people specializing in this area to take up these positions?

Additionally, it seems like more and more is being written on questions concerning philosophy and AI across a range of subfields. So we might ask: are there currently enough experts in these areas for the research being produced to be adequately vetted and peer-reviewed?

These questions were prompted by a reader who, in an email, expressed curiosity as to whether, and if so when and for what areas, demand for philosophical expertise has outstripped supply.

He writes:

I’m wondering whether the apparent AI hiring craze in philosophy over the last few years has any precedent in the discipline. It seems like very few people have written dissertations on AI, and yet there have seemingly been more jobs advertised over the last few years for folks working in AI than in most other sub-areas. Are people with not-very-extensive training in AI ending up in these positions? That seems highly unusual. Relatedly, a friend of mine with no (direct or even indirect) experience in AI is getting requests from top journals to review AI submissions. Is work in AI really being reviewed by non-experts? That also seems highly unusual. Has anything like this happened over the last 70+ years in academic philosophy? What should we as a discipline think about the current situation either way?

One way to approach the precedent question would be to look through some data (see, for example, this post on areas of specialization from last year’s job market). Another might be to recall which areas of specialization it has been especially difficult to hire in (and when).

Discussion—on both the demand for AI and philosophy experts and the question of precedents—is welcome.

 

 

Chaz G Paterfamilias
5 months ago

The field of philosophy of A.I. has been growing in importance and popularity in recent years, as the ethical and philosophical implications of artificial intelligence and related technologies have become increasingly prominent. However, whether there are enough people with specialized expertise in this area to fill all the advertised job positions depends on various factors, including the specific requirements of the positions, the geographical location of the job openings, and the level of expertise and experience that employers are seeking.
Here are some considerations:

  1. Academic Programs: Many universities now offer courses and programs focused on the philosophy of A.I., which can help train a new generation of philosophers with expertise in this field.
  2. Interdisciplinary Nature: The philosophy of A.I. often involves interdisciplinary work, where philosophers collaborate with computer scientists, ethicists, psychologists, and other experts. This can expand the pool of potential candidates for such positions.
  3. Demand and Supply: The demand for specialists in the philosophy of A.I. may vary by location and institution. Some universities and research institutions may be more active in this area than others.
  4. Expanding Interest: As the field of A.I. continues to advance and raise more ethical and philosophical questions, it is likely that more philosophers will be drawn to this area, increasing the pool of potential candidates over time.
  5. Remote Work and Consultation: In some cases, employers may be open to hiring experts as consultants or for remote work, which can widen the pool of available talent beyond the local area.

In summary, while there may not be as many specialists in the philosophy of A.I. compared to more established areas of philosophy, the field is growing, and there are likely enough individuals with the necessary expertise to fill many of the advertised positions. Employers may need to be flexible in their expectations and consider candidates with related expertise who can adapt to the unique challenges of the philosophy of A.I. Additionally, collaboration with experts from other fields may be a valuable strategy for addressing the interdisciplinary nature of the subject.

AI adjacent
5 months ago

Shameless plug:

This is why a program like the Center for AI Safety’s (http://safe.ai) Philosophy Fellowship program is so valuable, since part of its aim is to train philosophers with a variety of backgrounds, including no background at all, on the current state of AI safety. Programs like these are helping to fill an obvious expertise gap in philosophy.

Patrick Lin
Reply to  AI adjacent
5 months ago

Is that center and program still solvent? I thought they were supported mainly by Sam Bankman-Fried’s FTX Future Fund, which had to cancel most of its grants:

https://www.reuters.com/technology/collapse-ftx-deprives-academics-grants-stokes-fears-forced-repayment-2023-04-06/

Unclear how they’re funded now and at what level. Does anyone know?

William D'Alessandro
Reply to  Patrick Lin
5 months ago

I was a Philosophy Fellow at CAIS (an extremely enlightening and productive experience whose like I wish there were more of). Open Philanthropy came forward to fund the fellowship after the FTX collapse. My understanding is that there won’t be a sequel, sadly, at least in the foreseeable future. I don’t know whether that’s for monetary or other reasons. CAIS’s core operations are still funded (largely by OP and related grantmakers, I think, although they’re looking to diversify their sources).

Patrick Lin
Reply to  William D'Alessandro
5 months ago

Thanks for the insight, William! I had met one of the founders/execs shortly before the FTX collapse, and they looked to be a good outfit. Hope they make it.

Ian Olasov
5 months ago

I’m glad to see a subfield of philosophy thrive, especially one that stands a reasonable chance of influencing other fields and public policy and discourse. To be perfectly honest, though, the sudden uptick in AI jobs feels reactive to a fault. To put it a bit sharply, departments are effectively making multi-year or multi-decade commitments just because a phenomenon that raises some philosophical questions is in the news right now. These questions are important and under-theorized, to be sure! But the fact that departments either don’t otherwise have plans to expand, or are scrapping them in response to some headlines, is not a good sign. At least that’s where my thinking is at now – I’m willing to change my mind if someone shows that’s not what’s really going on.

sahpa
Reply to  Ian Olasov
5 months ago

I don’t know much about how decisions get made at the upper levels of a university, but I would expect initiatives that span multiple departments, decades, etc. to not have been ginned up just because AI is ‘in the news right now’. It seems more likely that they’ve been getting signals for a while now (years) that we’re looking at a genuinely significant socio-technological shift.

NDA signer
Reply to  sahpa
5 months ago

After working for years at a small college, most of them as the head of the philosophy program and so looped into administrative decision making more often than I wanted, I can say that, at least in my case, administrators chasing fads in the news was a big part of what they called strategic planning. In our case the fundamental problem was that almost no one in administration had ever been an academic to any degree before becoming an administrator and consequently had very little idea what people worked on or how we worked. Often they made up for it by looking for disciplines showing up in mass media. They also were hyper focused on enrollment concerns and figured that if something was in the news quite a bit then it could be sold to prospective students. If they hadn’t gotten rid of the philosophy program (along with my position) I bet I would have had a meeting this semester about whether there was a way for us to contribute to a certificate program related to AI in conjunction with our computer related programs.
The experience is probably different at larger schools where the bureaucracy is larger and slower to move on things.

Marc Champagne
5 months ago

Supposing that nothing like this job surge has happened in the past, wouldn’t the development match the fact that, when it comes to the advent of AI and machine learning, nothing like this has happened in the past? If AI is on the verge of taking off like a hockey stick graph, then it seems unwise to spread the openings over decades, as with other perennial AOSs. This is an all-hands-on-deck moment, if ever there was one.

Applied Ethicist
Reply to  Marc Champagne
5 months ago

The talk about a “hockey stick graph” is brilliant marketing hype. It’s a dangerous distraction from issues that actually matter.

We are in an all-hands-on-deck moment, but not about “artificial intelligence.”

https://www.theguardian.com/environment/2023/mar/20/ipcc-climate-crisis-report-delivers-final-warning-on-15c

Kenny Easwaran
Reply to  Applied Ethicist
5 months ago

Climate change just isn’t as closely connected to so many core philosophical issues as artificial intelligence is. Climate change has some important and interesting connections to social/political philosophy, ethics, and decision-making under uncertainty, but artificial intelligence has all of these, and also has important connections to issues like the nature of language, free will, personal identity, consciousness, representation, perception, and others – and the ethical and social/political issues it touches on are of several different varieties.

Climate change is likely still a more prominent topic of hires for departments of chemistry and urban planning and civil engineering than AI, but philosophy understandably has more to say about AI than about climate change.

G H
Reply to  Kenny Easwaran
5 months ago

I disagree with this. If environmental destruction (not “climate change”) is as bad as the scientific research says it is, then AI is only useful and interesting insofar as it can help us improve the world – including helping policymakers clean it up and find ways to take better care of our planet. Otherwise, it’s just another toy to place on the fire. Another human invention that we played with while distracting ourselves into our own demise.

To me, it is silly to say philosophy has more to say about AI than environmental destruction. That would be yet another reason for someone to say that philosophy doesn’t help with real world problems. Environmental destruction is an existential problem that has already been created. AI is a *potential* problem that we are currently creating, and objectively not as important as our earth. Without humanity, there is no AI. Without our ability to generate electricity, there is no AI. Without mining precious resources from Africa, we don’t have the materials to build computers that run AI.

AI may be more sexy to talk about, but it doesn’t mean there’s more to say about it than environmental destruction. Unplug the proverbial computer and there is no AI. But what still exists? Humanity and their environmental destruction. We can’t unplug our trashed earth and start again.

What it means is that engaging with environmental destruction comes prior to – both morally and logically – discussions of AI. If someone says philosophy has less to say about the planet and our existence than AI, it may just be that person who doesn’t have much to say.

“Climate change just isn’t as closely connected to so many core philosophical issues as artificial intelligence is.”

That’s a barren statement.

Chris
5 months ago

One possible parallel is with the field of bioethics in the late 60s. Many of the leading figures were not trained initially to do bioethics.
For example, Norman Daniels wrote his dissertation on the history of non-Euclidean geometry, I think. Don Marquis was at Indiana HPS with a dissertation on scientific realism, Dan Wikler did a Phil lang dissertation under Kaplan at UCLA, etc.

Of course it remains to be seen whether philosophy and AI will have the same kind of growth that bioethics did. But as others have noted, there are lots of possible AI-adjacent routes into the field, and lots of postdocs and institutional funding for folks looking to make their way in.

Seth G
Reply to  Chris
5 months ago

Here’s selfishly hoping this is paralleled in the AI boom and that mental action is sufficiently adjacent 🤞🏻

Harry
Reply to  Chris
5 months ago

Mid-seventies, not late sixties. I think Wikler started his position at Madison in 1976, straight after the PhD. One big difference is that there are now transitional postdocs (e.g., Harvard and MIT), which there weren’t in bioethics at the time. So people came off philosophy of language PhDs and had 6 years to become experts in something else and get tenure!

Jackson Hawkins
5 months ago

I, for one, am hopeful that the AI revolution will usher in a renaissance in philosophy of mind. Incidentally, Leibniz anticipated many of today’s hotly contested questions – including those surrounding AI – centuries in advance!

Circe
Reply to  Jackson Hawkins
5 months ago

Renaissance? Philosophy of mind has changed direction over the past couple of decades (less language, somewhat more metaphysicsy, but overall much more empirical than ever). But no one could doubt that it is a thriving area.

Ian
5 months ago

I’m in languages not philosophy, and I’ve been very amused as I’ve seen 4-5 English department TT job postings asking for a speciality in AI this year.

AI and AI ethics are established realms of inquiry in philosophy, so the recent hiring craze makes some kind of sense in that discipline.

Presumably the English departments are looking for people who work on LLMs in some capacity, and I think there’s interesting work to be done there (although I doubt it will yield much as a speciality). At the same time, since LLMs were not widely used until ChatGPT went public on the web in late 2022, I fail to see how there could be a non-trivial number of language scholars working on LLMs less than a year after they came into public consciousness.

I know the grad students out there are typically young and more technologically savvy than I am, but I also know that language departments don’t typically attract techies per se and that enough of us struggle to get Powerpoint or Canvas to do what we want that there are folks whose job it is to help us. So I have a hard time believing that many language students in their second year of grad school in 2019 were downloading and installing GPT-2.

All of this wouldn’t matter much, of course, if it weren’t the case that there are fewer and fewer *other* language jobs every year.

It pays to be clairvoyant, I guess!

SCM
5 months ago

“But are there enough people specializing in this area to take up these positions?”

Who says they have to be people?

Emily Sullivan
5 months ago

Speaking as an editor who has handled AI-related submissions in philosophy of science for the last three and a half years or so, I can say that finding qualified reviewers is not something I compromise on. And the other editors I know who handle AI submissions feel the same. Finding qualified reviewers can be a challenge sometimes, but at least in my experience, it is not any more challenging than in a number of other subfields in philosophy of science.

Cameron Buckner
Reply to  Emily Sullivan
5 months ago

I can only add that I am extremely grateful to the experts who agree to my review requests much more frequently than should reasonably be expected!

Richard Y Chappell
5 months ago

Something I find a bit puzzling about the whole situation is why admins seem exclusively interested in new hires as a route to acquiring AI ethics expertise. A neglected alternative (or supplement) would be to offer incentives & support (e.g. research leave) for any of their existing ethicists who would be open to re-skilling / branching into that area.

AI adjacent
Reply to  Richard Y Chappell
5 months ago

It is puzzling.

At least there are private opportunities like those from CAIS mentioned above, and also from Mellon: https://www.mellon.org/article/new-directions-fellowships

sahpa
Reply to  Richard Y Chappell
5 months ago

Shouldn’t we be glad for the new tenure lines?

wondering
Reply to  sahpa
5 months ago

Curious about a presupposition in this question: are these tenure lines only being created because someone somewhere wants an AI-oriented philosopher, or would they be created even in the world where Richard’s suggestion is followed and people who already have a job are given the opportunity to build an AI specialization? If the latter, then one might think it would be better to opt for Richard’s approach since that would allow a wider variety of job-seekers to be considered for jobs (yes, I’m assuming that even if we are trying to improve the discipline’s job market situation, filtering of candidates via a niche specialization is undesirable).

Gorm
Reply to  wondering
5 months ago

Wondering,
At some state colleges, they are not going to open a line if the existing faculty claim they can cover a topic. The admin will say: great! Now we just saved some money!
By the way, Richard has no grounds for complaint. Part of the notion of academic freedom allows faculty to pursue whatever research they want. So if I were hired for a bioethics job, I could then just start publishing in phil of AI – I may still have to teach my bioethics courses, but that is a different matter.

Richard Y Chappell
Reply to  Gorm
5 months ago

Just to clarify: I’m not complaining! Just noting a potential inefficiency / missed opportunity for institutions that want this, given that demand seems to be outpacing supply at present.

Applied Ethicist
5 months ago

Unusually large parts of the earth’s surface have recently been either melting or burning. World War III could start soon. (It may have already started.) It’s more pleasant to speculate about hypothetical threats to humanity than to address actual, imminent threats. It’s more fun to play with new toys than to face reality.

No doubt machine learning will have significant economic and social effects. No doubt philosophers who understand both the underlying computer science and the underlying economics can contribute something valuable to our collective response to these new technologies. But it is absurd for universities to treat “artificial intelligence” and technology ethics as the most socially important areas in which to build philosophical expertise.

The biggest threats to humanity are climate change, the social instability associated with rising economic inequality, and authoritarianism, which makes the first two issues much more difficult to address. Machine learning may be contributing to all three of these threats, but it is not the primary driver. Philosophers with expertise in distributive justice and democratic theory are well equipped to address these three issues.

It would be bad for humanity and for our discipline if hiring in technology ethics were to displace hiring in traditional political philosophy. To the extent that hiring in philosophy is directed toward addressing current social issues, tech ethics hiring should focus on real issues (like the effects of LLMs on employment) and not on speculative issues (like the imagined risk of a paperclip apocalypse).

It would also be bad for humanity if anxiety about a new technology leads to the creation of regulatory barriers to entry that entrench established players and enable them to engage in rent-seeking.

Richard Y Chappell
Reply to  Applied Ethicist
5 months ago

fwiw, I think there’s plenty of room for reasonable disagreement about “the most socially important areas in which to build philosophical expertise”, and I’d find it frankly baffling if someone’s philosophical priorities were determined by what they found “pleasant” to think about. (My own experience suggests the opposite: I’m often especially motivated to engage with ideas and arguments that I find either annoying or troubling in some way.)

But just to register two points of disagreement on the first-order issues:

(1) Commonly-expressed dismissive attitudes towards the uncertain risk from AI seem radically overconfident. The empirical situation surely warrants significant uncertainty about the possible risks, which in turn warrants non-trivial investment in risk mitigation (much as the uncertainty in climate models provides more, rather than less, reason to invest in risk mitigation).

(2) I don’t know whether AI is the most important issue of our time, but it is certainly not “absurd” to think so. At the very least, I’d think that a significantly stronger case could be made for it than for previous academic fads. (But it’s not as though academic hiring tracks social value in general.)

Applied Ethicist
Reply to  Richard Y Chappell
5 months ago

The probability that climate change will cause human extinction is much higher than the probability that “artificial intelligence” will cause human extinction.

Maybe we should invest in addressing small existential risks. It is absurd to make a small existential risk our top priority when we face a much larger existential risk.

GradStu
Reply to  Applied Ethicist
5 months ago

I wouldn’t really have a clue how to estimate these probabilities, myself. Would you mind sharing how you did?

Applied Ethicist
Reply to  GradStu
5 months ago

If you don’t have a clue how to determine that the actual problem of climate change is more serious than the hypothetical problem of a paperclip-maximizing robot, you really are clueless.

Or perhaps you are shilling for the fossil fuel industry, which is the real-life equivalent to the paperclip-maximizing robot we need to figure out how to shut off.

Eric Steinhart
Reply to  Applied Ethicist
5 months ago

GradStu didn’t say anything about paperclip-maximizing robots. Climate change may indeed be the much greater danger (I myself think it is), but GradStu raised a legitimate question, and deserves a reasoned answer rather than an ad hominem insult.

Richard Y Chappell
Reply to  GradStu
5 months ago

Sorry you got such an unhelpful reply from AE. It’s a perfectly reasonable question.

The Halstead report is the most detailed investigation of extinction risk from climate change that I’m aware of. The author notes: “I construct several models of the direct extinction risk from climate change but struggle to get the risk above 1 in 100,000 over all time.”

(There are obviously very significant risks from climate change that are far more likely to eventuate, but they fall well short of literal extinction.)

Extinction risk from AI seems much harder to quantify. One striking point of contrast is that a great many AI experts rate the risk at over 1% (with even double-digit percentages not uncommon).

Since no climate experts (as far as I’m aware) think there’s anything remotely like that level of extinction risk from climate change, there would seem a fairly straightforward sense in which deferring to the distribution of expert views should lead us to view AI as a much greater extinction risk than climate change.

(Of course, that’s not the same thing as being the greater problem. There are many different questions in this vicinity — which is more neglected, which more tractable, etc. — and they needn’t all have the same answer. I’m sure you’re aware of this, but I flag this for AE’s benefit.)

Applied Ethicist
Reply to  Richard Y Chappell
5 months ago

You will probably be annoyed with me for making another ad hominem. But here it is necessary.

John Halstead, the author of the “Halstead Report” cited above, is a political philosopher, not a climate scientist. The “report” is not a piece of peer-reviewed research. It’s a blog post. Its first sentence announces that it is assessing climate risks “from a longtermist point of view.” Longtermism is a fringe intellectual movement with money behind it.

Climate scientists have been highly critical of longtermists’ optimism about climate change. Here is a critical review, published by Bulletin of the Atomic Scientists (the organization that hosts the Doomsday Clock and which was founded by Albert Einstein and former Manhattan Project scientists):

https://thebulletin.org/2022/11/what-longtermism-gets-wrong-about-climate-change/

Here are some mainstream sources discussing the possibility of human extinction as a consequence of climate change:

https://www.cbsnews.com/news/new-climate-change-report-human-civilization-at-risk-extinction-by-2050-new-australian-climate/

https://www.bbc.com/news/science-environment-62378157

The article referenced in the BBC story was published in the Proceedings of the National Academy of Sciences, which is one of the most highly cited peer-reviewed scientific journals. From the conclusion: “There is ample evidence that climate change could become catastrophic. We could enter such ‘endgames’ at even modest levels of warming.”

https://www.pnas.org/doi/full/10.1073/pnas.2108146119

If you are not aware of climate scientists who think climate change presents a substantial risk of human extinction, you are not aware of what climate scientists think.

Richard Y Chappell
Reply to  Applied Ethicist
5 months ago

(1) My understanding is that the Halstead report was reviewed by several climate experts for accuracy, though of course it is an independently published “report” and not a journal article, which is why I described it as I did.

In any case, my reason for sharing it was for its content (which GradStu sounded interested in reviewing for themselves). Anyone can check the full report for themselves and judge whether they think it is accurately characterized as a “blog post”.

(2) Your “critical review” was not by climate scientists, but by Emile Torres, an anti-EA partisan notorious for misrepresenting their targets. Their Bulletin piece itself contains blatant misrepresentations, as Halstead explains here.

(3) Nothing in the links you provide show that any experts think there’s even a 1% chance of human extinction from climate change. They support investigating further. I support that too.

Applied Ethicist
Reply to  Richard Y Chappell
5 months ago

(1) The “Halstead Report” did not receive anything resembling normal peer review. It does not engage with the academic literature in a normal way. Some quotations in the comments posted on the “Report”:

Gideon Futerman: “You don’t seem to engage with much of the peer-reviewed literature already written on climate change and GCRs.”

A. C. Skraeling: “If you have refuted arguments, is it not academic best practice to cite the papers you respond to? In any case, if you know of and have read the papers, are we to understand that you believe many (if not most) peer-reviewed papers on Global Catastrophic and Existential climate risk are not worth mentioning anywhere in 437 pages of discussion?”

John G. Halstead (the “Report’s” author): “I’d say the depth of review was similar to peer review yes, though it is true to say that publication was not conditional on the peer reviewers okaying what I had written.”

(2) Torres is not a climate scientist, but he quotes several regarding the optimistic claims presented in William MacAskill’s What We Owe the Future (which draws on Halstead’s “report”). Several commented on MacAskill’s optimism about the resiliency of agriculture in the face of climate change (which Halstead’s “report” also expresses). They call MacAskill’s argument “nonsense,” “complete nonsense,” “silly and simplistic,” and “bizarre and Panglossian at best.”

(3) The PNAS article doesn’t attempt to put numbers on risk. The conclusion sounds like a whole lot more than a 1% risk to me: “There is ample evidence that climate change could become catastrophic. We could enter such ‘endgames’ at even modest levels of warming.”

UN Secretary General António Guterres has called climate change an “existential threat.” He is not a climate scientist, but if world leaders are talking about climate change as an existential risk, that might be some indication that we should see it as such.

GradStu
Reply to  Richard Y Chappell
5 months ago

Thank you! This is really helpful. I think trying to ascertain the probabilities is super interesting, so I appreciate the sincere reply. Hopefully with all the new work going on we’ll be able to get clearer on the AI extinction risk probabilities so we can determine how best to divide our efforts between climate change and AI risks!

Circe
Reply to  Applied Ethicist
5 months ago

The world is burning. It sucks. It sucks that many people don’t care. But I don’t see that being a philosophical issue in the way AI is. That said, I agree AI is overhyped and the number of hires being made in this area is laughable.

sahpa
Reply to  Applied Ethicist
5 months ago

I recommend you go look at what applied ethicists of AI actually work on. Existential risk is only a small part of AI ethics, and many view it as on the (problematically) speculative end of things. AI ethicists are very much interested in the social disruptions, inequalities, and threats to democracy that you are so concerned with.

Shelley Lynn Tremain
5 months ago

I think it would be very good to know the extent to which this upsurge in job postings with respect to AI tracks or aligns with certain already extant identities and demographics in philosophy. AI is saturated with ableist, racist, sexist, classist, and other biases. Hiring in philosophy reproduces ableist, racist, sexist, and other biases and forms of power. To what extent is the upsurge in philosophy jobs in AI a reformulation and re-entrenchment of these forms of power in the discipline and profession?

Just to remind you: disabled philosophers are still excluded from the profession and there has never been a job posting in philosophy with an AOS in philosophy of disability despite the fact that I and others have produced research in the area for years and research and teaching in critical disability studies is thriving elsewhere across the academy.

But don’t take my word for it! Check out _The Bloomsbury Guide to Philosophy of Disability_, the groundbreaking collection that I edited which will be published on December 14th and launched at Philosophy, Disability and Social Change 4 on the same day. For more information, keep checking out my posts at BIOPOLITICAL PHILOSOPHY!

Khalid
5 months ago

Any cognitive philosopher, in particular, is rather well qualified for the positions discussed.

David Wallace
5 months ago

Hirings don’t just reflect the importance of an issue, but how much scope there is for interesting philosophical contributions. I think nuclear war is a more serious risk to civilization than AI, but the basic contours of the problem haven’t changed that much in sixty years and arguably the low-hanging fruit has been picked. Climate change has more prospect, but it’s been a very active research field for decades. AI (in the current sense) is both important and new; plausibly there’s a lot of potential for important contributions there.

Shelley Lynn Tremain
Reply to  David Wallace
5 months ago

Climate change has been an active research field for decades in a distinctly circumscribed (and limited) way in philosophy and elsewhere in the university. The immense amount of emerging work on climate change that connects it to systems of oppression, to disability, to migration, to settler colonialism, to indigeneity, to poverty, to capitalism, and so on, is both vital and marginalized, if not excluded from uptake in philosophy. David’s remarks provide more evidence for my suggestion above, namely, that what coalesces as an AOS in philosophy, and is regarded by mainstream philosophers as an AOS that demands to be developed in philosophy, should be recognized as reformulating and reinstating current relations of power in philosophy and society more broadly.

Matt L
Reply to  Shelley Lynn Tremain
5 months ago

For what it’s worth, I can say that work on climate change and migration is certainly not marginalized in philosophy. I’ve written on it myself, with one paper on the topic being among my most cited, and lots of other people – coming from backgrounds writing both on migration and on climate change – have written and are writing a lot on it. I get referee requests all the time for papers from good journals on the topic (I need to finish one today, in fact!). There are a wide variety of perspectives (many of which also touch on indigeneity, capitalism, etc. to greater or lesser degrees), and many different competing proposals. If anything, I’d say that I fear the topic is close to over-saturated, but it’s certainly not marginalized.

Shelley Lynn Tremain
Reply to  Matt L
5 months ago

Your remarks are somewhat unexpected, Matt. Indeed, Kyle Whyte, an indigenous philosopher who works extensively on climate change, would likely disagree with you regarding the abundance of attention you claim that indigeneity and settler colonialism are given in dominant philosophical analyses of climate change. Consider this excerpt from my chapter in the forthcoming _The Bloomsbury Guide to Philosophy of Disability_ in which I cite Kyle (at length) along with several other indigenous authors that he cites:

“No American or Canadian settler, Whyte states, has offered “an imagined projection of a climate future that is more ecologically dire than what Indigenous people have already endured due to colonialism” (Whyte 2019). Indeed, settler analyses of climate change rarely make the connections between climate change, ecological destruction, and settler colonialism. Yet, as Whyte explains, “The infliction of harmful environmental changes on Indigenous peoples have been an integral part of settler colonialism in the United States” (Whyte 2019).”

The parenthetical citation here is to a guest blog post that Kyle Whyte did for BIOPOLITICAL PHILOSOPHY. I can provide you with the other references to their work that I use in the chapter if you are interested.

Matt L
Reply to  Shelley Lynn Tremain
5 months ago

Hi Shelley, I think you’ve misread or misunderstood my comment. I was focusing on the “migration” bit out of the list of topics you gave in your first comment. I focused on that because it’s something I myself work on, and I can say from first-hand experience that it’s not a marginalized topic at all. I also noted that some of the papers touch on (not focus on – they are focused on climate change and migration) indigeneity, capitalism, and some other topics. That’s clearly true. That’s compatible with the other topics you mention not receiving enough attention. I don’t have a strong opinion about that, because I don’t work on those topics. But the topic I do work on – migration – is not at all marginalized in relation to climate change within philosophy.

Shelley Lynn Tremain
Reply to  Matt L
5 months ago

Hi Matt, I think you may have underestimated me.

Nick Byrd
5 months ago

Without any AI publications, I applied for and accepted a TT offer for a position advertised as including research areas not limited to “ethics of artificial intelligence, ethics of information, ethics of robotization, biotechnological ethics, ethics of technological business, and ethics of algorithms” (https://philjobs.org/job/show/14454).

I still haven’t published on these topics, but I teach about some of them (which has not been hard to prep.).

I’ve also received unsolicited requests to subcontract and consult on federal AI proposals.

So not all openings that mention research specialization in AI actually require it. (As the post indicates, (1) the call for specialization in AI may be more aspirational; it may also just be that (2) some schools have to describe openings to administrators as involving AI in order to get new philosophy lines approved in an age where new TT philosophy lines may not otherwise be approved).

On the Market
Reply to  Nick Byrd
5 months ago

Thanks for this comment! I have a very skeptical view of the AI craze, but have considered applying to these posts based on my phil of mind / language work. Knowing that you don’t need to have published on AI specifically is helpful.

Cameron Buckner
Reply to  On the Market
5 months ago

There are so very many interesting and pressing philosophical questions surrounding recent developments in AI, most of which have nothing to do with singularity or existential threat speculation. I beg you to not hate it just because it’s marketable. Cappelen & Dever have been busily connecting these issues to traditional philosophy of language if you want some place to look, and the connections to traditional philosophy of mind are obvious. Feel free to contact me (anyone) if you want specific ideas about connections to your own research interests…no matter what they are I’m almost certain that I can come up with some recent developments that cry out for philosophical evaluation and reflection. There’s also so much low-hanging fruit, where the engineering is getting out way ahead of philosophical guidance, that any re-engagement with familiar philosophical lessons, probably applied in a new context, could be extremely helpful.

wondering
Reply to  Nick Byrd
5 months ago

Thanks for this info Nick, just a quick follow-up: when you applied for this job, how did you frame yourself in relation to these research areas, e.g., did you say something like “I don’t have any extant research on this but I’m in the process of building a competency, etc.”?

Michael Brent
5 months ago

I can’t speak to precedent in previous hiring practices here, but as someone with a PhD in Philosophy who works directly in what industry folks like to call “Responsible A.I.” (yes, yes, I know), I can say that the demand for expertise in this area is only growing stronger, both in industry, broadly construed, and in public policy and law. Good time to get involved.

Shay Allen Logan
5 months ago

An undercurrent to OP’s comment is that this hiring is explained by departments that would have hired in x deciding instead to hire in AI. But my guess is that much of it isn’t like that at all. To get approved for a hire, one must get admins on board. Right now, if I approach my dean with a plan to hire in something AI-adjacent, they will be vastly more receptive than if I approach them with essentially any other bit of philosophy.

All that to say that (1) much of this is hiring that wouldn’t happen otherwise and (2) that’s at least as much a result of nonphilosophers being interested in AI stuff as anything else.

Regina Rini
5 months ago

The best historical parallel is to the takeoff of Bioethics in the 1970s. Sudden technological changes – organ transplantation, in vitro fertilization, early work on gene mapping and manipulation – got media attention and made people worry. Deans and Philosophy departments responded by quickly hiring people who could teach in the area, only some of whom were fully qualified straight out of the gate. For a while there was demand in excess of supply.

Skip forward 50 years, and what has happened to the Bioethics craze? Did it disappear completely? No. It stabilized. Most large Philosophy departments still expect to have at least one person on staff who can teach Bio/Medical Ethics. Philosophers train for it much earlier in their careers, so there is no longer demand in excess of supply (the opposite, sadly, like almost everywhere else in the Philosophy job market).

That’s the likely trajectory for AI Ethics. For the next 5 years or so there won’t be enough trained philosophers to meet the social demand. But eventually there will be. And most large departments will expect to have at least one person on staff who can teach AI Ethics for decades and decades to come.

Shelley Lynn Tremain
Reply to  Regina Rini
5 months ago

Regina,
you have articulated the official story that bioethicists like to tell about their field but I don’t think you should (continue to) accept it or promote it uncritically. Another, more critical and more politically informed and invested version of the history and development of bioethics can be found in (for instance) my work, especially the 5th chapter of my monograph, _Foucault and Feminist Philosophy of Disability_ and in my chapter in the forthcoming _The Bloomsbury Guide to Philosophy of Disability_ as well as in various articles. To make a long story short for the purposes and brevity of this thread, bioethics is a technology of government, a mechanism of eugenics. As Melinda Hall, also a disabled philosopher of disability puts it, “bioethics has always been eugenic.” Eugenics is at the heart of bioethics.

As I have argued in various posts at BIOPOLITICAL PHILOSOPHY (and in my forthcoming chapter), bioethics should be abolished. The predominance of bioethics in Canadian philosophy departments is directly related to the dearth of disabled philosophers in Canadian philosophy. If the view in your department is that bioethicists are essential, then it is not surprising to me that your department repeatedly rejects job applications from me and the other disabled philosophers of disability that apply to it.

cecul burrow
Reply to  Shelley Lynn Tremain
5 months ago

Why should we accept the views of Melinda Hall and promote them uncritically? It is far from clear that they falsify the standard narrative.

The claim that bioethics = eugenics isn’t even really intelligible on the surface. Any sort of inquiry into ethics in medicine and how to justly distribute medical goods = eugenics? It’s hard to see how that could be true.

Cece
5 months ago

There are not enough actually qualified people. So the people presumed capable (who have “potential”!) will get the jobs—namely white men with a bit of blagging ability.

What am I even complaining about? Women and minorities will still get hired in the undervalued and underfunded areas! We’re committed to inclusion!

Cameron Buckner
Reply to  Cece
5 months ago

FWIW, a lot of women have been hired into these jobs and women are doing some of the best work in this area. Just off the top of my head, look at the work of Kathleen Creel, Emily Sullivan, Rosa Cao, Lisa Miracchi Titus, Lena Kästner, Atoosa Kasirzadeh, Marta Halina, Karina Vold, Gabbrielle Johnson, Catherine Stinson, Ali Boyle and Regina above (I can’t remember if her job was advertised as AI… 🙂 ). More senior figures are also doing serious work on recent developments, like Laurie Paul, Anna Alexandrova, and Alison Gopnik. Many also go to lucrative and influential jobs in industry or think tanks, like Amanda Askell or Jess Whittlestone. Granted, we’re still far short of demand, but a lot of these people are now teaching graduate courses on the topic, and we will see a ton of great women graduate with relevant dissertations in the next five years. At least, all of the women I know who made the wise (and very interesting) decision to tool up on AI had great results on the job market. Will all the jobs go to people who are qualified? Probably not! Will all the women who become qualified get good jobs? At least for a while, it sure looks like the answer is yes.