Ubermess: Philosophers Discuss the Self-Driving Car Crash


On March 18, 2018, a self-driving Uber car-for-hire struck and killed a pedestrian, Elaine Herzberg, in Tempe, Arizona. This was not the first trouble Uber had had with its self-driving cars, nor was it the first fatal crash involving a self-driving car.

Philosophers have been taking up questions regarding the ethics of autonomous vehicles in various contexts for some time now. Last week’s crash gave the topic some added visibility, and several philosophers published work in popular venues on the subject.

They include:

If you know of other work by philosophers related to the Uber crash, please share it in the comments. Thanks.

(Thanks to Patrick Lin for the pointers.)

7 Comments
Shawn
6 years ago

Ryan Jenkins (Cal Poly) and Jesse Kirkpatrick (George Mason), “What the Fatal Uber Crash Doesn’t Tell Us About Self-Driving Cars”
https://slate.com/technology/2018/03/what-the-fatal-uber-crash-doesnt-tell-us-about-self-driving-cars.html

Richard Russell Wood
6 years ago

I think it is incumbent on philosophers to bring to the attention of the public (whose roads are being used as AI laboratories) the long-range implications of the development of AI for our lives and culture, not simply in terms of programmed "value" decisions but, should AI ever become "strong (enough)", dynamic ethical decision-making.
As readers here are well aware, decisions in the case of even simple “Trolley Problem” gedanken experiments defy consensus among even the most earnest and well-intentioned thinkers.
My fear is that, without engaging the public and "democratizing" the directions in which AI technology is leading us, the people who ultimately make those decisions for society will be those who stand to gain the most monetarily, whether entrepreneurs, researchers, or academics.
These issues will ultimately impact all of our lives, and we meat machines, all of us, need to decide what we are willing to cede, or should appropriately cede, to robots.
This is one area where philosophers can continue to stay relevant in all of our lives, despite brickbats from some of the more skeptical scientists and technocrats.
Full Disclosure: I am not a professional philosopher. I studied some philosophy – piccolo – but it was at Pittsburgh so I figure that counts double. 😉

Thinker
6 years ago

I find it absolutely bizarre that in all of these discussions of AI and “self-driving” cars, Hubert Dreyfus is never mentioned among the lists of philosophers said to work on such issues.

I can’t for the life of me figure out why, but it seems that people stopped reading his work on the matter and simply assume that “self-driving” cars are, or will be, a reality, thereby positing “self-driving” cars as a philosophical question (or set of questions) belonging to the domain of ethics. It might be worthwhile to revisit and seriously consider the idea that there isn’t, and in all likelihood never will be, such a thing as a “self-driving” car.

Kevin
Reply to Thinker
5 years ago

I agree that Dreyfus’s critique of good old-fashioned artificial intelligence (GOFAI) is fascinating and important. However, the deep learning AI that underwrites self-driving cars (and Google Translate, and nearly every other recent AI breakthrough) is not the GOFAI Dreyfus critiqued. Arguably, Dreyfus won the argument against symbolic AI, insofar as the philosophical backdrop to deep learning is connectionism. A critique of the “intelligence” of deep learning can be found in the work of two philosophers inspired by Dreyfus: John Haugeland and Brian Cantwell Smith.

JH
6 years ago

On the issue of whether the trolley problem is of any use at all here (I think it’s not), I wrote this a while ago. It seems relevant to the Uber crash and has now been published: http://theconversation.com/the-everyday-ethical-challenges-of-self-driving-cars-92710

Anco Peeters
6 years ago

Though not about self-driving cars, the BCI example in this paper can be adapted to the case of semi-autonomous cars with ‘overseeing’ drivers: Haselager, W.F.G. (2013). Did I do that? Brain-Computer Interfacing and the sense of agency. Minds and Machines, 23(3), 405-418. http://dx.doi.org/10.1007/s11023-012-9298-7

Eloisa
5 years ago

How many crashes are driverless cars allowed? Say, as many as there were from 1915 to 1940, while automobiles were becoming familiar. Don’t demand any larger degree of safety than we can infer from the historical record.