Ubermess: Philosophers Discuss the Self-Driving Car Crash
On March 19, 2018, a self-driving Uber car-for-hire struck and killed a pedestrian, Elaine Herzberg, in Tempe, Arizona. This was not the first trouble Uber has had with its self-driving cars, nor was it the first fatal crash involving a self-driving car.
Philosophers have been taking up questions regarding the ethics of autonomous vehicles in various contexts for some time now. Last week’s crash gave the topic some added visibility, and several philosophers published work in popular venues on the subject.
- “How Self-Driving Car Policy Will Determine Life, Death and Everything In-Between” by Brett M. Frischmann (Villanova) and Evan Selinger (RIT)
- “Driverless cars raise so many ethical questions. Here are just a few of them.” by Lawrence Hinman (University of San Diego)
- “What the Fatal Uber Crash Doesn’t Tell Us About Self-Driving Cars” by Jesse Kirkpatrick (George Mason) and Ryan Jenkins (Cal Poly)
- “Who’s at Fault in Uber’s Fatal Collision?” by Patrick Lin (Cal Poly)
If you know of other work by philosophers related to the Uber crash, please share it in the comments. Thanks.
(Thanks to Patrick Lin for the pointers.)
I think it is incumbent on philosophers to bring to the attention of the public (whose roads are being used as AI laboratories) the long-range implications of the development of AI for our lives and culture, not simply in terms of programmed “value” decisions but, should AI ever become “strong” enough, in terms of dynamic ethical decision-making.
As readers here are well aware, decisions in the case of even simple “Trolley Problem” gedanken experiments defy consensus among even the most earnest and well-intentioned thinkers.
My fear is that, without engaging the public and “democratizing” the directions in which AI technology is leading us, the people who ultimately make those decisions for society will be those who stand to gain the most monetarily, whether entrepreneurs, researchers, or academics.
These issues will ultimately affect all of our lives, and we meat machines, all of us, need to decide what we are willing to cede, or should appropriately cede, to robots.
This is one area where philosophers can continue to stay relevant in all of our lives, despite brickbats from some of the more skeptical scientists and technocrats.
Full disclosure: I am not a professional philosopher. I studied some philosophy, just a little, but it was at Pittsburgh, so I figure that counts double. 😉
I find it absolutely bizarre that in all of these discussions of AI and “self-driving” cars, Hubert Dreyfus is never mentioned among the lists of philosophers said to work on such issues.
I can’t for the life of me figure out why, but it seems that people stopped reading his work on the matter and simply assume that “self-driving” cars are, or will be, a reality, thereby positing “self-driving” cars as a philosophical question (or set of questions) belonging to the domain of ethics. It might be worthwhile to revisit and seriously consider the idea that there isn’t, and in all likelihood never will be, such a thing as a “self-driving” car.
I agree that Dreyfus’s critique of good old-fashioned artificial intelligence (GOFAI) is fascinating and important. However, the deep learning AI that underwrites self-driving cars (and Google Translate, and nearly every other recent AI breakthrough) is not the GOFAI Dreyfus critiqued. Arguably, Dreyfus won the argument against symbolic AI, insofar as the philosophical backdrop to deep learning is connectionism. A critique of the “intelligence” of deep learning can be found in the work of two philosophers inspired by Dreyfus: John Haugeland and Brian Cantwell Smith.
On the issue of whether the trolley problem is of any use at all here (I think it’s not), I wrote this a while ago. It seems relevant to the Uber crash and has now been published: http://theconversation.com/the-everyday-ethical-challenges-of-self-driving-cars-92710
Though not about self-driving cars, the BCI example in this paper can be converted to that of semi-autonomous cars with ‘overseeing’ drivers: Haselager, W.F.G. (2013). Did I do that? Brain-Computer Interfacing and the sense of agency. Minds and Machines, 23(3), 405–418. http://dx.doi.org/10.1007/s11023-012-9298-7
How many crashes are driverless cars allowed? Say, as many as there were from 1915 to 1940, while automobiles were becoming familiar. Don’t demand any greater degree of safety than we can infer from the historical record.