Oxford Launches Institute for Ethics in AI with Team of Philosophers


Oxford University is bringing on three philosophy professors, two philosophy postdoctoral fellows, and two philosophy graduate students to form the initial academic team of its new Institute for Ethics in Artificial Intelligence.

The Institute is part of Oxford’s Philosophy Faculty, and its creation was part of an agreement reached with businessman Stephen A. Schwarzman when he donated £150 million to the university last year. “The Institute aims to tackle major ethical challenges posed by AI, from face recognition to voter profiling, brain machine interfaces to weaponised drones, and the ongoing discourse about how AI will impact employment on a global scale,” according to the university. Some of its work will concern the COVID-19 pandemic and responses to it.

John Tasioulas, currently director of the Yeoh Tiong Lay Centre for Politics, Philosophy & Law at King’s College London, will become the inaugural Director of the Institute in October.

Carissa Véliz, formerly a research fellow at Oxford’s Uehiro Centre for Practical Ethics and the Wellcome Centre for Ethics and Humanities, has joined the Institute as an associate professor in philosophy and is a tutorial fellow at Hertford College.

Milo Phillips-Brown, currently a fellow in ethics and technology at Massachusetts Institute of Technology and senior research fellow in digital ethics and governance at the Jain Family Institute, will be an associate professor in philosophy at the Institute and a tutorial fellow at Jesus College.

Carina Prunkl and Ted Lechterman will join as postdoctoral research fellows, coming from Oxford University’s Future of Humanity Institute and the Hertie School of Governance in Berlin, respectively.

You can learn more about the Institute here.

Kenny Easwaran
3 years ago

Somehow the link for Milo Phillips-Brown goes to apple.com instead of to his website! (Feel free to delete this comment after it’s fixed.)

Alternative view
3 years ago

Do any of the appointees know how to program a neural network? Do they have basic knowledge of the field? Imagine a philosopher of law saying, “I’ve read about cases and spoken to lawyers, but never read a case myself.” An institute of ethics for AI sounds sexy, but consider an institute of ethics for machine learning: it becomes immediately apparent that the ethical difficulties here are less philosophical and more a matter of practical application.

JL
Reply to  Alternative view
3 years ago

Why assume that they don’t, rather than that they do? If they got this much funding, they probably had a pretty good story to tell. (Note also what it says on the website about their technical partners.)

Alternative
Reply to  JL
3 years ago

So the answer is “no”. There’s nothing to indicate that they have these skills, which one doesn’t presume on the basis of a philosophy PhD or “a pretty good story”.

Anco
Reply to  Alternative
3 years ago

Seeing as Carina Prunkl studied quantum mechanics for her bachelor’s and master’s degrees in physics and later wrote her doctoral thesis on the philosophy of quantum mechanics, I think she’ll do fine learning about neural networks, if she hasn’t already (https://www.carinaprunkl.com/pagecv).

Milo Phillips-Brown worked on ethical case studies during machine learning courses given to Electrical Engineering and Computer Science students at MIT, so I’m sure he knows a thing or two about the subject matter as well (https://www.milopb.com/).

I find these anonymous speculations about their supposed lack of expertise somewhat disturbing. Did you even check their profiles? It seems to me that between the different group members they have covered a nice spectrum of the social, political, legal, and technical dimensions of the ethics of AI.

Alternative
Reply to  Anco
3 years ago

“I find these anonymous speculations about their supposed lack of expertise somewhat disturbing.” I disagree. I think scrutiny of public institutions and how they use money is entirely appropriate. Given that this centre specialises in AI and is responsible for managing a $188 million budget, I don’t think it’s too much to ask that its five staff collectively have more demonstrated experience of AI than that one of them once taught a couple of ethical issues on a course. Try putting in a $200 million grant on the basis of having taught a couple of sessions on a course or “reading a handbook, talking to AI researchers, playing around with a web-based tutorial” and see how that goes.

Put it another way, imagine they’d staffed it with five AI researchers, but one of them had taught a couple of sessions on a course on ethics. Would that be ok? Why think the ethics dimension is any harder to pick up than the AI dimension?

Anco
Reply to  Alternative view
3 years ago

As someone who has separate degrees in both AI and philosophy and works on the ethics of AI, I will say that the added value of being able to program neural networks is relatively low in discussions on AI ethics. Sure, knowing the basic workings of machine learning is a boon for philosophers working in this field, but such knowledge can be readily gleaned from reading a handbook, talking to AI researchers, or playing around with a web-based tutorial.

J Markley
Reply to  Anco
3 years ago

Can I ask which handbook you would recommend, and which websites to play around with, to get a sufficient working knowledge for work on AI ethics? Thanks!

Anco
Reply to  J Markley
3 years ago

The handbooks and websites I was talking about were not about AI ethics but about the workings of neural networks and the study of machine learning. David Poole and Alan Mackworth have written an excellent open-access general handbook on AI and gathered online web resources here: http://artint.info/ (also published with Cambridge UP). They also provide pseudo-code examples and algorithm demos. This will only cover the basics, of course, and one ought to bring one’s philosophical expertise to bear to think through common ethical issues, such as biases in the selection of training sets (though I’m sure there must be papers on this somewhere). Russell and Norvig have also written a standard textbook on AI which contains worked examples, but theirs is unfortunately not freely available online.

If one wants to go deeper into neural networks specifically, then I can recommend this handbook: http://www.dkriesel.com/en/science/neural_networks. It comes with the caveat that I used it during my own studies, which were several years ago, and I haven’t checked whether there have been new developments that need to be covered. (But it is my impression that the basic ideas behind neural networks have hardly changed even in the last few decades; just the scale and method of their application have.)
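[For readers curious how small the “basic ideas behind neural networks” really are: here is a minimal, illustrative sketch in plain Python, written for this post rather than taken from either handbook cited above. It trains a tiny 2-4-1 feedforward network on XOR with the sigmoid activation and textbook backpropagation; the network sizes, learning rate, and epoch count are arbitrary choices for the demo.]

```python
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# XOR: the classic example that a single-layer network cannot learn
# but one hidden layer can.
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]

# w1[j] holds the two input weights and the bias of hidden unit j;
# w2 holds the four hidden-to-output weights plus the output bias.
w1 = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(4)]
w2 = [random.uniform(-1, 1) for _ in range(5)]

def forward(x):
    h = [sigmoid(w[0] * x[0] + w[1] * x[1] + w[2]) for w in w1]
    o = sigmoid(sum(w2[j] * h[j] for j in range(4)) + w2[4])
    return h, o

def train_epoch(lr=0.5):
    """One pass of stochastic gradient descent; returns summed squared error."""
    total = 0.0
    for x, t in data:
        h, o = forward(x)
        total += (o - t) ** 2
        # Backpropagation: chain rule through the sigmoids
        d_o = (o - t) * o * (1 - o)
        for j in range(4):
            d_h = d_o * w2[j] * h[j] * (1 - h[j])  # before updating w2[j]
            w2[j] -= lr * d_o * h[j]
            w1[j][0] -= lr * d_h * x[0]
            w1[j][1] -= lr * d_h * x[1]
            w1[j][2] -= lr * d_h
        w2[4] -= lr * d_o
    return total

loss_start = train_epoch()
for _ in range(5000):
    loss_end = train_epoch()
print(loss_start, loss_end)  # the loss should drop substantially
```

Everything beyond this in a modern deep network (more layers, other activations, better optimisers) is elaboration of the same forward-pass/backward-pass loop.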

shai ben-david
Reply to  Anco
3 years ago

The knowledge of machine learning that “can be readily gleaned from reading a handbook, talking to AI researchers, or playing around with a web-based tutorial” is way too shallow for understanding the basic issues. Would you settle for a similar level of knowledge about medicine from the researchers of an institute of Ethics in Medicine? I guess that this is about the level of your knowledge. Maybe some awareness of your limitations?

Anco
Reply to  shai ben-david
3 years ago

To start with your implied rhetorical question: I actually don’t think a medical ethicist needs to know the chemical composition of particular medicines or the precise anatomy of the human body in order to be a good ethicist and say something fruitful about, for instance, euthanasia. Mind you, I did call it a ‘boon’ for an AI ethicist to know something about the inner workings of neural networks, and I would say the same for a medical ethicist. Could you perhaps provide an example of a case where the knowledge in handbooks such as the one I linked to above would be insufficient for discussing an ethical AI issue?

As for your ad hominem: I am sadly painfully aware of the limitations of my knowledge. If I weren’t, my career would likely be much more successful.

shai ben-david
3 years ago

It’s not about “being able to program neural networks”, but what about some expertise in machine learning? An understanding of how AI works, of what it can and what it cannot do? What about understanding the challenges of translating ideas into algorithms? There does not seem to be any of that in the academic leadership of this institute!

Why Philosophy?
3 years ago

The recent firing of Timnit Gebru really throws into sharp relief the discussions on here in September. Why is the response to AI ethics more analytic philosophers, and not ethicists like Timnit Gebru? What skills do analytic philosophers bring that ethicists like Timnit Gebru lack, when the depth of technical understanding that ethicists like Timnit Gebru bring is obvious? I pointedly write “ethicists like Timnit Gebru” because there seems to be an assumption that only analytic philosophers can be good ethicists, in the AI domain or anywhere else.