Multi-Million Euro Award for Philosopher of Artificial Intelligence


Vincent C. Müller, currently professor of philosophy and ethics of technology at Eindhoven University of Technology, was awarded an Alexander von Humboldt Professorship to support his work on the philosophy of artificial intelligence.

The award includes a grant of €3.5 million (approximately $3.96 million), plus matching funds of €1.5 million from the University of Erlangen-Nürnberg (FAU), where he will be taking up a permanent chair in “Philosophy of Artificial Intelligence.”

The award is from the Alexander von Humboldt Foundation, and is one of the largest of its kind in the world. In its announcement, the foundation says:

Artificial intelligence (AI) is gradually entering all areas of our everyday lives. The more AI we use, the more we realise that certain standards and frameworks are needed to ensure that AI is developed and employed in a value-based, responsible and human-centred fashion. Otherwise, exclusion and discrimination could result.

The exploration and application of responsible AI is still a very young discipline and Vincent C. Müller can rightfully be described as one of its pioneers. Even before the topic had triggered broad public debate, he was working on the philosophy and ethics of AI, whereby he also enjoys a very high reputation amongst computer scientists: He is one of the few people outside of informatics to be appointed a fellow of the Alan Turing Institute in London. He also channels his expertise into political consultancy, for example as an expert in the Global Partnership on Artificial Intelligence, a global initiative involving Germany that was launched in 2020 to promote the responsible development and use of AI.

As a Humboldt Professor at FAU, Vincent C. Müller is called upon to build bridges both between technological and humanities expertise as well as beyond the confines of the university to industry and public administration. He is invited to build up a new, international, interdisciplinary Centre for Philosophy and AI Research (PAIR) which will become a hub for AI philosophy at FAU. Cooperation with the Fraunhofer Institute for Integrated Circuits is also foreseen in order to support the development of trustworthy AI systems.

Ten Humboldt professorships were awarded this year, across a range of disciplines. You can learn more about the other recipients here.

8 Comments
Sebastian Sweetsugars
2 years ago

Congratulations to Professor Müller on the award; it is important to study responsible AI. I challenge all ethicists of AI to figure out what we are going to do about the alleged ‘self-replicating robots’ that were discovered today and reported by NPR. I am not claiming to be an expert in AI ethics, but I think we can all agree that robots that can self-replicate are a problem, to say the least. Stopping them from self-replicating seems to me (a layperson on this topic) to be a matter of both theoretical and practical urgency. https://www.npr.org/2021/12/01/1060027395/robots-xenobots-living-self-replicating-copy?utm_medium=social&utm_source=facebook.com&utm_campaign=npr&utm_term=nprnews&fbclid=IwAR0rK-0QOx_-VqQXdpIepIBTYsQ2mGcyt1mVpJ_WqZusdGpEec-7jheWJq8

Don L
Reply to  Sebastian Sweetsugars
2 years ago

Dear Sebastian, you are right that the recently reported self-replicating robots are (obviously!) very problematic. However, I don’t think they pose a very obvious theoretical challenge. There are at least two strategies that would OBVIOUSLY contain them. First, anyone who creates a self-replicating robot should be punished in some way, and punished *to a degree* proportionate to the extent of the replication. So, for example, if a scientist creates a self-replicating robot that replicates 100 times, that deserves a stronger punishment than we’d give to a scientist who creates a robot that replicates five times. I think that’s just common sense. Secondly, suppose the self-replication has gotten ‘out of control’ in a lab. There is a way to contain it, using the technique that was used to great effect following Chernobyl: pour concrete on top of the building where this is occurring, effectively ‘sealing in’ the replication process (much as, by way of analogy, the concrete ‘sealed in’ the toxins at Chernobyl). Both of these strategies would be very effective, and I think they are more or less common sense. Thus, I reject the thought that heavy-duty theoretical work is needed to solve this rather straightforward situation.

Linda
Reply to  Sebastian Sweetsugars
2 years ago

“Xenobots” are not robots. They do not think. They do not work. They do not replicate themselves. They are biological dustpans, designed as such by scientists who apparently want to claim for themselves the extreme achievements of having created life AND living robots.

Examples of what neither man nor AI created:
–Tissue cells want to be tissues, so when they pile up in the solution, they stick together to form tissue. Scientists got them to stick together in their dustpan C-shape.
–Living cells move somewhat, so a lump of stuck-together cells in a solution will move around. When it does, some single cells in the solution get inside the C-shape and start to pile up and stick together. This is what they call “replicating themselves”. But if the free cells in the solution were bird cells, the lumps that collected would be lumps of bird cells, not frog cells. They even claim it is cooler that the cells of their so-called self-replication don’t come from them!

That is why they have not replicated themselves. They have not even gathered the cells. The cells randomly piled up in the C-shape, then stuck together, because tissue cells want to heal and be part of a tissue again. The most they did was use the raw materials of the frog cells to form a C-shape that was more efficient at letting cells stay inside, collecting them. But that was human ingenuity, not AI.

James Barlow
2 years ago

Regrettably, whatever academic philosophers might come up with in the way of humanistic ethical parameters for AI will inevitably be trumped by the technomilitary establishment and the inherent greed of the financial institutions that run the decadent Western world, including its already dehumanized research-university culture.

Valery Konevin
Reply to  James Barlow
2 years ago

Not only regrettable but unavoidable, given the state of affairs after Darwin and Marx, and the academic establishment of the West after Chomsky. Anarchy and Chomsky are two products of the same kind, self-replicating without any control. It is too late and useless to sponsor a Messiah from the same kind of products.

Muhammad Aljukhadar
2 years ago

Congratulations 🎉

Wolfgang Stegemann
2 years ago

It is good to first clarify the problem of consciousness from the philosophical side, but only if the approach is good. On my page you can find keywords for such an approach, as well as a model that could be implemented in AI.

Vincent Müller
1 year ago

We have started to work.
https://www.pair.fau.de/