Seth Lazar, associate professor of philosophy at the Australian National University (ANU), is leading an interdisciplinary project on machine intelligence that has just received a funding commitment from the university of AUD$1.65 million (US$1.17 million) per year for up to five years.
He is joined on the project by fellow ANU philosophers Colin Klein and Katie Steele, computer scientists Marcus Hutter, Sylvie Thiébaux, Lexing Xie, and Robert Williamson, as well as sociologist Jenny Davis, political scientist Toni Erskine, and economist Idione Meneghel.
The project, “Humanising Machine Intelligence,” Lazar says, is “aimed at designing more ethical machine intelligence.” The project website summarizes the motivation for the project:
New technologies always bear the stamp of their designers’ values. For machine intelligence, they’re deeply etched in the code. AI sees the world through the data that we provide and curate. Its choices reflect our priorities. Its unintended consequences voice our indifference. Machine intelligence cannot be morally neutral. We must choose: try to design moral machine intelligence, which sees the world fairly and chooses justly; or else build the next industrial revolution on immoral machines. To design moral machine intelligence, we must understand how our perspectives reflect power and prejudice. We must understand our priorities, and how to represent them in terms a machine can act on. And we must break new ground in machine learning and AI research.