Since 1991, the Symbolic Systems Program has annually hosted special lectures by speakers who have made distinguished contributions to the theory or applications of symbolic systems.
This year's Distinguished Speaker is:
Eliezer Yudkowsky
Cofounder and Senior Research Fellow,
Machine Intelligence Research Institute
The title of the lecture is:
"The AI Alignment Problem: Why it's hard, and where to start."
If we can build sufficiently advanced machine intelligences, what goals should we point them at? The frontier open problems on this subject are less, "A robot may not injure a human, nor through inaction allow a human to come to harm," and more, "If you could formally specify the preferences of an arbitrarily smart and powerful agent, could you get it to safely move one strawberry onto a plate?" This talk will discuss some of the open technical problems in AI alignment, the probable difficulties that make those problems hard, and the bigger picture into which they fit; as well as what it's like to work in this relatively new field.
About the speaker:
Eliezer Yudkowsky is a decision theorist who is widely cited for his writings on the long-term future of artificial intelligence. His views on the social and philosophical implications of AI have had a major impact on ongoing debates in the field. He co-authored, with Nick Bostrom, the chapter "The Ethics of Artificial Intelligence" in the Cambridge Handbook of Artificial Intelligence (2014). He has also written a number of popular introductions to the science of human rationality, including "Harry Potter and the Methods of Rationality" and "Rationality: From AI to Zombies".