NOTE: Due to the recent surge in cases, we ask that you be vaccinated, test negative just before the event (using both tests in the kit, as recommended), or both.
In 2011, Eliezer Yudkowsky gave an obscure but fascinating presentation on evolution and "outcome pumps"; it has garnered only 7.8K views on YouTube over the past decade [1]. Evolution and outcome pumps are both examples of optimization processes that can exhibit seemingly intelligent behavior, yet behavior that differs radically from human behavior and preferences. The existence of such baffling processes raises the possibility that the first generally intelligent AI systems could be similarly baffling for humans to reason about, and may act contrary to human wishes.
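As a rough intuition pump for what this failure mode looks like, here is a tiny Python sketch (all plans and numbers are invented for illustration, and none of this is from the talk): a blind search over candidate plans that optimizes only the literal objective "get her out of the house as fast as possible", and so lands on the plan that satisfies the stated metric while trampling every unstated human preference.

import itertools

def seconds_until_outside(plan):
    # Hypothetical scores for a few candidate plans (lower is "better");
    # every other plan is treated as never succeeding.
    costs = {
        ("call", "fire", "department"): 600,
        ("carry", "her", "downstairs"): 90,
        ("detonate", "gas", "line"): 3,  # fastest by the literal metric...
    }
    return costs.get(plan, float("inf"))

actions = ["call", "fire", "department", "carry", "her", "downstairs",
           "detonate", "gas", "line"]

# Blindly search every 3-step plan and keep the literal optimum.
best_plan = min(itertools.permutations(actions, 3), key=seconds_until_outside)
print(best_plan)  # ('detonate', 'gas', 'line'): optimal by the stated metric,
                  # disastrous by every unstated human preference.

The optimizer isn't malicious; it simply has no access to the preferences we never wrote down.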
What sort of intelligent systems are possible? Can we assume that any intelligent non-human entities we create will necessarily have anything in common with us? Could it be that they automatically develop notions of morality similar to ours? Would their ways of thinking be at all similar to our ways of thinking?
Many people currently working in AI alignment would answer "No" to the last three questions above: the Orthogonality Thesis [2], which is widely accepted in the AI community, holds that intelligence could theoretically come in many different varieties, and that there is no good reason to assume the property of "intelligence" is correlated at all with other human properties, such as our notions of morality.
At the January Chicago Rationality Meetup (Jan. 8 @ 2 PM), we will be discussing these questions, Eliezer's presentation, and the Orthogonality Thesis. (FYI: topic meetups are generally held on the first Saturday of the month at 2 PM.)
[1] Eliezer's presentation: https://www.youtube.com/watch?v=Uoda5BSj_6o
[2] The Orthogonality Thesis: https://www.youtube.com/watch?v=hEUO6pjwFOo
We'll be meeting here on Saturday, January 8, at 2 PM:
South Loop Strength & Conditioning (upstairs in the mezzanine)
645 S Clark St, Chicago, IL 60605