SoullessAutomaton comments on Eliezer Yudkowsky Facts - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Ooh, this is fun.
Robert Aumann has proven that ideal Bayesians cannot disagree with Eliezer Yudkowsky.
Eliezer Yudkowsky can make AIs Friendly by glaring at them.
Angering Eliezer Yudkowsky is a global existential risk.
Eliezer Yudkowsky thought he was wrong one time, but he was mistaken.
Eliezer Yudkowsky predicts Omega's actions with 100% accuracy.
An AI programmed to maximize utility will tile the Universe with tiny copies of Eliezer Yudkowsky.
And the first action of any Friendly AI will be to create a nonprofit institute to develop a rigorous theory of Eliezer Yudkowsky. Unfortunately, it will turn out to be an intractable problem.
Transhuman AIs theorize that if they could create Eliezer Yudkowsky, it would lead to an "intelligence explosion".