SoullessAutomaton comments on Eliezer Yudkowsky Facts - Less Wrong

124 Post author: steven0461 22 March 2009 08:17PM




Comment author: Yvain 22 March 2009 09:07:07PM *  97 points [-]

Ooh, this is fun.

Robert Aumann has proven that ideal Bayesians cannot disagree with Eliezer Yudkowsky.
Eliezer Yudkowsky can make AIs Friendly by glaring at them.
Angering Eliezer Yudkowsky is a global existential risk.
Eliezer Yudkowsky thought he was wrong one time, but he was mistaken.
Eliezer Yudkowsky predicts Omega's actions with 100% accuracy.
An AI programmed to maximize utility will tile the Universe with tiny copies of Eliezer Yudkowsky.

Comment author: SoullessAutomaton 23 March 2009 02:30:29AM 46 points [-]

Eliezer Yudkowsky can make AIs Friendly by glaring at them.

And the first action of any Friendly AI will be to create a nonprofit institute to develop a rigorous theory of Eliezer Yudkowsky. Unfortunately, it will turn out to be an intractable problem.

Comment author: Yvain 23 March 2009 10:42:15AM 49 points [-]

Transhuman AIs theorize that if they could create Eliezer Yudkowsky, it would lead to an "intelligence explosion".