rhollerith_dot_com comments on The Strangest Thing An AI Could Tell You - Less Wrong

81 Post author: Eliezer_Yudkowsky 15 July 2009 02:27AM




Comment author: rhollerith_dot_com 15 July 2009 05:47:17PM *  4 points [-]

The craziest true thing I can imagine right now that Eliezer's hypothetical inhumanly well-calibrated AI could tell me is that the project of Eliezer and his friends will succeed and the EV defined by Eliezer and his friends coheres and does not care how much suffering exists in the universe.

Maybe I am playing the game wrong.

I interpreted the object of the game to be to minimize the probability that Eliezer currently assigns to my response to Eliezer's question (what is the craziest thing that . . .) because Eliezer is blinded by anosognosia or by an "absolute denial macro".

That is the only interpretation I could imagine that would give Eliezer a sensible motive for asking his question (what is the craziest thing that . . .) and for defining the game.

But maybe I am just not smart enough to play this game that Eliezer has defined.

EDIT. Oh wait. I just imagined a second interpretation that gives Eliezer a sensible motive -- that motive being to cause the reader of Eliezer's post to do for himself what under my first interpretation I was attempting to do for Eliezer. In other words, I am supposed to imagine what truth I am denying.

A third interpretation is that his motive is for us to respond with a statement that the entire human civilization is denying but is actually true -- in which case I stick to my original response, which I will now repeat:

The craziest true thing I can imagine right now that Eliezer's hypothetical inhumanly well-calibrated AI could tell me is that the project of Eliezer and his friends will succeed and the EV defined by Eliezer and his friends coheres and does not care how much suffering exists in the universe.

The probability that I assign to the event that CEV goes that way is probably higher than that of any other human. In addition, two humans I know of probably assign it a probability above 1 or 2%. I cannot rule out the possibility that humans I have not discussed this issue with also assign it a probability above 1 or 2%, but surely the vast majority of humans are "absolutely denying" this, i.e., assigning it a probability under .01%.