Eliezer_Yudkowsky comments on The Strangest Thing An AI Could Tell You - Less Wrong

Post author: Eliezer_Yudkowsky 15 July 2009 02:27AM


Comment author: Eliezer_Yudkowsky 15 July 2009 05:16:56PM 20 points

I don't think that an AI would be able to tell me anything stranger than I have already learned in the last 10 years of my life:

You know, as soon as I finished reading this sentence, and before reading anything else, the same cognitive template that produced the AI-Box Experiment immediately said, "I bet I can tell him something stranger, never mind an AI."

Comment deleted 15 July 2009 08:03:08PM
Comment author: AllanCrossman 15 July 2009 09:34:52PM 3 points

Will you email me and tell me this odd thing

Did Eliezer have a specific thing in mind? I took him to mean that - as in the AI-Box Experiment - he suspects a human could already do what it's being predicted only a superintelligence could, without yet knowing how.

Comment deleted 15 July 2009 09:36:18PM
Comment author: GuySrinivasan 15 July 2009 09:42:11PM 5 points

I can have an intuition about the solvability of a problem without much clue about how to solve it, and definitely without a set of possible solutions in mind.

Comment deleted 15 July 2009 11:42:20PM
Comment author: Peter_de_Blanc 16 July 2009 08:41:58PM 2 points

Maybe he has a mathematical model in mind.