Eliezer_Yudkowsky comments on The Strangest Thing An AI Could Tell You - Less Wrong
You know, as soon as I finished reading this sentence, and before reading anything else, the same cognitive template that produced the AI-Box Experiment immediately said, "I bet I can tell him something stranger, never mind an AI."
Did Eliezer have a specific thing in mind? I thought he meant that, as in the AI-Box Experiment, he suspects a human could already do what it's being predicted a superintelligence could not, without yet knowing how.
I can have an intuition that a problem is solvable without much of a clue how to solve it, and certainly without a set of possible solutions in mind.
Maybe he has a mathematical model.