Silas comments on Shut up and do the impossible! - Less Wrong

28 points | Post author: Eliezer_Yudkowsky | 08 October 2008 09:24PM


Comment author: Silas | 09 October 2008 05:00:30PM | 1 point

@Russell_Wallace & Ron_Garret: Then I must confess the protocol is ill-defined to the point that it's just a matter of guessing what secret rules Eliezer_Yudkowsky has in mind (and which the gatekeeper casually assumed), which is exactly why seeing the transcript is so desirable. (Ironically, unearthing the "secret rules" people adhere to when outputting judgments is itself the problem of Friendliness!)

From my reading, the rules literally make the problem equivalent to asking whether you can convince people to give you money: the gatekeeper must *know* that letting the AI out of the box means ceding cash, and that keeping that cash is simply a matter of refusing to let the AI out.

So that leaves only the possibility that the gatekeeper feels obligated to take on the frame of some other mind. That reduces the AI party's problem to two parts: a) convincing the gatekeeper that *that* frame of mind would let the AI out, and b) convincing him that, for that amount of money, he is ethically obligated to end the experiment the way that frame of mind would.

...which isn't what I see the protocol specifying: it seems to me to refer to the participant's own mind, not some other mind he imagines. Which is why I conclude the test is too ill-defined.