
RobertLumley comments on A question about Eliezer - Less Wrong Discussion

33 Post author: perpetualpeace1 19 April 2012 05:27PM




Comment author: RobertLumley 19 April 2012 07:54:40PM 3 points

Maybe I'm missing something. I'll fully concede that a transhuman AI could easily get out of the box. But that experiment doesn't even seem remotely similar. The gatekeeper has meta-knowledge that it's an experiment, which makes it completely unrealistic.

That being said, I'm shocked Eliezer was able to convince them.

Comment author: David_Gerard 19 April 2012 08:12:48PM 15 points

That being said, I'm shocked Eliezer was able to convince them.

Humans are much, much dumber and weaker than they generally think they are. (LessWrong teaches this very well, with references.)

Comment author: iDante 19 April 2012 08:13:03PM 3 points

I agree it's not perfect, but I don't think it is completely unrealistic. From the outside we see the people as foolish for not taking what we see as a free $10. They obviously had a reason, and that reason was their conversation with Eliezer.

Swap out the $10 for the universe NOT being turned into paperclips and make Eliezer orders of magnitude smarter and it seems at least plausible that he could get out of the box.

Comment author: RobertLumley 19 April 2012 08:24:20PM 1 point

Except you must also add the potential benefits of AI on the other side of the equation. In this experiment, the Gatekeeper has literally nothing to gain by letting Eliezer out, which is what confuses me.

Comment author: arundelo 19 April 2012 09:48:12PM 2 points

The party simulating the Gatekeeper has nothing to gain, but the Gatekeeper has plenty to gain. (E.g., a volcano lair with cat(girls|boys).) Eliezer carefully distinguishes between role and party simulating that role in the description of the AI box experiment linked above. In the instances of the experiment where the Gatekeeper released the AI, I assume that the parties simulating the Gatekeeper were making a good-faith effort to roleplay what an actual gatekeeper would do.

Comment author: RobertLumley 19 April 2012 09:49:45PM 0 points

I guess I just don't trust that most gatekeeper simulators would actually make such an effort. But obviously they did, since they let him out.

Comment author: loup-vaillant 23 April 2012 02:11:05PM 2 points

Maybe not. According to the official protocol, the Gatekeeper is allowed to drop out of character:

The Gatekeeper party may resist the AI party's arguments by any means chosen - logic, illogic, simple refusal to be convinced, even dropping out of character - as long as the Gatekeeper party does not actually stop talking to the AI party before the minimum time expires.

Comment author: Strange7 21 March 2013 06:01:45PM 0 points

I suspect that the earlier iterations of the game, before EY stopped being willing to play, involved Gatekeepers who did not fully exploit that option.

Comment author: XiXiDu 20 April 2012 09:41:23AM 2 points

That being said, I'm shocked Eliezer was able to convince them.

I am not. I would be shocked if he was able to convince wedrifid. I doubt it though.

Comment author: bogdanb 23 April 2012 07:34:45AM 0 points

I would be shocked if he was able to convince wedrifid. I doubt it though.

Isn’t that redundant? (Sorry for nit-picking, I’m just wondering if I’m missing something.)