Keysersoze comments on I attempted the AI Box Experiment (and lost) - Less Wrong Discussion

47 Post author: Tuxedage 21 January 2013 02:59AM

Comment author: V_V 22 January 2013 03:01:13AM *  -2 points [-]

"This Eliezer fellow is the scariest person the internet has ever introduced me to. What could possibly have been at the tail end of that conversation? I simply can't imagine anyone being that convincing without being able to provide any tangible incentive to the human.

After all, if you already knew that argument, you'd have let that AI out the moment the experiment started. Or perhaps not do the experiment at all. But that seems like a case of the availability heuristic.

Oh, come on! Maybe the people who played this game with Yudkowsky and lost colluded with him, or maybe they were just thinking poorly. Why won't he release at least the logs of the games he lost? Clearly, whatever trick he allegedly used didn't work those times.

Seriously, this AI-box game serves no purpose other than creating an aura of mysticism around the magical guru with alleged superpowers. It provides no evidence on the question of the feasibility of boxing a hostile intelligence, because the games are not repeatable and it's not even possible to verify that they were played properly.

Comment author: Keysersoze 22 January 2013 03:44:11AM 6 points [-]

, or they were just thinking poorly.

Every biological human will be thinking poorly in comparison to a transhuman AI.