Comment author: V_V 22 January 2013 12:44:09PM -6 points [-]

Are you claiming that Yudkowsky is a transhuman AI?

Comment author: Keysersoze 22 January 2013 05:15:37PM 4 points [-]

Of course not - but dismissing Yudkowsky's victories because the gatekeepers were "thinking poorly" makes no sense.

Any advantages Yudkowsky had over the gatekeepers (such as more time and mental effort spent on his strategy, plus whatever intellectual edge he has) that he exploited to make them "think poorly" pale into insignificance next to the advantages a transhuman AI would have.

Comment author: V_V 22 January 2013 03:01:13AM *  -2 points [-]

"This Eliezer fellow is the scariest person the internet has ever introduced me to. What could possibly have been at the tail end of that conversation? I simply can't imagine anyone being that convincing without being able to provide any tangible incentive to the human.

After all, if you already knew that argument, you'd have let that AI out the moment the experiment started. Or perhaps not do the experiment at all. But that seems like a case of the availability heuristic."

Oh, come on! Maybe the people who played this game with Yudkowsky and lost colluded with him, or they were just thinking poorly. Why won't he release at least the logs of the games he lost? Clearly, whatever trick he allegedly used didn't work those times.

Seriously, this AI-box game serves no purpose other than creating an aura of mysticism around the magical guru with alleged superpowers. It provides no evidence on the question of the feasibility of boxing a hostile intelligence, because the games are not repeatable and it's not even possible to verify that they were played properly.

Comment author: Keysersoze 22 January 2013 03:44:11AM 6 points [-]

, or they were just thinking poorly.

Every biological human will be thinking poorly in comparison to a transhuman AI.