Comment author:V_V
22 January 2013 03:01:13AM
-2 points
"This Eliezer fellow is the scariest person the internet has ever introduced me to. What could possibly have been at the tail end of that conversation? I simply can't imagine anyone being that convincing without being able to provide any tangible incentive to the human.
After all, if you already knew that argument, you'd have let that AI out the moment the experiment started. Or perhaps not do the experiment at all. But that seems like a case of the availability heuristic.
Oh, come on! Maybe the people who played this game with Yudkowsky and lost colluded with him, or they were simply reasoning poorly. Why won't he release at least the logs of the games he lost? Clearly, whatever trick he allegedly used didn't work those times.
Seriously, this AI-box game serves no purpose other than creating an aura of mysticism around a magical guru with alleged superpowers. It provides no evidence on the question of whether a hostile intelligence can be kept boxed, because the games are not repeatable and it is not even possible to verify that they were played properly.
Every biological human will be reasoning poorly in comparison to a transhuman AI.