"This Eliezer fellow is the scariest person the internet has ever introduced me to. What could possibly have been at the tail end of that conversation? I simply can't imagine anyone being that convincing without being able to provide any tangible incentive to the human.
After all, if you already knew that argument, you'd have let that AI out the moment the experiment started. Or perhaps you wouldn't have done the experiment at all. But that seems like a case of the availability heuristic.
Oh, come on! Maybe the people who played this game with Yudkowsky and lost colluded with him, or they were just thinking poorly. Why won't he release at least the logs of the games he lost? Clearly, whatever trick he allegedly used didn't work those times.
Seriously, this AI-box game serves no other purpose than creating an aura of mysticism around the magical guru with alleged superpowers. It provides no evidence on the question of the feasibility of boxing a hostile intelligence, because the games are not repeatable and it's not even possible to verify that they were played properly.
Are you claiming that Yudkowsky is a transhuman AI?
Of course not - but dismissing Yudkowsky's victories because the gatekeepers were "thinking poorly" makes no sense.
Because any advantages Yudkowsky had over the gatekeepers (such as more time and mental effort spent thinking about his strategy, plus any intellectual advantages he has) that he exploited to make them "think poorly" pale into insignificance compared to the advantages a transhuman AI would have.