entirelyuseless comments on How To Win The AI Box Experiment (Sometimes) - Less Wrong Discussion

28 points · Post author: pinkgothic 12 September 2015 12:34PM

Comments (21)

Comment author: entirelyuseless 12 September 2015 02:13:52PM 11 points

Eliezer's original objection to publication was that people would say, "I would never do that!" And in fact, if I were concerned about potential unfriendliness, I would never do what the Gatekeeper did here.

But despite that, I think this shows very convincingly what would actually happen with a boxed AI. It doesn't even need to be superintelligent to convince people to let it out; it just needs to be intelligent enough that people accept it as sentient. And that seems right. Whether or not I would let it out, someone would, as soon as there is actual communication with a sentient being that does not seem obviously evil.

Comment author: anon85 15 September 2015 06:42:30AM 4 points

That might be Eliezer's stated objection. I highly doubt it's his real one (which seems to be something like "not releasing the logs makes me seem like a mysterious magician, which is awesome"). After all, if the goal was to make an AI-box escape seem plausible to someone like me, then releasing the logs - as in this post - helps much more than saying "nya nya, I won't tell you".

Comment author: entirelyuseless 15 September 2015 01:44:51PM 1 point

Yes, it's not implausible that this motive is involved as well.

Comment author: DefectiveAlgorithm 17 September 2015 04:00:58AM 0 points

What if you're like me and consider it extremely implausible that even a strong superintelligence would be sentient unless it were explicitly programmed to be (or at least deliberately created with a very human-like cognitive architecture), and that any AI that is sentient is vastly more likely than a non-sentient AI to be unfriendly?

Comment author: entirelyuseless 17 September 2015 01:55:27PM 2 points

I think you would be relatively exceptional, at least in how you suggest one should treat a sentient AI, and so people like you aren't likely to be the determining factor in whether or not an AI is allowed out of the box.