lavalamp comments on AI Box Role Plays - Less Wrong

Post author: lessdazed 22 January 2012 07:11PM (5 points)




Comment author: lavalamp 22 January 2012 10:04:03PM 4 points

... more than 30% confident that a transhuman could not get past me.

I could see values > 95% (tape my eyes shut, put in ear-plugs, and type "nonononono" for the duration), or values < 5% (actually speak with the transhuman intelligence). But values in the middle seem to indicate that you expect to have a conversation with the transhuman intelligence, evaluate its arguments, and be left more or less on the fence.

It just seems to me that there's a tiny target range of intelligence that is exactly the right amount smarter than you to leave you indecisive; anything beyond that will manage to convince you overwhelmingly.

Anyway, good luck :)

Comment author: Normal_Anomaly 22 January 2012 10:43:50PM 1 point

Values in the middle indicate that I'll have a conversation and probably not budge, with a chance of being totally convinced. But I am now convinced that the whole idea of boxing is stupid. Why would I honestly have a conversation? Why would I run a transhuman AI that I didn't want to take over the world? What could I learn from a conversation that I wouldn't already know from the source code, other than that it doesn't immediately break? And why would I need to check that the AI doesn't immediately break, unless I wanted to release it?

Comment author: ArisKatsaris 23 January 2012 01:26:33PM * 4 points

Why would I honestly have a conversation? Why would I run a transhuman AI that I didn't want to take over the world?

Because you'd want to know how to cure cancer, how best to defeat violent religious fundamentalism, etc. If you want to become President, the AI may need to teach you argumentation techniques. And so forth.

Comment author: lavalamp 22 January 2012 11:58:40PM 0 points

Values in the middle indicate that I'll have a conversation and probably not budge, with a chance of being totally convinced.

Ah, so it's more like the probability that the intelligence in the box exceeds the threshold required to convince you of something. That makes sense.

But I am now convinced that the whole idea of boxing is stupid.

Agreed. Everything you said, plus: if you think there's any chance that your boxed AI might be malicious/unFriendly, talking to it has to be one of the stupidest things you could possibly do...