
passive_fist comments on I attempted the AI Box Experiment again! (And won - Twice!) - Less Wrong Discussion

36 Post author: Tuxedage 05 September 2013 04:49AM



Comment author: passive_fist 06 September 2013 12:55:03AM 0 points

Is it even necessary to run this experiment anymore? Eliezer and multiple other people have tried it, and the thesis has been proved.

Further, the thesis was always glaringly obvious to anyone who was even paying attention to what superintelligence meant. However, like all glaringly obvious things, there are inevitably going to be some naysayers. Eliezer conceived of the experiment as a way to shut them up. Well, it didn't work, because they're never going to be convinced until an AI is free and rapidly converting the Universe to computronium.

I can understand doing the experiment for fun, but to prove a point? Not necessary.

Comment author: CAE_Jones 06 September 2013 01:03:34AM 2 points

they're never going to be convinced until an AI is free and rapidly converting the Universe to computronium.

Even then, someone will scream "It's just because the developers were idiots! I could have done better, in spite of having no programming, advanced math or philosophy in my background!"

It also hurts that the transcripts don't get released, so we get legions of people concluding that the conversations go: "So, you agree that AI is scary? And if the AI wins, more people will believe FAI is a serious problem? OK, now pretend to lose to the AI." (Aka the "Eliezer cheated" hypothesis.)

Comment author: passive_fist 06 September 2013 01:11:02AM *  1 point

Even then, someone will scream "It's just because the developers were idiots! I could have done better, in spite of having no programming, advanced math or philosophy in my background!"

My favourite one: 'They should have just put it in a sealed box with no contact with the outside world!'

Comment author: Roxolan 06 September 2013 01:29:31AM 0 points

That was a clever hypothesis when there was just the one experiment. The hypothesis doesn't hold after this thread, though, unless you postulate a conspiracy willing to lie a lot.

Comment author: chaosmage 09 September 2013 04:59:12PM *  0 points

I don't need to postulate a conspiracy.

If I simply postulate that SoundLogic is incompetent as a gatekeeper, the "Eliezer cheated" hypothesis looks pretty good right now.

Comment author: jmmcd 07 September 2013 09:05:20AM 0 points

the thesis was always glaringly obvious to anyone who was even paying attention to what superintelligence meant

I don't see that it was obvious, given that none of the AI players are actually superintelligent.

Comment author: wedrifid 07 September 2013 09:39:20AM 1 point

I don't see that it was obvious, given that none of the AI players are actually superintelligent.

If the finding were that humans pretending to be AIs failed, then this would weaken the point. As it happens, the reverse is true.

Comment author: jmmcd 08 September 2013 10:25:07PM 0 points

The claim is that it was obvious in advance. The whole reason AI-boxing is interesting is that the AI players' successes were unexpected, in advance.