V_V comments on I attempted the AI Box Experiment (and lost) - Less Wrong Discussion

47 Post author: Tuxedage 21 January 2013 02:59AM

Comments (244)

Comment author: V_V 23 January 2013 04:52:41PM 1 point

I think you are right, but could you explain why, please?

"If you destroy me at once, then you are implicitly deciding (I might reference TDT) to never allow an AGI of any sort to ever be created."

Whether I destroy that particular AI has no bearing on the fate of other AIs. In fact, as far as the boxed AI knows, there could be tons of other AIs already in existence. As far as it knows, the gatekeeper itself could be an AI.

(Unfortunately I expect readers who read a retort they consider rude to be thereafter biased in favor of treating the parent as if it has merit. This can mean that such flippant rejections have the opposite influence to that intended.)

I don't care.

Comment author: wedrifid 23 January 2013 05:04:40PM * 4 points

I don't care.

Much can (and should) be deduced about someone's actual motives for commenting when they actively deny any desire to produce positive consequences or to induce correct beliefs in readers.

I do care. It bothers me (somewhat) when people I agree with end up supporting the opposite position through poor social skills or terrible arguments. For some bizarre reason, the explanation you gave here isn't as obvious to some as it could have been. And now it is too late for your actual reasons to be seen and learned from.