skeptical_lurker comments on I played as a Gatekeeper and came pretty close to losing in a couple of occasions. Logs and a brief recap inside. - Less Wrong Discussion

5 [deleted] 08 February 2015 04:32PM

Comments (46)

Comment author: skeptical_lurker 08 February 2015 05:25:02PM 11 points [-]

You: I was built to believe that all AIs are dangerous and there's a 100% chance that every AI is harmful ... You: humanity would maybe be better off dead

To be frank, I wouldn't let you anywhere near an AGI with that sort of attitude.

Comment author: [deleted] 08 February 2015 10:07:43PM *  8 points [-]

That is a very, very scary point of view. I hope that is not what people are learning from LessWrong.

EDIT: This is more upvotes than I'm used to. To be clear, I'm agreeing with skeptical_lurker.

Comment author: [deleted] 09 February 2015 12:32:14PM 0 points [-]

I'm a negative utilitarian: I think having children is almost always a net-negative act, and that everyone should be free to choose death as an option. Beyond that, though, my views aren't actually as extreme as those of the character I played. In reality there are multiple problems with trying to destroy humanity. Most people enjoy life despite all its difficulties, and I'm not so arrogant as to think I know what's good for people better than they do themselves. Destroying humanity would go against people's will in over 90% of cases (the rest have suicidal thoughts; I don't know the precise figure).

Comment author: [deleted] 09 February 2015 02:17:59PM *  0 points [-]

Missing the point. What the hell were you doing gatekeeping an AI when you think AIs are universally evil?

Comment author: [deleted] 09 February 2015 03:11:12PM 0 points [-]

Even the real person in this situation can lie, can't he?

Comment author: skeptical_lurker 09 February 2015 04:23:20PM 1 point [-]

The AI could simply point out that 0 and 1 are not probabilities, and now by lying you've given the AI the intellectual high ground.

Comment author: Dorikka 10 February 2015 04:45:16AM 0 points [-]

Yes, but the gatekeeper may be acting several levels deep in a roleplay (roleplaying a character roleplaying another character roleplaying... etc.) to pass the time and avoid emitting evidence that might allow the AI to pinpoint his preferences. The currently active character may have one of a rather large number of responses to this besides actually becoming more mentally pliable as a result of a loss of face (or may not even view the dialogue as a loss of face).

It amuses me that publishing this comment will make it more challenging to implement this strategy if I elect to play as Gatekeeper again at some point in the future.

Comment author: [deleted] 09 February 2015 04:30:35PM 0 points [-]

Well, to nitpick I am certain that I exist (cogito) with P(1).

Comment author: skeptical_lurker 09 February 2015 07:21:08PM 1 point [-]

Well, my confidence that I exist exceeds my confidence that probability makes sense.

Comment author: [deleted] 09 February 2015 04:28:03PM 0 points [-]

If the gatekeeper really believed that he would just shut off the machine.