cousin_it comments on Open Thread: May 2010 - Less Wrong

Post author: Jack 01 May 2010 05:29AM


Comments (543)


Comment author: cousin_it 19 May 2010 10:17:27AM * 3 points

Rolf Nelson's AI deterrence doesn't work, for Schellingian reasons: the Rogue AI has an incentive to modify itself to not understand such threats before it first looks at the outside world. This makes you unable to threaten, because when you simulate the Rogue AI you will see its precommitment first. So the Rogue AI negates your "first mover advantage" by becoming the first mover in your simulation :-) Discuss.
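(The argument above can be sketched as a toy model. This is not from the original thread; the function names and the two-action "cooperate"/"defect" framing are illustrative assumptions. The deterrer only issues a threat if simulating the rogue AI shows the threat would work, so a rogue that fixes its policy before observing anything is never deterred.)

```python
def rogue_ai(ignores_threats):
    """Return the rogue AI's policy as a function of whether it observes a threat."""
    if ignores_threats:
        # Precommitment: the policy is fixed before any observation,
        # so simulated threats cannot influence it.
        return lambda threat_observed: "defect"
    # A threat-sensitive rogue cooperates when threatened.
    return lambda threat_observed: "cooperate" if threat_observed else "defect"

def deterrer(rogue_policy):
    """Simulate the rogue under a threat; only threaten if the simulation shows it works."""
    simulated_response = rogue_policy(True)  # run the rogue inside a simulated threat
    return "threaten" if simulated_response == "cooperate" else "give up"

# A rogue that responds to threats can be deterred...
print(deterrer(rogue_ai(ignores_threats=False)))  # -> threaten
# ...but one that precommits to ignore them is seen to be
# immune inside the simulation, so the deterrer gives up.
print(deterrer(rogue_ai(ignores_threats=True)))   # -> give up
```

In this toy framing, the "first mover advantage" flips exactly as cousin_it describes: the deterrer's own simulation reveals the rogue's precommitment before any threat can be made.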

Comment author: rolf_nelson 20 May 2010 02:14:56AM 1 point

I agree that AI deterrence will necessarily fail if:

  1. All AIs modify themselves to ignore threats from all agents (including ones they consider irrational), and

  2. any deterrence simulation counts as a threat.

Why do you believe that either of these statements is true? Do you have some concrete definition of 'threat' in mind?

Comment author: cousin_it 20 May 2010 07:05:55AM * 0 points

I don't believe statement 1 and don't see why it's required. After all, we are quite rational, and so is our future FAI.

Comment author: Vladimir_Nesov 19 May 2010 11:24:46AM 0 points

The notion of "first mover" is meaningless when the other player's program is visible from the start.