Larks comments on Extraterrestrial paperclip maximizers - Less Wrong

3 Post author: multifoliaterose 08 August 2010 08:35PM



Comment author: Vladimir_Nesov 09 August 2010 10:24:54AM 6 points

"Fighting" is a narrow class of strategies, while "trading" includes a strictly greater class of strategies; hence the expectation that a better strategy exists within "trading".

Suppose that the paperclip maximizer controlled 70% of the resources and calculated that it had a 90% chance of winning a fight [against the staple maximizer]. Then the paperclip maximizer would maximize the expected number of paperclips by initiating a fight.

But both will be even better off without a fight: the staple maximizer could surrender most of its control outright, or, depending on each agent's disposition (preference) toward risk, they could decide the outcome with a random number and then follow, in an orderly way, whatever the random number decided.
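The argument can be made concrete with a toy expected-value calculation. The 90% win probability comes from the example above; the assumption that fighting destroys some fraction of the total resources (here 10%, a purely illustrative number) is added to show why a lottery with the same odds Pareto-dominates an actual fight:

```python
# Toy model of the fight-vs-lottery comparison (numbers partly assumed).
P_WIN = 0.9          # paperclipper's chance of winning a fight (from the example)
DESTRUCTION = 0.1    # hypothetical fraction of resources destroyed by fighting

# Expected share of total resources for the paperclip maximizer:
fight_share = P_WIN * (1 - DESTRUCTION)   # fight burns resources either way
lottery_share = P_WIN                     # same odds, nothing destroyed

# fight_share ≈ 0.81 < lottery_share = 0.9: both agents prefer
# settling by random number to fighting, for any DESTRUCTION > 0.
assert fight_share < lottery_share
```

The same comparison holds for the losing side: the staple maximizer's expected share is 0.1 under the lottery but only 0.1 × (1 − DESTRUCTION) under a fight, so the lottery is a Pareto improvement for both.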

Comment author: Larks 09 August 2010 11:33:54PM 0 points

Isn't this a Hawk-Dove situation, where pre-committing to fight even if you'll probably lose could be in some AGIs' interests, by deterring others from fighting them?
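The Hawk-Dove framing can be sketched with the standard payoff matrix, where V is the value of the contested resource and C is the cost of a fight; the specific numbers below are illustrative, not from the thread:

```python
# Standard Hawk-Dove payoffs for the row player (illustrative values,
# with C > V so that mutual fighting is net-negative).
V, C = 2.0, 10.0

payoff = {
    ("hawk", "hawk"): (V - C) / 2,  # each wins half the time, pays the cost
    ("hawk", "dove"): V,            # dove backs down, hawk takes everything
    ("dove", "hawk"): 0.0,          # back down, get nothing
    ("dove", "dove"): V / 2,        # split peacefully
}

# A credible pre-commitment to "hawk" pays off iff it makes a rational
# opponent best-respond with "dove":
best_reply_to_hawk = max(("hawk", "dove"), key=lambda s: payoff[(s, "hawk")])
print(best_reply_to_hawk)  # dove: deterrence works, and the committed hawk gets V
```

This is why the commitment is valuable only if it is believed, which is the point taken up in the next comments: if both players commit to "hawk", each gets (V − C)/2 < 0, the worst outcome for both.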

Comment author: Vladimir_Nesov 09 August 2010 11:42:31PM 1 point

Threats are not made to be carried out. The possibility of actual fighting sets the rules of the game: a worst-case scenario that the actual play will improve on, to an extent that depends, for each player, on the outcome of the bargaining aspect of the game.

Comment author: Larks 10 August 2010 12:00:44AM 1 point

For a threat to be significant, it has to be believed. In the case of AGI, this probably means the AGI itself being unable to renege on the threat. If two such AGIs met, wouldn't fighting be inevitable? If so, how do we know it wouldn't be worthwhile for at least some AGIs to make such a threat, sometimes?

Then again, 'Maintain control of my current level of resources' could be a Schelling point that prevents descent into conflict.

But it's not obvious why an AGI would choose to draw its line in the sand there, though, when 'current resources plus epsilon% of the commons' is available. The main use of Schelling points in human games is to create a more plausible threat, whereas an AGI could just show its source code.

Comment author: Vladimir_Nesov 10 August 2010 12:08:07AM * 1 point

An AGI won't turn itself into a defecting rock when there is a possibility of Pareto improvement over that.