simpleton2 comments on The True Prisoner's Dilemma - Less Wrong

Post author: Eliezer_Yudkowsky 03 September 2008 09:34PM


Comment author: simpleton2 04 September 2008 08:17:45AM 7 points

I apologize if this is covered by basic decision theory, but if we additionally assume:

- the choice in our universe is made by a perfectly rational optimization process instead of a human

- the paperclip maximizer is also a perfect rationalist, albeit with a very different utility function

- each optimization process can verify the rationality of the other

then won't each side choose to cooperate, after correctly concluding that the other will defect iff it does? Since the equivalent computations rule out the asymmetric outcomes, the only reachable results are mutual cooperation and mutual defection, and both sides prefer the former.

Each side's choice necessarily reveals the other's; they're the outputs of equivalent computations.
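This argument can be sketched in code. The payoff numbers and agent names below are illustrative assumptions (standard Prisoner's Dilemma ordering), not taken from the post; the point is only that when both choices are outputs of the same decision procedure, each agent can restrict its comparison to the symmetric outcomes:

```python
# Hypothetical payoffs to "self" for (self_move, other_move),
# with the standard PD ordering: temptation > reward > punishment > sucker.
PAYOFF = {
    ("C", "C"): 3,  # mutual cooperation (reward)
    ("C", "D"): 0,  # sucker's payoff
    ("D", "C"): 5,  # temptation
    ("D", "D"): 1,  # mutual defection (punishment)
}

def decide() -> str:
    """Decision procedure shared by both perfectly rational agents.

    Each agent has verified that the other runs this same computation,
    so its own output determines the other's: only the outcomes (C, C)
    and (D, D) are reachable.  It therefore compares just those two.
    """
    return "C" if PAYOFF[("C", "C")] > PAYOFF[("D", "D")] else "D"

human_move = decide()
clippy_move = decide()  # same computation, necessarily the same output
print(human_move, clippy_move)  # both agents cooperate
```

With these payoffs the comparison is 3 > 1, so both computations output "C"; an ordinary causal-decision-theory agent, by contrast, would compare across the asymmetric outcomes and defect.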