simpleton2 comments on The True Prisoner's Dilemma - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
I apologize if this is covered by basic decision theory, but if we additionally assume:
- the choice in our universe is made by a perfectly rational optimization process instead of a human
- the paperclip maximizer is also a perfect rationalist, albeit with a very different utility function
- each optimization process can verify the rationality of the other
then won't each side choose to cooperate, having correctly concluded that it will defect iff the other does?
Each side's choice necessarily reveals the other's; they're the outputs of equivalent computations.
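To make the "equivalent computations" point concrete, here's a minimal sketch (my own illustration, not from the post): each agent's decision procedure inspects the other's program and cooperates exactly when the two programs are the same computation. All names and payoffs below are assumptions chosen for illustration; the payoff matrix is the standard prisoner's dilemma ordering, not the post's paperclip scenario.

```python
# Hypothetical sketch: agents that cooperate iff the other side is
# verifiably running the same decision procedure.

# Standard PD payoffs as (my utility, their utility); illustrative values.
PAYOFFS = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

def decide(my_source: str, their_source: str) -> str:
    """Cooperate iff the other agent is provably the same computation;
    otherwise defect. 'Source' stands in for whatever each side can
    verify about the other's rationality."""
    return "C" if my_source == their_source else "D"

SOURCE = "decide-v1"  # stand-in for the agent's verified program

# Two copies of the same computation: both must produce the same output,
# so "I defect iff the other does" holds by construction.
a = decide(SOURCE, SOURCE)
b = decide(SOURCE, SOURCE)
print(a, b, PAYOFFS[(a, b)])  # -> C C (3, 3)

# Against a different program, no such guarantee, so defect.
print(decide(SOURCE, "some-other-bot"))  # -> D
```

The point of the sketch: neither agent "chooses" independently of the other. Because both outputs come from one computation run twice, mutual cooperation (3, 3) and mutual defection (1, 1) are the only reachable outcomes, and the former dominates.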