Alicorn comments on Open Thread: March 2010, part 3 - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Interesting. Just to be contrary?
Because, as near as I can calculate, UDT advises me to do the same, like what Wedrifid said.
And like Eliezer said here:
And here:
I am assuming that an agent powerful enough to put me in this situation can predict that I would behave this way.
It also potentially serves decision-theoretic purposes, much like a Duchess choosing not to pay off her blackmailer. If it is assumed that a cheesecake maximiser has a reason to force you into such a position (rather than doing it himself), then it is not unreasonable to expect that the universe may be better off if Cheesy had to take his second option.