# Manfred comments on John Danaher on 'The Superintelligent Will' - Less Wrong Discussion

5 points · 03 April 2012 03:08AM




Comment author: 03 April 2012 03:49:35PM * 5 points

Hm, the Future Tuesday Indifference example is an interesting one. The reason it seems reflectively incoherent is that it violates an expected-utility axiom if interpreted the typical way. If you calculate the expected utility of an option but forget to add in the expected utility from future Tuesdays, you simply get the wrong answer.

However, interestingly, you can't self-modify into being a normal hedonist with only causal decision theory. If it's not Tuesday, then changing to include Tuesdays doesn't increase what you currently calculate as the expected utility. If it is Tuesday, then it's too late, unless you have a decision theory that allows you to treat a change to optimality as a good idea no matter when you make it.
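A minimal toy sketch of this point, assuming a made-up two-day scenario (the day labels, the pleasure numbers, and the single choice between Tuesday agony and mild Wednesday pain are all illustrative, not from the original discussion): the Future-Tuesday-Indifferent agent scores the proposed self-modification with its *current* utility function, so on a non-Tuesday the modification looks like a loss.

```python
# Hypothetical outcomes: agony on Tuesday vs. mild pain on Wednesday.
# All numbers are illustrative.
TUESDAY_AGONY = {"Tuesday": -100, "Wednesday": 0}
WEDNESDAY_MILD_PAIN = {"Tuesday": 0, "Wednesday": -1}

def u_fti(outcome):
    """Future-Tuesday-Indifferent utility: Tuesdays count for nothing."""
    return sum(v for day, v in outcome.items() if day != "Tuesday")

def u_hedonist(outcome):
    """Ordinary hedonist utility: every day counts."""
    return sum(outcome.values())

def choose(u):
    """Pick whichever option the utility function u prefers."""
    return max([TUESDAY_AGONY, WEDNESDAY_MILD_PAIN], key=u)

# What each version of the agent would actually do:
fti_choice = choose(u_fti)            # accepts the Tuesday agony
hedonist_choice = choose(u_hedonist)  # accepts the mild Wednesday pain

# Evaluating the self-modification with the CURRENT (FTI) utility:
value_if_unmodified = u_fti(fti_choice)     # 0
value_if_modified = u_fti(hedonist_choice)  # -1
# By its current lights, modifying is strictly worse, so a CDT agent
# deciding before Tuesday has no incentive to become a hedonist.
print(value_if_unmodified, value_if_modified)  # → 0 -1
```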

Comment author: 03 April 2012 11:36:32PM 2 points

The problem is that the utility function isn't constant over time. If you, today, are indifferent to what happens on future Tuesdays, then you will also think it a bad thing that your future self cares what happens on the Tuesday it's in. You will therefore replace your current self with a different self that is indifferent to all Tuesdays, including the one it is then in, thus preserving the goal that you have today.
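The same toy setup (again with made-up day labels and pleasure numbers) can illustrate this direction too: once Tuesday arrives, its agony is no longer "future", so the unmodified agent pays a Wednesday cost to escape it. Evaluated by today's (pre-Tuesday) utility function, that successor wastes utility, while a successor rewritten to stay indifferent even on Tuesday preserves today's goal.

```python
# Hypothetical scenario: on Tuesday itself, the agent can pay a small
# Wednesday cost to escape Tuesday agony. All numbers are illustrative.
ESCAPE = {"Tuesday": 0, "Wednesday": -1}   # pay on Wednesday to avoid agony
ENDURE = {"Tuesday": -100, "Wednesday": 0}

def u_today(outcome):
    """Today's utility function: indifferent to the future Tuesday."""
    return sum(v for day, v in outcome.items() if day != "Tuesday")

def unmodified_successor():
    """By Tuesday the agony is no longer 'future', so it counts fully."""
    return max([ESCAPE, ENDURE], key=lambda o: sum(o.values()))

def modified_successor():
    """A successor rewritten to stay indifferent to the Tuesday it's in."""
    return max([ESCAPE, ENDURE], key=u_today)

# Today's self scores each successor's behaviour by today's values:
print(u_today(unmodified_successor()))  # -1: spends valued Wednesday utility
print(u_today(modified_successor()))    #  0: preserves today's goal
```

So by today's lights the rewritten successor is strictly better, which is why today's self would perform the replacement.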

Comment author: 04 April 2012 12:28:16AM 0 points

Good point. I have to remember not to confuse expected utility with future utility.