momothefiddler comments on Delayed Gratification vs. a Time-Dependent Utility Function - Less Wrong

2 Post author: momothefiddler 06 May 2012 12:32AM




Comment author: jimrandomh 06 May 2012 02:14:04AM 0 points

"Does the utility function at the time of the choice have some sort of preferred status in the calculation?"

Yes, it does. Your present utility function may refer to the utility functions of your future selves (e.g., you want your future selves to be happy), but structurally speaking, your present-day preferences about your future selves are the only channel through which those other utility functions can bear on your decisions.

Comment author: momothefiddler 06 May 2012 02:35:26AM 0 points

My utility function maximises utilons (and I think this is neither entirely nonsensical nor entirely trivial in the context). I want my future selves to be "happy", which is ill-defined.

I don't know how to say this precisely, but I want as many utilons as possible from as many future selves as possible. The problem arises when it appears that actively changing my future selves' utility functions to match their worlds is the best way to do that, yet my current self recoils from the proposition. If I shut up and multiply, I get the opposite result from the one Eliezer gets, and I tend to trust his calculations more than my own.

Comment author: FeepingCreature 08 May 2012 12:31:33PM 0 points

But surely you must have some constraints on what you consider future selves - some weighting function that prevents you from simply reducing yourself to a utilon busy-beaver.

Comment author: momothefiddler 08 May 2012 03:37:07PM *  0 points

As far as I can tell, the only things that keep me from reducing myself to a utilon busy-beaver are a) insufficiently detailed information on the likelihood of each potential future-me utility function, and b) an internally inconsistent utility function.

What I'm addressing here is b): my valuation of a universe composed entirely of minds that most-value a universe composed entirely of themselves is path-dependent. My initial reaction is that such a universe scores very negatively on my current function, but I find it hard to believe that this negative term is truly of larger magnitude than {number of minds} * {length of existence of this universe} * {number of utilons per mind} * {my personal utility of another mind's utilon}.

Even for a very small positive value for the last (and it's definitely not negative or 0 - I'd need some justification to torture someone to death), the sheer scale of the other values should trivialize my personal preference that the universe include discovery and exploration.
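The comparison above can be sketched numerically. This is a minimal illustration with entirely made-up numbers (none of them come from the thread); the only point it demonstrates is the scale argument: a product of astronomically large factors swamps any fixed personal-preference term, even when the per-utilon weight is tiny.

```python
# Hypothetical magnitudes, chosen only for illustration of scale.
num_minds = 1e10          # assumed number of minds in the universe
existence_length = 1e9    # assumed duration of the universe (arbitrary units)
utilons_per_mind = 1.0    # assumed utilons each mind receives per unit time
weight_per_utilon = 1e-6  # small but strictly positive value placed on
                          # another mind's utilon (the last factor above)

# Aggregate value of the busy-beaver universe under the current function.
aggregate = num_minds * existence_length * utilons_per_mind * weight_per_utilon

# Fixed value placed on the universe including discovery and exploration.
personal_preference = 1e6  # assumed, for comparison

print(aggregate)                        # 1e13
print(aggregate > personal_preference)  # True: the aggregate term dominates
```

Under these assumptions the aggregate term exceeds the personal-preference term by seven orders of magnitude, which is the sense in which the sheer scale of the other factors trivializes the fixed preference.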