GuySrinivasan comments on You cannot be mistaken about (not) wanting to wirehead - Less Wrong

34 Post author: Kaj_Sotala 26 January 2010 12:06PM


Comment author: Stuart_Armstrong 26 January 2010 04:03:03PM 8 points

Rigorously, I think the argument doesn't stand up in its ultimate form. But it's tiptoeing in the direction of a very interesting point about how to deal with changing utility functions, especially in circumstances where the changes might be predictable.

The simple answer is "judge everything in your future by your current utility function", but that doesn't seem satisfactory. Nor is "judge everything that occurs in your future by your utility function at the time", because of lobotomies, addictive wireheading, and so on. Some people have utility functions that they expect will change; and the degree of change allowable may vary from person to person and subject to subject (e.g., people opposed to polygamy may have a wide range of reactions to the announcement "in fifty years' time, you will approve of polygamy"). Some people trust their own CEV; I never would, but I might trust it one level removed.

It's a difficult subject, and my upvote was in thanks for bringing it up. Subsequent posts on the subject I'll judge more harshly.

Comment author: GuySrinivasan 26 January 2010 08:10:22PM 1 point

I completely agree. The argument may be wrong, but the point it raises is important: I have been sloppily assuming things about which possible causal continuations of myself I care about.

My initial reaction: we can still use our current utility function, but make sure the CEV analysis or whatever doesn't ask "what would you want if you were more intelligent, etc.?" but instead "what would you want if you were changed in a way you currently want to be changed?"

This includes "what would you want if we found fixed points of iterated changes based on previous preferences?", so that if I currently want to value paperclips more and don't care whether I value factories differently, but upon being modified to value paperclips more it turns out I would want to value factories more, then changing my preferences to value factories more is acceptable.

The part where I'm getting confused right now (rather, the part where I notice I'm getting confused :)) is that calculating fixed points almost certainly depends on the order of alteration, so that there are lots of different future-mes that I prefer to current-me, each at a local maximum.
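The order-dependence worry above can be made concrete with a toy sketch. Everything here is invented for illustration (the preference dictionary, the two conditional self-modifications, and their rules are assumptions, not anything specified in the thread): two changes that each block the other, applied in different orders, settle into different fixed points, both stable under further iteration.

```python
# Hypothetical sketch: two conditional self-modifications whose fixed
# point depends on the order of alteration. The specific rules are
# invented assumptions for illustration only.

def endorse_paperclips(prefs):
    """Come to value paperclips strongly, unless factories are already valued."""
    if prefs["factories"] == 0:
        return {**prefs, "paperclips": 2}
    return prefs

def endorse_factories(prefs):
    """Come to value factories, unless paperclips are already strongly valued."""
    if prefs["paperclips"] < 2:
        return {**prefs, "factories": 1}
    return prefs

def fixed_point(prefs, changes, max_iters=100):
    """Apply the changes in the given order until the preferences stop moving."""
    for _ in range(max_iters):
        new = prefs
        for change in changes:
            new = change(new)
        if new == prefs:
            break
        prefs = new
    return prefs

start = {"paperclips": 0, "factories": 0}
a = fixed_point(start, [endorse_paperclips, endorse_factories])
b = fixed_point(start, [endorse_factories, endorse_paperclips])
print(a)  # {'paperclips': 2, 'factories': 0}
print(b)  # {'paperclips': 0, 'factories': 1}
```

Both results are genuine fixed points, since re-applying either change leaves them unchanged, yet they disagree. That is exactly the "lots of different future-mes at local maxima" situation: the iteration finds *a* stable self, not a canonical one.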

Also I have no idea how much we need to apply our current preferences to the fixed-point-mes. Not at all? 100%? Somehow something in-between? Or to the intermediate-state-mes.

Comment author: Stuart_Armstrong 27 January 2010 10:38:28AM 1 point

I don't think the order issue is a big problem - there is not One Glowing Solution; we just need to find something nice and tolerable.

Also I have no idea how much we need to apply our current preferences to the fixed-point-mes. Not at all? 100%? Somehow something in-between? Or to the intermediate-state-mes.

That is the question.