wedrifid comments on Model Uncertainty, Pascalian Reasoning and Utilitarianism - Less Wrong

23 Post author: multifoliaterose 14 June 2011 03:19AM



Comment author: wedrifid 19 June 2011 06:38:56PM 0 points [-]

Why are people on Less Wrong still talking about 'their' 'values' using deviations from a model that assumes they have a 'utility function'?

Because rational agents care about whatever the hell they want to care about. I, personally, choose to care about my abstract 'utility function' with the clear implication that said utility function is something that must be messily constructed from godshatter preferences. And that's ok because it is what I want to want.

Can we please avoid using the concept of a human "utility function" even as an abstraction

No. It is a useful abstraction. Not using utility function measures does not appear to improve abstract decision making processes. I'm going to stick with it.

Comment author: Will_Newsome 20 June 2011 09:20:29AM -2 points [-]

Eliezer's original quote was better. Wasn't it about superintelligences? Anyway you are not a superintelligence or a rational agent and therefore have not yet earned the right to want to want whatever you think you want to want. Then again I don't have the right to deny rights so whatever.

Comment author: wedrifid 20 June 2011 05:51:35PM *  -1 points [-]

Eliezer's original quote was better. Wasn't it about superintelligences?

I wasn't quoting Eliezer; I made (and stand by) a plain English claim. It does happen to be similar in form to a recent instance of Eliezer summarily rejecting PhilGoetz's declaration that rationalists don't care about the future. That quote from Eliezer was about "expected-utility-maximising agents", which would make the quote rather inappropriate in this context.

I will actually strengthen my declaration to:

Because agents can care about whatever the hell they want to care about. (This too should be uncontroversial.)

Anyway you are not a superintelligence or a rational agent and therefore have not yet earned the right to want to want whatever you think you want to want.

An agent does not determine its preferences by mere vocalisation, nor do its beliefs about its preferences intrinsically make them so. Nevertheless, I do care about my utility function (with the vaguely specified caveats). If you could suggest a formalization sufficiently useful for decision making that I could care about even more than my utility function, then I would do so. But you cannot.

Then again I don't have the right to deny rights so whatever.

No, you don't. The only way you could apply limits on what I want is by physically altering my molecular makeup. As well as being rather difficult for you to do on any significant scale, I could credibly claim that the new physical configuration you constructed from my atoms is other than 'me'. You can't get much more of a fundamental destruction of identity than by changing what an agent wants.

I don't object to you declaring that you don't have, or don't want to have, a utility function. That's your problem, not mine. But I will certainly object to any interventions made that deny that others may have them.