wedrifid comments on Model Uncertainty, Pascalian Reasoning and Utilitarianism - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Comments (154)
ETA: This is a meta comment about some aspects of some comments on this post, and about what I perceive to be problems with the sort of communication and thinking that allows those aspects to persist. It is not meant as a critique of the original post.
ETA2: This comment lacks enough concreteness to act as a serious consideration in favor of one policy over another. Please disregard it as a suggestion for how LW should normatively respond to something. Instead, readers might consider whether they would personally benefit, on an individual basis, from adopting the policy I may be suggesting.
Why are people on Less Wrong still talking about 'their' 'values' using deviations from a model that assumes they have a 'utility function'? It's not enough to explicitly believe and disclaim that this is obviously an incorrect model; at some point you have to actually stop using the model and adopt something else. People are godshatter, they are incoherent, they are inconsistent, they are an abstraction, they are confused about morality, their revealed preferences aren't their preferences, their revealed preferences aren't even their revealed preferences, their verbally expressed preferences aren't even preferences, the beliefs of parts of them about the preferences of other parts of them aren't their preferences, the beliefs of parts of them aren't even beliefs, preferences aren't morality, predisposition isn't justification, et cetera...
Can we please avoid using the concept of a human "utility function" even as an abstraction, unless it obviously makes sense to do so? If you're specific enough and careful enough it can work out okay (e.g. see JenniferRM's comment) but generally it is just a bad idea. Am I wrong to think this is both obviously and non-obviously misleading in a multitude of ways?
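(As a concrete aside, and purely as the textbook sketch rather than anyone's claim about actual humans: the formal object usually meant by 'utility function' here is the von Neumann-Morgenstern expected-utility representation. The complaint above is precisely that people do not satisfy its axioms, so the theorem hands them no such function.)

% Minimal LaTeX sketch of the standard vNM representation theorem:
% if a preference relation \succeq over lotteries satisfies completeness,
% transitivity, continuity and independence, then there is a function u with
\[
  L_1 \succeq L_2 \quad\Longleftrightarrow\quad \mathbb{E}_{o \sim L_1}[u(o)] \;\ge\; \mathbb{E}_{o \sim L_2}[u(o)],
\]
% and u is unique only up to positive affine transformation (u' = a\,u + b,\; a > 0).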
Because rational agents care about whatever the hell they want to care about. I, personally, choose to care about my abstract 'utility function' with the clear implication that said utility function is something that must be messily constructed from godshatter preferences. And that's ok because it is what I want to want.
No. It is a useful abstraction. Dropping utility-function measures does not appear to improve abstract decision-making. I'm going to stick with it.
Eliezer's original quote was better. Wasn't it about superintelligences? Anyway, you are not a superintelligence or a rational agent and therefore have not yet earned the right to want to want whatever you think you want to want. Then again, I don't have the right to deny rights, so whatever.
I wasn't quoting Eliezer; I made (and stand by) a plain English claim. It does happen to be similar in form to a recent instance of Eliezer summarily rejecting PhilGoetz's declaration that rationalists don't care about the future. That quote from Eliezer was about "expected-utility-maximising agents", which would make it rather inappropriate in this context.
I will actually strengthen my declaration to:
Because agents can care about whatever the hell they want to care about. (This too should be uncontroversial.)
An agent does not determine its preferences by mere vocalisation, nor does its belief about its preferences intrinsically make them so. Nevertheless I do care about my utility function (with the vaguely specified caveats). If you could suggest a formalization sufficiently useful for decision making that I could care about even more than my utility function, I would do so. But you cannot.
No, you don't. The only way you could apply limits on what I want is by physically altering my molecular makeup. As well as being rather difficult for you to do on any significant scale, that would let me credibly claim that the new physical configuration you constructed from my atoms is other than 'me'. You can't get much more fundamental a destruction of identity than changing what an agent wants.
I don't object to you declaring that you don't have, or don't want to have, a utility function. That's your problem, not mine. But I will certainly object to any interventions made that deny that others may have them.