XiXiDu comments on Model Uncertainty, Pascalian Reasoning and Utilitarianism - Less Wrong

Post author: multifoliaterose, 14 June 2011 03:19AM


Comment author: Will_Newsome, 14 June 2011 09:15:11PM* · 10 points

ETA: This is a meta comment about some aspects of some comments on this post and what I perceive to be problems with the sort of communication/thinking that leads to the continued existence of those aspects. This comment is not meant to be taken as a critique of the original post.

ETA2: This comment lacks enough concreteness to act as a serious consideration in favor of one policy over another. Please disregard it as a suggestion for how LW should normatively respond to anything. Instead, consider on an individual basis whether you might personally benefit from adopting the sort of policy I may be suggesting.


Why are people on Less Wrong still talking about 'their' 'values' using deviations from a model that assumes they have a 'utility function'? It's not enough to explicitly believe and disclaim that this is obviously an incorrect model; at some point you have to actually stop using the model and adopt something else. People are godshatter, they are incoherent, they are inconsistent, they are an abstraction, they are confused about morality, their revealed preferences aren't their preferences, their revealed preferences aren't even their revealed preferences, their verbally expressed preferences aren't even preferences, the beliefs of parts of them about the preferences of other parts of them aren't their preferences, the beliefs of parts of them aren't even beliefs, preferences aren't morality, predisposition isn't justification, et cetera.

Can we please avoid using the concept of a human "utility function", even as an abstraction, unless it obviously makes sense to do so? If you're specific enough and careful enough it can work out okay (e.g. see JenniferRM's comment), but generally it is just a bad idea. Am I wrong to think this is both obviously and non-obviously misleading in a multitude of ways?

Comment author: XiXiDu, 15 June 2011 10:07:41AM · 0 points

I know what I want based on naive introspection. If you want to have preferences other than those based on naive introspection, then one of your preferences, based on naive introspection, is not to have preferences that are based on naive introspection. I am not sure how you think you could ever get around intuition; can you please elaborate?

Comment author: Will_Newsome, 16 June 2011 07:18:50AM · -1 points

Naive introspection is an epistemic process: it's one kind of algorithm you can run to figure out aspects of the world, in this case your mind. Because it's an epistemic process, we know there are many, many ways it can be suboptimal. (Cognitive biases come to mind, of course; Robin Hanson writes a lot about how naive introspection and actual reasons diverge. But sheer boundedness is also a consideration; we're just not very good Bayesians.) Thus, when you say "one of your preferences, based on naive introspection, is not to have preferences that are based on naive introspection," I think:

If my values are what I think they are,
I desire to believe that my values are what I think they are;
If my values aren't what I think they are,
I desire to believe that my values aren't what I think they are;
Let me not become attached to values that may not be.