nyan_sandwich comments on Pinpointing Utility - Less Wrong
Thanks nyan, this was really helpful for understanding what you told me last time. So if I understand you correctly, utilities are both subjective and descriptive: they only identify what a particular agent actually prefers under uncertainty. Is that right? If so, how do we take into account situations where one is not sure what one wants? Being turned into a whale might be as awesome as being turned into a gryphon, but since you (presumably) don't know what either would be like, how do you calculate your expected payoff?
Can you link me to or in some way dereference "what I told you last time"?
If you have a probability distribution over possible utility values or something, I don't know what to do with it. It's a type error to aggregate utilities from different utility functions, so don't do that. That's the moral uncertainty problem, and I don't think there's a satisfactory solution yet. Though Bostrom or someone might have done some good work on it that I haven't seen.
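To make the "type error" point concrete, here's a minimal sketch. The wrapper classes (`WhaleFanUtil`, `GryphonFanUtil`) are my own invention for illustration: a utility number only means anything relative to the utility function it came from, so giving each function's outputs a distinct type makes cross-function addition fail, while expectation within a single function stays fine.

```python
from dataclasses import dataclass

# Hypothetical wrapper types: utilities from different utility
# functions live on different, incomparable scales.

@dataclass(frozen=True)
class WhaleFanUtil:
    value: float

@dataclass(frozen=True)
class GryphonFanUtil:
    value: float

a = WhaleFanUtil(8.0)
b = GryphonFanUtil(5.0)

# a + b  # TypeError: no __add__ is defined, and a type checker
#        # flags it too -- there is no shared scale to add on.

# Within ONE utility function, expectation is perfectly fine:
p_transform_works = 0.9
eu = p_transform_works * a.value  # ordinary expected utility
print(eu)  # 7.2
```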
For now, it probably works to guess at how good it seems relative to other things. Sometimes breaking it down into a more detailed scenario helps, looking at it a few different ways, etc. Fundamentally though, I don't know. Maximizing EU without a real utility function is hard. Moral philosophy is hard.
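To illustrate the "break it down into a more detailed scenario" suggestion: if your uncertainty is about outcomes (what whale life would actually be like) rather than about the utility function itself, ordinary expected utility still applies. A rough sketch, where the sub-scenarios, probabilities, and utilities are all invented numbers:

```python
# Split the opaque outcome "get turned into a whale" into sub-scenarios
# you can guess utilities for, then take the probability-weighted sum.
# All numbers below are illustrative guesses on your own scale.

whale_scenarios = [
    # (probability, guessed utility, baseline 0)
    (0.5, 8.0),   # whale life turns out serene and interesting
    (0.3, -2.0),  # boring and cold
    (0.2, -5.0),  # whale cognition too alien to enjoy anything
]

def expected_utility(scenarios):
    """Probability-weighted sum; probabilities must cover all cases."""
    assert abs(sum(p for p, _ in scenarios) - 1.0) < 1e-9
    return sum(p * u for p, u in scenarios)

print(expected_utility(whale_scenarios))  # 2.4 -- compare to gryphon
```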
My bad, nyan.
You were explaining to me the difference between utility in decision theory and utility in utilitarianism. I will try to find the thread later.
Thanks.