multifoliaterose comments on Model Uncertainty, Pascalian Reasoning and Utilitarianism - Less Wrong

23 Post author: multifoliaterose 14 June 2011 03:19AM


Comment author: multifoliaterose 14 June 2011 07:52:23PM 3 points

Thanks for your thoughtful comment.

I agree that it's unclear that it makes sense to talk about humans having utility functions; my use of the term was more a manner of speaking than anything else.

It sounds like you're going with something like Counterargument #5, with something like the Dunbar number determining the point at which your concern for others caps off, augmented by some desire to "be a good citizen n'stuff".

Something similar may be true of me, but I'm not sure. I know that I derive a lot of satisfaction from feeling that I'm making the world a better place, and that I'm uncomfortable with the idea that I don't care about people whom I don't know (in light of my abstract belief in the space- and time-independence of moral value); but maybe the intensity of the relevant feelings is sufficiently diminished when the magnitude of uncertainty becomes huge that other interests predominate.

I feel that if I could prove that course X maximizes expected utility, then my interest in pursuing course X would increase dramatically (independently of how small the probabilities are and of the possibility of doing more harm than good), but that having a distinct sense that I'll probably change my mind about whether pursuing course X was a good idea significantly decreases my interest in pursuing it. I find it difficult to determine whether this reflects my "utility function" or whether there's a logical argument from utilitarianism against pursuing courses that one will probably regret (e.g. probable burnout and disillusionment repelling potentially utilitarian bystanders).

Great Adam Smith quotation; I've seen it before, but it's good to have a reference.

Comment author: CarlShulman 14 June 2011 10:26:19PM 5 points

Obligatory OB link: Bostrom and Ord's parliamentary model for normative uncertainty/mixed motivations.

Comment author: timtyler 14 June 2011 10:34:37PM 1 point

> I agree that it's unclear that it makes sense to talk about humans having utility functions; my use of the term was more a manner of speaking than anything else.

They do have them - in this sense:

> It would be convenient if we could show that all O-maximizers have some characteristic behavior pattern, as we do with reward maximizers in Appendix B. We cannot do this, though, because the set of O-maximizers coincides with the set of all agents; any agent can be written in O-maximizer form. To prove this, consider an agent A whose behavior is specified by y_k = A(yx_{<k}). Trivially, we can construct an O-maximizer whose utility is 1 if each y_n in its interaction history is equal to A(yx_{<n}), and 0 otherwise. This O-maximizer will maximize its utility by behaving as A does at every time n. In this way, any agent can be rewritten as an O-maximizer.
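The construction in the quote is simple enough to sketch concretely. Below is a minimal illustration in Python (all names are my own, purely for illustration): given an arbitrary deterministic agent A, we build an indicator utility function that is 1 exactly when the whole action history matches what A would have done, and 0 otherwise; a maximizer of that utility then reproduces A's behavior step by step.

```python
def agent_A(history):
    """An arbitrary deterministic agent; here it just alternates actions 0, 1, 0, 1, ..."""
    return len(history) % 2

def make_indicator_utility(agent):
    """Utility is 1 iff every action in the history is what `agent` would have chosen."""
    def utility(history):
        return int(all(a == agent(history[:n]) for n, a in enumerate(history)))
    return utility

def o_maximizer(utility, history, actions=(0, 1)):
    """Choose the action that maximizes the utility of the extended history."""
    return max(actions, key=lambda a: utility(history + [a]))

# The maximizer of A's indicator utility makes exactly A's choices at every step.
u = make_indicator_utility(agent_A)
history = []
for _ in range(6):
    a = o_maximizer(u, history)
    assert a == agent_A(history)
    history.append(a)
print(history)  # [0, 1, 0, 1, 0, 1]
```

This is why the result is trivial but also somewhat hollow: the "utility function" does no explanatory work here, since it is just a re-description of the agent's behavior.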