entirelyuseless comments on Probabilities Small Enough To Ignore: An attack on Pascal's Mugging - Less Wrong

Post author: Kaj_Sotala, 16 September 2015 10:45AM




Comment author: entirelyuseless, 18 September 2015 07:55:36PM, 1 point

I agree that bounded utility implies that utility is not linear in human lives or in other similar matters.

But I have two problems with saying that we should try to get this property. First, no one in real life actually acts as though utility is linear in lives; that is precisely what the literature on scope insensitivity describes. This suggests that people's real utility functions, insofar as such things exist, are bounded.

Second, I think it won't be possible to have a logically coherent set of preferences if you do that (at least combined with your proposal), because you will lose the independence axiom of expected utility theory.

Comment author: Kaj_Sotala, 19 September 2015 05:12:50PM, 0 points

I agree that, insofar as people have something like utility functions, those are probably bounded. But I don't think that an AI's utility function should have the same properties as my utility function, or for that matter the same properties as the utility function of any human. I wouldn't want the AI to discount the well-being of me or my close ones simply because a billion other people are already doing pretty well.

Though ironically given my answer to your first point, I'm somewhat unconcerned by your second point, because humans probably don't have coherent preferences either, and still seem to do fine. My hunch is that rather than trying to make your preferences perfectly coherent, one is better off making a system for detecting sets of circular trades and similar exploits as they happen, and then making local adjustments to fix that particular inconsistency.
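The "detect circular trades as they happen" idea above amounts to cycle detection in a directed graph of strict preferences. A minimal sketch, assuming preferences are recorded as a dict mapping each option to the set of options it beats (the data structure and function name are hypothetical, not from the original comment):

```python
def find_preference_cycle(prefers):
    """Detect a circular trade (A > B > ... > A) in a strict-preference
    graph. `prefers` maps each option to the set of options it beats.
    Returns one cycle as a list of options, or None if preferences are
    acyclic (and hence not exploitable by a money pump)."""
    visited, on_stack = set(), []

    def dfs(node):
        if node in on_stack:                       # back edge: found a cycle
            return on_stack[on_stack.index(node):] + [node]
        if node in visited:
            return None
        visited.add(node)
        on_stack.append(node)
        for beaten in prefers.get(node, ()):
            cycle = dfs(beaten)
            if cycle:
                return cycle
        on_stack.pop()
        return None

    for option in list(prefers):
        cycle = dfs(option)
        if cycle:
            return cycle
    return None

# A > B, B > C, C > A: an exploitable circular trade.
circular = {"A": {"B"}, "B": {"C"}, "C": {"A"}}
assert find_preference_cycle(circular) is not None

# A > B, B > C, A > C: transitive, no cycle to exploit.
coherent = {"A": {"B", "C"}, "B": {"C"}, "C": set()}
assert find_preference_cycle(coherent) is None
```

Once a cycle is found, the local-adjustment step would mean deleting or reversing one edge of that cycle rather than rebuilding the whole preference ordering, consistent with the "local adjustments" framing above.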