ThrustVectoring comments on Pinpointing Utility - Less Wrong

57 [deleted] 01 February 2013 03:58AM




Comment author: [deleted] 02 February 2013 06:24:48AM  0 points

experiences utility

Oops. Radiation poisoning. Utility is about planning, not experiencing or enjoying.

What this lets us do is convert "there's a chance I get turned into a whale and I'm not sure if I will like it" into "there's a chance that I get turned into a whale and like it, and another chance that I get turned into a whale and don't like it".

I went through the math a couple of days ago with another smart philosopher-type. We are pretty sure that this (adding preference uncertainty as an additional dimension of your ontology) is a fully general solution to preference uncertainty. Unfortunately, it requires a bit of moral philosophy to pin down the relative weights of the utility functions. That is, the utility functions and their respective probabilities are not enough to uniquely identify the combined utility function. Which is actually totally OK, because you can get that information from the same source where you got the partial utility functions.
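A minimal sketch of the non-uniqueness claim (the outcomes, numbers, and 50/50 probabilities here are all hypothetical, not from the comment): mix two candidate utility functions by their probabilities, and notice that the verdict depends on a relative scaling the probabilities don't supply.

```python
# Two candidate utility functions over the same outcomes, with a
# 50/50 chance that either one reflects your true preferences.
u1 = {"whale": 10.0, "human": 0.0}
u2 = {"whale": -1.0, "human": 0.0}
p1, p2 = 0.5, 0.5

def combined(w1, w2):
    """Probability-weighted mixture under a chosen relative scaling."""
    return {o: p1 * w1 * u1[o] + p2 * w2 * u2[o] for o in u1}

# With equal weights, becoming a whale looks good on balance...
a = combined(1.0, 1.0)
# ...but rescaling u2 (an equally valid representation of the same
# preferences) flips the verdict. Probabilities alone don't pin it down.
b = combined(1.0, 20.0)
assert a["whale"] > a["human"]
assert b["whale"] < b["human"]
```

The extra information needed to fix `w1` and `w2` is exactly the "bit of moral philosophy" the comment mentions.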

I'll go through the proof and implications/discussion in an upcoming post. Hopefully. I don't exactly have a track record of following through on things...

Comment author: ThrustVectoring 02 February 2013 07:38:09PM  0 points

Nice catch on the radiation poisoning. Revised sentence:

I think that every uncertainty about a utility function is just a hidden uncertainty about how to weigh the different experiences that generate a utility function

Also

That is, the utility functions and their respective probabilities are not enough to uniquely identify the combined utility function.

This is 100% expected, since utility functions that differ only by a positive scaling factor and a shift of the zero point are equivalent.
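That equivalence is easy to check directly. A quick illustrative sketch (the utility function and lotteries below are made up for the example): a positive affine transform of a utility function ranks any pair of lotteries the same way the original does.

```python
def expected_utility(u, lottery):
    """lottery: list of (probability, outcome) pairs."""
    return sum(p * u(o) for p, o in lottery)

u = lambda x: x ** 2          # some utility function over numeric outcomes
v = lambda x: 3.0 * u(x) + 7  # positive rescaling plus a shifted zero point

lot_a = [(0.5, 0.0), (0.5, 4.0)]  # coin flip between outcomes 0 and 4
lot_b = [(1.0, 2.0)]              # outcome 2 for sure

# The same lottery is preferred under either representation.
prefers_a_under_u = expected_utility(u, lot_a) > expected_utility(u, lot_b)
prefers_a_under_v = expected_utility(v, lot_a) > expected_utility(v, lot_b)
assert prefers_a_under_u == prefers_a_under_v
```

So any scheme for combining partial utility functions has to supply the scale and zero point for each component from somewhere else.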

I think we're talking about the same thing when you say "adding preference uncertainty as an additional dimension of your ontology". It's kind of hard to tell at this level of abstraction.