DanielLC comments on Total Utility is Illusionary - Less Wrong

Post author: PlatypusNinja 15 June 2014 02:43AM




Comment author: DanielLC 15 June 2014 06:55:10AM 1 point [-]

Wouldn't the highest delta-U be to modify yourself so that you maximize the utility of people as they are right now, and ignore future people even after they're born?

Comment author: Manfred 15 June 2014 07:46:39AM 0 points [-]

Nope.

Comment author: DanielLC 15 June 2014 06:37:29PM 2 points [-]

Why not?

Let me try making this more explicit.

Alice has utility function A. Bob will have utility function B, but he hasn't been born yet.

You can make choices u or v, then once Bob is born, you get another choice between x and y.

A(u) = 1, A(v) = 0, A(x) = 1, A(y) = 0

B(u) = 0, B(v) = 2, B(x) = 0, B(y) = 2

If you can't precommit, you'll do u the first time, for 1 util under A, and y the second, for 2 util under A+B (compared to 1 util for x).

If you can precommit, then you know if you don't, you'll pick uy. Precommitting to ux gives you +1 util under A, and since you're still operating under A, that's what you'll do.
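The arithmetic above can be checked mechanically. Here is a minimal sketch (the variable names and the brute-force search over paths are my own framing, not the commenter's): without precommitment the agent picks u under A and then y under A+B, while precommitment lets it lock in the path that scores best under A alone.

```python
# Utility assignments from the example above.
A = {"u": 1, "v": 0, "x": 1, "y": 0}
B = {"u": 0, "v": 2, "x": 0, "y": 2}

def utility(weights, path):
    """Total utility of a two-choice path like 'ux' under a weight table."""
    return sum(weights[c] for c in path)

# Once Bob exists, the agent optimizes under A+B.
AB = {k: A[k] + B[k] for k in A}

# No precommitment: choose u under A, then whichever of x/y maximizes A+B.
no_precommit = "u" + max("xy", key=lambda c: AB[c])

# With precommitment: still operating under A, pick the best whole path now.
precommit = max(["ux", "uy", "vx", "vy"], key=lambda p: utility(A, p))

print(no_precommit, utility(A, no_precommit))  # uy 1
print(precommit, utility(A, precommit))        # ux 2
```

Running this confirms the claim: precommitting to ux is worth 2 util under A, versus 1 util for the uy path the non-precommitting agent ends up on.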

While I'm at it, you can also get into prisoner's dilemma with your future self, as follows:

A(u) = 1, A(v) = 0, A(x) = 2, A(y) = 0

B(u) = -1, B(v) = 2, B(x) = -2, B(y) = 1

Note that this gives:

A+B(u) = 0, A+B(v) = 2, A+B(x) = 0, A+B(y) = 1

Now, under A, you'd want u for 1 util, and once Bob is born, under A+B you'd want y for 1 util.

But if you instead took vx, that would be worth 2 util for A and 2 util for A+B. So vx is better than uy both from Alice's perspective and from Alice+Bob's perspective. Certainly that would be a better option.
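The prisoner's-dilemma structure can likewise be verified with a short sketch (again my own framing of the commenter's numbers): greedy choice at each step yields uy, which both A and A+B score below the cooperative path vx.

```python
# Utility assignments from the prisoner's-dilemma example.
A = {"u": 1, "v": 0, "x": 2, "y": 0}
B = {"u": -1, "v": 2, "x": -2, "y": 1}
AB = {k: A[k] + B[k] for k in A}  # {'u': 0, 'v': 2, 'x': 0, 'y': 1}

def utility(weights, path):
    return sum(weights[c] for c in path)

# The "defect" path: u maximizes A now, then y maximizes A+B once Bob exists.
first = max("uv", key=lambda c: A[c])
second = max("xy", key=lambda c: AB[c])
defect = first + second  # 'uy'
cooperate = "vx"

print(utility(A, defect), utility(AB, defect))        # 1 1
print(utility(A, cooperate), utility(AB, cooperate))  # 2 2
```

So uy scores 1 util under both A and A+B, while vx scores 2 under both: each stage's locally best move leaves every stage's utility function worse off, just as in a prisoner's dilemma.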

Comment author: Manfred 16 June 2014 01:08:49AM *  0 points [-]

Suppose we build a robot that takes a census of currently existing people, and a list of possible actions, and then takes the action that causes the biggest increase in utility of currently existing people.

You come to this robot before your example starts, and ask "Do you want to precommit to action vx, since that results in higher total utility?"

And the robot replies, "Does taking this action of precommitment cause the biggest increase in utility of currently existing people?"

"No, but you see, in one time step there's this Bob guy who'll pop into being, and if you add in his utilities from the beginning, by the end you'll wish you'd precommitted."

"Will wishing that I'd precommitted be the action that causes the biggest increase in utility of currently existing people?"

You shake your head. "No..."

"Then I can't really see why I'd do such a thing."

Comment author: DanielLC 16 June 2014 03:49:04AM 1 point [-]

And the robot replies, "Does taking this action of precommitment cause the biggest increase in utility of currently existing people?"

I'd say yes. It gives an additional 1 utility to currently existing people, since it ensures that the robot will make a choice that people like later on.

Are you only counting the amount they value the world as it currently is? For example, if someone wants to be buried when they die, the robot wouldn't arrange it, because by the time it happens they won't be in a state to appreciate it?

Comment author: Manfred 16 June 2014 04:09:11AM *  1 point [-]

Ooooh. Okay, I see what you mean now - for some reason I'd interpreted you as saying almost the opposite.

Yup, I was wrong.