TimFreeman comments on Model Uncertainty, Pascalian Reasoning and Utilitarianism - Less Wrong

Post author: multifoliaterose 14 June 2011 03:19AM


Comment author: TimFreeman 16 June 2011 02:35:23PM 3 points

Why are people on Less Wrong still talking about 'their' 'values' using deviations from a model that assumes they have a 'utility function'? It's not enough to explicitly believe and disclaim that this is obviously an incorrect model; at some point you have to actually stop using the model and adopt something else. People are godshatter, they are incoherent, they are inconsistent, they are an abstraction, they are confused about morality, their revealed preferences aren't their preferences, their revealed preferences aren't even their revealed preferences, their verbally expressed preferences aren't even preferences, the beliefs of parts of them about the preferences of other parts of them aren't their preferences, the beliefs of parts of them aren't even beliefs, preferences aren't morality, predisposition isn't justification, et cetera...

We might make something someday that isn't godshatter, and we need to practice.

I agree that reforming humans to be rational is hopeless, but it is nevertheless useful to imagine how a rational being would deal with things.

Comment author: jsteinhardt 16 June 2011 10:46:33PM 0 points

But VNM utility is just one particularly unintuitive property of rational agents. (For instance, I would never ever use a utility function to represent the values of an AGI.) Surely we can talk about rational agents in other ways that are not so confusing?
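
For concreteness, the choice rule that VNM utility licenses is simple: rank lotteries by expected utility. A toy sketch in Python, with outcomes and utility values invented purely for illustration:

    # An agent with utility function u ranks lotteries (probability
    # distributions over outcomes) by expected utility.
    u = {"apple": 1.0, "banana": 0.4, "nothing": 0.0}

    def expected_utility(lottery):
        # A lottery is a list of (probability, outcome) pairs summing to 1.
        return sum(p * u[outcome] for p, outcome in lottery)

    safe = [(1.0, "banana")]
    gamble = [(0.5, "apple"), (0.5, "nothing")]
    chosen = max([safe, gamble], key=expected_utility)  # gamble: 0.5 > 0.4

The unintuitive part is not this rule but the representation theorem behind it: any preferences satisfying the VNM axioms can be written this way, whether or not the agent thinks of itself as having a utility function.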

Also, I don't think VNM utility takes into account things like bounded computational resources, although I could be wrong. Either way, just because something is mathematically proven to exist doesn't mean that we should have to use it.

Comment author: TimFreeman 17 June 2011 10:23:09PM 0 points

Surely we can talk about rational agents in other ways that are not so confusing?

Who is sure? If you're saying that, I hope you are. What do you propose?

Either way, just because something is mathematically proven to exist doesn't mean that we should have to use it.

I don't think anybody advocated what you're arguing against there.

The nearest thing I'm willing to argue for is that one of the following possibilities holds:

  • We use something that has been mathematically proven to exist, now.

  • We might be speaking nonsense, and whether we are depends on whether the concepts we're using can eventually be mathematically proven to make sense.

Comment author: timtyler 16 June 2011 09:59:20PM -1 points

Since even irrational agents can be modelled using a utility function, no "reforming" is needed.

Comment author: jsteinhardt 16 June 2011 10:43:25PM 1 point

How can they be modeled with a utility function?

Comment author: timtyler 17 June 2011 07:08:43AM 2 points

As explained here:

Any agent can be expressed as an O-maximizer (as we show in Section 3.1)
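
The construction behind that claim is simple enough to sketch: view the agent as a function from interaction histories to actions, and define an objective that pays 1 exactly when the chosen action matches the agent's. A maximizer of that objective then behaves identically to the original agent. A minimal Python sketch of the idea (illustrative notation, not the paper's; it assumes a deterministic agent and a finite action set):

    def as_objective(agent):
        # `agent` is any function from an interaction history to an action.
        # The returned objective scores (history, action) as 1 if the action
        # is the one `agent` would take, and 0 otherwise.
        def objective(history, action):
            return 1 if action == agent(history) else 0
        return objective

    def maximizer(objective, actions):
        # An agent that picks an objective-maximizing action. With the
        # objective above, it reproduces the wrapped agent exactly.
        def policy(history):
            return max(actions, key=lambda a: objective(history, a))
        return policy

Note that the objective hard-codes the agent's entire policy, so nothing here makes the agent any more coherent; it only relabels it.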

Comment author: jsteinhardt 17 June 2011 08:59:02PM 1 point

Thanks for the reference.

It seems, though, that the reward function might be extremely complicated in general (in fact, I suspect this paper can be used to show that the reward function can be uncomputable).
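
One way to make that suspicion precise (in illustrative notation, not the paper's): the construction hard-codes the agent's policy pi into the objective O_pi, and pi can be recovered from O_pi by a constant-size argmax wrapper, so the Kolmogorov complexities satisfy

    K(O_\pi) \;\ge\; K(\pi) - c

for some constant c independent of pi. In particular, if pi is defined in terms of an uncomputable oracle, then O_pi is uncomputable as well.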

Comment author: timtyler 17 June 2011 09:50:16PM 0 points

The whole universe may well be computable, according to the Church–Turing–Deutsch principle. If it isn't, the above analysis may not apply.

Comment author: TimFreeman 17 June 2011 10:33:28PM 0 points

I agree with jsteinhardt; thanks for the reference.

I agree that the reward functions will vary in complexity. If you do the usual thing in Solomonoff induction, where the plausibility of a reward function decreases exponentially with its size, then so far as I can tell you can infer reward functions from behavior, provided you can infer behavior.
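
A minimal sketch of that inference in Python, with a toy two-element hypothesis space standing in for the space of all reward-function programs and a 2^-length prior playing the Solomonoff role (states, actions, and data are all invented for illustration):

    # Toy Bayesian inference of a reward function from observed behavior.
    # Hypotheses are (source, reward_fn) pairs; the prior on a hypothesis
    # falls off exponentially with the length of its source.
    ACTIONS = ("eat", "wait")

    hypotheses = [
        ("a == 'eat'", lambda s, a: a == "eat"),
        ("a == 'eat' and s == 'hungry'",
         lambda s, a: a == "eat" and s == "hungry"),
    ]

    def prior(source):
        return 2.0 ** -len(source)

    def likelihood(reward_fn, observations):
        # Probability of the observed actions if the agent takes a
        # reward-maximizing action in each state, uniformly among ties.
        p = 1.0
        for state, action in observations:
            best = max(reward_fn(state, a) for a in ACTIONS)
            argmax = [a for a in ACTIONS if reward_fn(state, a) == best]
            p *= (1.0 / len(argmax)) if action in argmax else 0.0
        return p

    observed = [("hungry", "eat"), ("full", "wait")]
    weights = {src: prior(src) * likelihood(fn, observed)
               for src, fn in hypotheses}
    total = sum(weights.values())
    posterior = {src: w / total for src, w in weights.items()}
    # All posterior mass lands on the second hypothesis: the first predicts
    # "eat" in every state and cannot explain the ("full", "wait") step.

The "if you can infer behavior" caveat is what the likelihood function hides: scoring a hypothesis requires predicting what the agent would do under it.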

We need to infer a utility function for somebody if we're going to help them get what they want, since a utility function is the only reasonable description I know of for what an agent wants.