RichardKennaway comments on St. Petersburg Mugging Implies You Have Bounded Utility - Less Wrong

Post author: TimFreeman 07 June 2011 03:06PM


Comment author: RichardKennaway 07 June 2011 07:28:28PM 0 points [-]

So how do you determine which claims you are giving a prior probability of zero and which you don't?

For Pascal's Mugging scenarios it just seems a reasonable thing to do. Gigantic promises undermine their own credibility, converging to zero in the limit. I don't have a formally expressed rule, but if I were going to work on decision theory I'd look into the possibility of codifying that intuition as an axiom.
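To make that intuition concrete, here is a minimal sketch in Python. The specific fall-off used (credibility shrinking as 1/N^2) is purely illustrative and not a proposed axiom -- the comment above is explicit that no formal rule exists; the only point is that if credibility shrinks faster than the promised payoff grows, the expected contribution of the promise converges to zero.

```python
# Illustrative only: a hypothetical prior under which gigantic promises
# undermine their own credibility faster than the promise grows.

def credibility(promised_utility: float) -> float:
    """Hypothetical credibility of a claim of `promised_utility` utilons,
    falling off as 1/N^2 so that it shrinks faster than N grows."""
    return 1.0 / promised_utility ** 2

def expected_gain(promised_utility: float) -> float:
    """Expected value of taking the mugger's offer under that prior."""
    return credibility(promised_utility) * promised_utility

for n in [10, 10**3, 10**6, 10**9]:
    print(n, expected_gain(n))  # contribution tends to zero as the promise grows
```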

Comment author: endoself 08 June 2011 01:03:34AM *  2 points [-]

Yes, assigning GOD(infinity) a probability of zero means that no finite amount of evidence will shift that. For this particular infinite claim I don't see a problem with that.

What if we came up with a well-evidenced theory of everything that implied GOD(infinity)?

So how do you determine which claims you are giving a prior probability of zero and which you don't?

For Pascal's Mugging scenarios it just seems a reasonable thing to do.

It's not just contrived scenarios; see http://arxiv.org/abs/0712.4318. If utility is unbounded, then for any N there are infinitely many hypotheses that result in utility higher than N.
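A toy illustration of the convergence problem the linked paper is about, with made-up numbers: give hypothesis k a complexity-style prior weight of 2^-k, but let an unbounded utility function assign it a payoff of 3^k. Each term of the expected-utility sum is then (1.5)^k, so the sum diverges and "maximize expected utility" stops being well defined.

```python
# Toy numbers only: prior weight 2^-k versus payoff 3^k for hypothesis k.

def partial_expected_utility(num_hypotheses: int) -> float:
    """Sum of prior(k) * utility(k) over the first `num_hypotheses` hypotheses."""
    return sum((0.5 ** k) * (3.0 ** k) for k in range(1, num_hypotheses + 1))

for n in [10, 20, 40]:
    print(n, partial_expected_utility(n))  # the partial sums grow without bound
```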

Unless you can actually do these things -- actually reconstruct probability theory in a way that makes P(A|B) and P(~A|B) sum to less than 1, and prohibit uncountable measure spaces -- then claiming that you should do them anyway is to make the real insight of Eliezer's article into an empty slogan.

How is this any different than saying "until you can actually make unbounded utility functions converge properly as discussed in Peter de Blanc's paper, using expected utility maximization is an empty slogan"?

Comment author: RichardKennaway 08 June 2011 05:54:01AM *  1 point [-]

How is this any different than saying "until you can actually make unbounded utility functions converge properly as discussed in Peter de Blanc's paper, using expected utility maximization is an empty slogan"?

I'm not convinced by expected utility maximization either, and I can see various possibilities of ways around de Blanc's argument besides bounding utility, but those are whole nother questions.

ETA: Also, if someone claims their utility function is bounded, does that mean they're attaching probability zero to it being unbounded? If they attach non-zero probability, they run into de Blanc's argument, and if they attach zero, they've just used zero as a probability. Or is having a probability distribution over what one's utility function actually is too self-referential? But if you can't do that, how can you model uncertainty about what your utility function is?

Comment author: endoself 08 June 2011 03:09:54PM *  0 points [-]

I'm not convinced by expected utility maximization either,

Do you reject the VNM axioms? I have my own quibbles with them - I don't like the way they just assume that probability exists and is a real number, and I don't like axiom 3 because it rules out unbounded utility functions - but they do apply in some contexts.
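For reference, one standard statement of the continuity axiom (axiom 3 in the numbering used later in this thread: 1 completeness, 2 transitivity, 3 continuity, 4 independence) is

$$\text{if } A \preceq B \preceq C, \text{ then there exists } p \in [0,1] \text{ such that } B \sim pA + (1-p)C.$$

With an unbounded utility function one can construct St. Petersburg-style lotteries (with infinitely many outcomes) for which no such p exists, which is the sense in which the axiom is said above to rule out unbounded utility.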

I can see various possibilities of ways around de Blanc's argument besides bounding utility, but those are whole nother questions.

Can you elaborate on these?

how can you model uncertainty about what your utility function is?

There is no good theory of this yet. One wild speculation is to model each possible utility function as a separate agent and have them come to an agreement. Unfortunately, there is no good theory of bargaining yet either.
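A minimal sketch of that speculation, with the Nash bargaining rule (maximize the product of gains over a disagreement point) standing in for the missing theory of agreement; the function name, the candidate utility functions, and the options are all made up for illustration.

```python
# Sketch only: treat each candidate utility function as a separate agent and
# pick the option maximizing the product of gains over a disagreement payoff
# (Nash bargaining). Everything here is hypothetical.
from typing import Callable, Sequence

def nash_bargain(options: Sequence[str],
                 utility_fns: Sequence[Callable[[str], float]],
                 disagreement: Sequence[float]) -> str:
    """Return the option maximizing the product of each agent's gain over its
    disagreement payoff; an agent that gains nothing effectively vetoes the option."""
    def score(option: str) -> float:
        product = 1.0
        for u, d in zip(utility_fns, disagreement):
            gain = u(option) - d
            if gain <= 0:
                return float("-inf")  # vetoed by this candidate utility function
            product *= gain
        return product
    return max(options, key=score)

# Two hypothetical candidate utility functions that disagree about three options.
u_first = lambda o: {"A": 10.0, "B": 8.0, "C": 3.0}[o]
u_second = lambda o: {"A": 1.0, "B": 2.0, "C": 9.0}[o]
print(nash_bargain(["A", "B", "C"], [u_first, u_second], disagreement=[0.0, 0.0]))
```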

Comment author: RichardKennaway 09 June 2011 07:56:33PM *  1 point [-]

I can see various possibilities of ways around de Blanc's argument besides bounding utility, but those are whole nother questions.

Can you elaborate on these?

Not with any great weight; it's just a matter of looking at each hypothesis and thinking up a way of making it fail.

Maybe utility isn't bounded below by a computable function (and a fortiori is not itself computable). That might be unfortunate for the would-be utility maximizer, but if that's the way it is, too bad.

Or -- this is a possibility that de Blanc himself mentions in the 2009 version -- maybe the environment should not be allowed to range over all computable functions. That seems quite a strong possibility to me. Known physical bounds on the density of information processing would appear to require it. Of course, those bounds apply equally to the utility function, which might open the way for a complexity-bounded version of the proof of bounded utility.
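For concreteness -- which specific bound is meant above is not stated, so this is an assumption -- one example of such a physical bound is the Bekenstein bound: a system of radius $R$ and total energy $E$ can contain at most

$$I \le \frac{2\pi R E}{\hbar c \ln 2}$$

bits of information. Constraints of that kind are what would keep a physically realizable environment from ranging over all computable functions.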

Comment author: endoself 11 June 2011 11:39:11PM 0 points [-]

Maybe utility isn't bounded below by a computable function (and a fortiori is not itself computable). That might be unfortunate for the would-be utility maximizer, but if that's the way it is, too bad.

Good point, but I find it unlikely.

Or -- this is a possibility that de Blanc himself mentions in the 2009 version -- maybe the environment should not be allowed to range over all computable functions. That seems quite a strong possibility to me. Known physical bounds on the density of information processing would appear to require it.

This requires assigning zero probability to the hypothesis that there is no limit on the density of information processing.

Comment author: RichardKennaway 09 June 2011 09:41:52AM *  0 points [-]

I'm not convinced by expected utility maximization either,

Do you reject the VNM axioms?

I don't see any reason to dispute Axioms 2 (transitivity) and 4 (independence of alternatives), although I know some people dispute Axiom 4.

For Axiom 3 (continuity), I don't have an argument against, but it feels a bit dodgy to me. The lack of inferential distance between the construction of lotteries and the conclusion of the theorem gives me the impression of begging the question. But that isn't my main problem with the axioms.

The sticking point for me is Axiom 1, the totality of the preference relation. Why should an ideal rational agent, whatever that is, have a preference -- even one of indifference -- between every possible pair of alternatives?

"An ideal rational agent, whatever that is." Does the concept of an ideal rational agent make sense, even as an idealisation? An ideal rational agent, as described by the VNM axioms, cannot change its utility function. It cannot change its ultimate priors. These are simply what they are and define that agent. It is logically omniscient and can compute anything computable in constant time. What is this concept useful for?

It's the small world/large world issue again. In small situations that are readily posed as optimisation problems, such as industrial process control, the VNM axioms are trivially true. This is what gives them their plausibility. In large situations, constructing a universal utility function is as hard a problem as constructing a universal prior.

Comment author: endoself 09 June 2011 11:59:55PM 0 points [-]

The sticking point for me is Axiom 1, the totality of the preference relation. Why should an ideal rational agent, whatever that is, have a preference -- even one of indifference -- between every possible pair of alternatives?

How would it act if asked to choose between two options that it does not have a preference between?

An ideal rational agent, as described by the VNM axioms, cannot change its utility function. It cannot change its ultimate priors.

It can; it just would not want to, ceteris paribus.

What is this concept useful for?

It is a starting point (well, a middle point). I see no reason to change my utility function or my priors; almost by definition, I do not desire such changes. Infinite computational ability is an approximation to be corrected in the future, as, IMO, is VNM axiom 3. This is what we have so far, and we are working on improving it.

Comment author: RichardKennaway 10 June 2011 10:36:07AM *  1 point [-]

How would it act if asked to choose between two options that it does not have a preference between?

The point is that there will be options that it could never be asked to choose between.

What is this concept useful for?

It is a starting point (well, a middle point).

I become less and less convinced that utility maximisation is a useful place to start. An ideal rational agent must be an idealisation of real, imperfectly rational agents -- of us, that is. What can I do with a preference between steak and ice cream? Sometimes one of those will satisfy a purpose for me and sometimes the other; most of the time neither is in my awareness at all. I do not need to have a preference, even between such everyday things, because I will never be faced with a choice between them. So I find the idea of a universal preference uncompelling.

When faced with practical trolley problems, the practical, rational first response is not to weigh the two offered courses of action, but to look for other alternatives. They don't always exist, but they have to be looked for. Hard-core Bayesian utility maximisation requires a universal prior that automatically thinks of all possible alternatives. I am not yet persuaded (e.g. by AIXI) that a practical implementation of such a prior is possible.

Comment author: endoself 12 June 2011 12:28:58AM *  1 point [-]

How would it act if asked to choose between two options that it does not have a preference between?

The point is that there will be options that it could never be asked to choose between.

Does this involve probabilities of zero or just ignoring sufficiently unlikely events?

What can I do with a preference between steak and ice cream? Sometimes one of those will satisfy a purpose for me and sometimes the other; most of the time neither is in my awareness at all. I do not need to have a preference, even between such everyday things, because I will never be faced with a choice between them.

I'm not sure I understand this; is this a choice between objects or between outcomes? If it is between outcomes, it can occur. If it is between objects, it is not the kind of thing described by the frameworks we are discussing, since it is not actually a choice that anyone makes; one may choose for an object to exist or to be possessed, but it is a category error to choose an object (though that phrase can be used as shorthand for a different type of choice, and I think it is clear what it means).

Comment author: RichardKennaway 13 June 2011 08:20:12AM *  1 point [-]

Does this involve probabilities of zero or just ignoring sufficiently unlikely events?

I don't think there's any way to avoid probabilities of zero. Even the Solomonoff universal prior assigns zero probability to uncomputable hypotheses. And you never have probabilities at the meta-level, where reasoning is always conducted in the language of plain old logic.
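For concreteness, the Solomonoff prior assigns to a finite string $x$ the weight

$$M(x) = \sum_{p \,:\, U(p) = x\ast} 2^{-|p|},$$

summing over the (prefix-free) programs $p$ whose output on the reference universal machine $U$ begins with $x$. A hypothesis not generated by any program receives no contribution from any term, which is the sense in which uncomputable hypotheses get probability zero.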

What can I do with a preference between steak and ice cream? ...

I'm not sure I understand this; is this a choice between objects or between outcomes? If it is between outcomes, it can occur.

Between outcomes. How is this choice going to occur?

More generally, what is an outcome? In large-world reasoning, it seems to me that an outcome cannot be anything less than the entire history of one's forward light-cone, or in TDT something even larger. Those are the things you are choosing between, when you make a choice. Decision theory on that scale is very much a work in progress, which I'm not going to scoff at, but I have low expectations of AGI being developed on that basis.

Comment author: endoself 14 June 2011 02:40:23AM *  1 point [-]

I don't think there's any way to avoid probabilities of zero. Even the Solomonoff universal prior assigns zero probability to uncomputable hypotheses.

There are people working on this. EY explained his position here.

However, that is somewhat tangential. Are you proposing that decision making should be significantly altered by ignoring certain computable hypotheses - since Solomonoff induction, despite its limits, does manifest this problem - in order to make utility functions converge? That sounds horribly ad-hoc (see second paragraph of this).

In large-world reasoning, it seems to me that an outcome cannot be anything less than the entire history of one's forward light-cone, or in TDT something even larger. Those are the things you are choosing between, when you make a choice.

I agree.

Decision theory on that scale is very much a work in progress, which I'm not going to scoff at, but I have low expectations of AGI being developed on that basis.

Any decision process that does not explicitly mention outcomes is only useful insofar as its outputs are correlated with our actual desires, which are about outcomes. If outcomes are not part of an AGI's decision process, they are therefore still necessary for the design of the AGI. They are probably also necessary for the AGI to know which self-modifications are justified, since we cannot foresee which modifications could at some point be considered.