endoself comments on St. Petersburg Mugging Implies You Have Bounded Utility - Less Wrong

10 Post author: TimFreeman 07 June 2011 03:06PM


Comment author: endoself 14 June 2011 02:40:23AM *  1 point [-]

I don't think there's any way to avoid probabilities of zero. Even the Solomonoff universal prior assigns zero probability to uncomputable hypotheses.
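The claim about the universal prior can be made concrete. A standard way to state the Solomonoff prior (sketched here for illustration; not part of the original comment) is as a sum over programs for a universal prefix machine $U$:

```latex
% Solomonoff universal prior of a finite string x:
% sum over all programs p on which the universal prefix
% machine U produces an output beginning with x.
M(x) \;=\; \sum_{p \,:\, U(p) = x*} 2^{-|p|}
% Every term in the sum corresponds to a program, i.e. a
% computable hypothesis, so any hypothesis that is not the
% output of some program receives weight exactly 0.
```

Since the sum ranges only over programs, uncomputable hypotheses contribute no terms, which is the sense in which the prior assigns them zero probability.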

There are people working on this. EY explained his position here.

However, that is somewhat tangential. Are you proposing that decision making should be significantly altered by ignoring certain computable hypotheses - since Solomonoff induction, despite its limits, does manifest this problem - in order to make utility functions converge? That sounds horribly ad-hoc (see second paragraph of this).

In large-world reasoning, it seems to me that an outcome cannot be anything less than the entire history of one's forward light-cone, or in TDT something even larger. Those are the things you are choosing between, when you make a choice.

I agree.

Decision theory on that scale is very much a work in progress, which I'm not going to scoff at, but I have low expectations of AGI being developed on that basis.

Any decision process that does not explicitly mention outcomes is only useful insofar as its outputs are correlated with our actual desires, which are about outcomes. If outcomes are not part of an AGI's decision process, they are therefore still necessary for the design of the AGI. They are probably also necessary for the AGI to know which self-modifications are justified, since we cannot foresee which modifications could at some point be considered.

Comment author: RichardKennaway 14 June 2011 11:57:34AM 2 points [-]

Are you proposing that decision making should be significantly altered by ignoring certain computable hypotheses - since Solomonoff induction, despite its limits, does manifest this problem - in order to make utility functions converge? That sounds horribly ad-hoc (see second paragraph of this).

If I was working on that, I could say it was being worked on. I agree that an ad-hoc hack is not what's called for. It needs to be a principled hack. :-)

Any decision process that does not explicitly mention outcomes is only useful insofar as its outputs are correlated with our actual desires, which are about outcomes.

Are they really? That is, about outcomes in the large-world sense we just agreed on. Ask people what they want, and few will talk about the entire future history of the universe, even if you press them to go farther than what they want right now. I'm sure Eliezer would, and others operating in that sphere of thought, including many on LessWrong, but that is a rather limited sense of "us".

Comment author: endoself 15 June 2011 06:54:17PM 0 points [-]

It needs to be a principled hack. :-)

Can you come up with a historical example of a mathematical or scientific problem being solved - not made to work for some specific purpose, but solved completely - with a principled hack?

I'm sure Eliezer would, and others operating in that sphere of thought, including many on LessWrong, but that is a rather limited sense of "us".

I don't see your point. Other people may not care about outcomes, but a) their extrapolated volitions probably do, and b) if people's extrapolated volitions don't care about outcomes, I don't think I'd want to use them as the basis of an FAI.

Comment author: RichardKennaway 15 June 2011 08:25:23PM 2 points [-]

Can you come up with a historical example of a mathematical or scientific problem being solved - not made to work for some specific purpose, but solved completely - with a principled hack?

Limited comprehension in ZF set theory is the example I had in mind in coining the term "principled hack". Russell said to Frege, "what about the set of sets not members of themselves?", whereupon Frege was embarrassed, and eventually a way was found of limiting self-reference enough to avoid the contradiction. There's a principle there -- unrestricted self-reference can't be done -- but all the methods of limiting self-reference that have yet been devised look like hacks. They work, though. ZF appears to be consistent, and all of mathematics can be expressed in it. As a universal language, it completely solves the problem of formalising mathematics.

(I am aware that there are mathematicians who would disagree with that triumphalist claim, but as far as I know none of them are mainstream.)
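The "principled hack" Richard describes can be written out explicitly. Below are the standard statements of unrestricted comprehension (which Russell's paradox breaks) and of ZF's separation schema (which restricts it); this is a sketch of textbook material, not anything from the original comments:

```latex
% Unrestricted comprehension: for any formula phi,
%   \exists y \, \forall x \, (x \in y \leftrightarrow \varphi(x))
% Taking \varphi(x) := x \notin x gives Russell's set
%   R = \{ x : x \notin x \},
% whence R \in R \leftrightarrow R \notin R, a contradiction.
%
% ZF's separation (restricted comprehension) schema instead only
% carves subsets out of a set z that is already given:
\forall z \, \exists y \, \forall x \,
  \bigl( x \in y \leftrightarrow (x \in z \wedge \varphi(x)) \bigr)
```

The restriction to a pre-existing set $z$ is what blocks the paradox: Russell's $R$ can no longer be formed outright, only its harmless relativization to each given set.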

Comment author: [deleted] 19 June 2011 10:32:06PM 1 point [-]

Being a mathematician who at least considers himself mainstream, I would think that ZFC plus the existence of a large cardinal is probably the minimum one would need to express a reasonable fragment of mathematics.

If you can't talk about the set of all subsets of the set of all subsets of the real numbers, I think analysis would become a bit... bondage and discipline.

Comment author: RichardKennaway 20 June 2011 08:13:24AM 0 points [-]

If you can't talk about the set of all subsets of the set of all subsets of the real numbers

Surely the power set axiom gets you that?

Comment author: [deleted] 20 June 2011 11:01:31AM 0 points [-]

That it exists, yes. But what good is that without choice?

Comment author: RichardKennaway 20 June 2011 11:28:32AM *  0 points [-]

Ok, ZFC is a more convenient background theory than ZF (although I'm not sure where it becomes awkward to do without choice). That's still short of needing large cardinal axioms.

Comment author: endoself 19 June 2011 09:44:08PM 0 points [-]

The idea of programming ZF into an AGI horrifies my aesthetics, but that is no reason not to use it (well, it is an indication that it might not be a good idea, but in this specific case ZF does have the evidence on its side). If expected utility, or anything else necessary for an AGI, could benefit from a principled hack as well-tested as limited comprehension, I would accept it.