XiXiDu comments on Real-life expected utility maximization [response to XiXiDu] - Less Wrong

8 Post author: Gabriel 12 March 2012 07:03PM

Comment author: XiXiDu 12 March 2012 08:37:12PM 1 point [-]

I would like to ask for help on how to use expected utility maximization, in practice, to maximally achieve my goals.

I think the best single-sentence answer is: don't.

(Quick comment before I go offline for today.)

Here is the problem. If I use expected utility maximization (EU) on big and unintuitive problems like existential risk, and to decide what I should do about it; if I use EU to decide how to organize my life by and large; if I use EU to decide to pursue a terminal goal but then stop using it to decide which goals are instrumental in achieving the desired outcome, then how does it help to use EU at all? And otherwise, how do I decide where to draw the line?

People closely associated with SIAI/LW do use EU in support of their overall goals, yet ignore EU when it comes to flying to NY or writing a book about rationality:

[S]uppose you have a moral view that counts future people as being worth as much as present people. You might say that fundamentally it doesn't matter whether someone exists at the current time or at some future time, just as many people think that from a fundamental moral point of view, it doesn't matter where somebody is spatially---somebody isn't automatically worth less because you move them to the moon or to Africa or something. A human life is a human life. If you have that moral point of view that future generations matter in proportion to their population numbers, then you get this very stark implication that existential risk mitigation has a much higher utility than pretty much anything else that you could do.

-- Nick Bostrom

If you want to maximize your marginal expected utility you have to maximize on your choice of problem over the combination of high impact, high variance, possible points of leverage, and few other people working on it. The problem of stable goal systems in self-improving Artificial Intelligence has no realistic competitors under any three of these criteria, let alone all four.

-- Eliezer Yudkowsky

In terms of expected utility maximization, even large probabilities of jumping the interval between a universe-history in which 95% of existing biological species survive Earth’s 21st century, versus a universe-history where 80% of species survive, are just about impossible to trade off against tiny probabilities of jumping the interval between interesting universe-histories, versus boring ones where intelligent life goes extinct, or the wrong sort of AI self-improves....with millions of people working on environmentalism, and major existential risks that are completely ignored… if you add a marginal resource that can, rarely, be steered by expected utilities instead of warm glows, devoting that resource to environmentalism does not make sense.

-- Eliezer Yudkowsky

Comment author: Gabriel 14 March 2012 01:38:04AM 1 point [-]

Here is the problem. If I use expected utility maximization (EU) on big and unintuitive problems like existential risk, and to decide what I should do about it; if I use EU to decide how to organize my life by and large; if I use EU to decide to pursue a terminal goal but then stop using it to decide which goals are instrumental in achieving the desired outcome, then how does it help to use EU at all? And otherwise, how do I decide where to draw the line?

You can't be perfect, but that doesn't mean you can't do better. It also doesn't mean that you can do better; maybe thinking about all this rationality business is pretty useless after all. But complaining that you can't apply expected utility perfectly is not a good argument for that.

People closely associated with SIAI/LW do use EU in support of their overall goals, yet ignore EU when it comes to flying to NY or writing a book about rationality:

They don't use EU in the sense of building a big, complicated model, plugging probabilities into it, and concluding "gee, option A has 13.743% larger expected utility than option B; A it is." I think they reasoned qualitatively and arrived at the conclusion that some subset of actions has much greater potential impact than others. You don't have to do precise calculations when comparing a mountain with a pebble. The references to expected utility in those quotes don't read to me like claims that those beliefs were arrived at by formal mathematical methods; they read like reminders of the counterintuitive fact that the magnitudes of outcomes, and not just their probabilities, should affect your decision.
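To make the "mountain versus pebble" point concrete, here is a toy comparison with entirely invented numbers; it is only an illustration of the arithmetic, not anyone's actual estimate:

```python
# Toy comparison (all numbers invented): expected utility of two options.

def expected_utility(outcomes):
    """outcomes: list of (probability, utility) pairs."""
    return sum(p * u for p, u in outcomes)

# "Pebble": a small, near-certain benefit.
pebble = [(0.9, 10), (0.1, 0)]

# "Mountain": a tiny chance of an enormous benefit.
mountain = [(0.001, 10**9), (0.999, 0)]

print(expected_utility(pebble))    # 9.0
print(expected_utility(mountain))  # 1000000.0
```

With magnitudes this far apart, the ranking survives even large errors in the probability estimates, which is the sense in which no precise calculation is needed.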

It's unreasonable to say that unless you are a perfect reasoner yourself, you should never talk about the theoretical principles underlying perfect reasoning, even in simple situations where those principles can be applied trivially. Again, one can argue that the decision to direct effort at existential risk mitigation isn't as overdetermined as claimed, and that you should therefore do some calculations before invoking expected utility in that context; but that case can't be made by pointing out that Yudkowsky doesn't calculate the expected utility of plane trips.

Comment author: Dmytry 15 March 2012 11:12:25AM *  0 points [-]

TBH I don't see how EU is actually being used with regard to Friendly AI.

The arguments rest so heavily on pure guesswork that their external probabilities are very low, and the differences in expected utility really could be so small that someone could conceivably say, "I wouldn't give up $1 of mine to provide $1 million for an attempt to mitigate the risk of UFAI, even if you argue that the UFAI would torture every possible human mind-state." [Note: I presume literal dollars, not resources, so the global utility of creating $1 million is zero.]

The only way EU comes into play is as an appeal to the purely intuitive feeling that the efficacy of the FAI effort can't possibly be so low as to degrade such a giant utility to the trivial level of "should I chew gum or not", or even unimaginably below that. Unfortunately, it can. The AI design space is high-dimensional and very large. The intuitive feeling may be correct, or it may be entirely wrong. There are a lot of fallacies (being graded for effort in school contributes to one, the just-world fallacy to another) that may throw this intuition way off.
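A back-of-the-envelope illustration of this point, with all numbers invented purely for the sake of the arithmetic: if the probability that a given effort actually changes the outcome is small enough, even an astronomically large utility at stake multiplies out to something negligible.

```python
# All numbers here are invented for illustration only.
utility_at_stake = 10**30           # stand-in for "astronomically large"
p_effort_actually_helps = 10**-35   # hypothetical efficacy of the effort

expected_value = utility_at_stake * p_effort_actually_helps
print(expected_value)  # 1e-05 -- well below the "chew gum or not" level
```

Whether the real efficacy is anywhere near that low is exactly the question the intuition is supposed to settle, and the calculation cannot settle it by itself.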

Comment author: Viliam_Bur 13 March 2012 11:29:33AM *  0 points [-]

If you have that moral point of view that future generations matter in proportion to their population numbers, then you get this very stark implication that existential risk mitigation has a much higher utility than pretty much anything else that you could do.

By the same logic, birth control (of any kind, including simply not having sex) is like murder: removing an individual from the next generation is like removing an individual from this generation, right? If you know that your children would on average have lives worth living, and yet you refuse to reproduce as much as possible, you are a very bad person!

Or maybe there is a difference between killing an individual that exists, and not creating another hypothetical individual. In this sense, existential risk is bad because it kills all individuals existing at the time of the disaster, but the following hypothetical generations are irrelevant.

I am not sure exactly what my position on this topic is. I feel that not having as many children as possible is not a crime, but the extinction of humanity (by any means, including all existing people deciding to abstain from reproduction) would be a huge loss. And I am not sure where to draw the line, partly because I cannot estimate the effects of, say, doubling or halving the planet's population. It probably depends on many other things: more people could do more science and improve their lives, but they could also fight over scarcer resources, making their lives worse, and that fighting and poverty could even prevent the science.

Perhaps in some sense not having as many children as possible today is like murder, but if it allows higher living standards, fewer wars, more science, etc., then it is just a sacrifice of the few for the benefit of the many in the post-Singularity future, so... shut up and multiply (not biologically, heh). But this seems like a very dangerous line of thought.

Comment author: Nisan 13 March 2012 08:07:36PM 0 points [-]

I lean towards maybe having a parliamentary model of my preferences (that's the term Bostrom uses, but I'm not sure I'd use his decision theory, exactly) in which one voting bloc cares about the people who are still alive and one voting bloc cares about the continued survival of (trans)human civilization. This might require giving up an aspiration to expected utility maximization.
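For concreteness, here is one deliberately naive way to mechanize the "voting blocs" idea: each bloc casts its full weight for its top option, and the option with the most weighted votes wins. The blocs, weights, and scores below are hypothetical, and this is a simple weighted plurality vote rather than Bostrom's actual parliamentary model, whose delegates negotiate and trade votes.

```python
# Naive sketch of voting blocs over preferences (hypothetical numbers).

options = ["help people alive today", "reduce extinction risk"]

blocs = [
    # (weight, how much this bloc values each option)
    (0.4, {"help people alive today": 1.0, "reduce extinction risk": 0.2}),
    (0.6, {"help people alive today": 0.1, "reduce extinction risk": 1.0}),
]

def winner(options, blocs):
    votes = {o: 0.0 for o in options}
    for weight, prefs in blocs:
        # Each bloc casts its full weight for its top-ranked option.
        votes[max(options, key=prefs.get)] += weight
    return max(votes, key=votes.get)

print(winner(options, blocs))  # "reduce extinction risk" with these weights
```

A vote like this just picks whichever option wins; it need not behave like maximizing any single expected utility, which is the tension Nisan alludes to.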