This was supposed to be a comment under XiXiDu's recent post but it got a bit unwieldy so I'm posting it top-level.

XiXiDu starts his post with:

I would like to ask for help on how to use expected utility maximization, in practice, to maximally achieve my goals.

I think the best single-sentence answer is: don't.

The usual way of making decisions is to come up with intuitive evaluations of various options and go with the one that feels most attractive. Sometimes you will feel (intuitively) that it would be good to spend some more time thinking about the decision. So you'll put your initial intuitions into words (which are chosen by another intuitive black-box), come up with causal models of your situation (generated by yet another intuitive module), then experience intuitive feelings about those thoughts, maybe come up with alternative thoughts and compare them (intuitively), or maybe turn those feelings-about-thoughts into second-order thoughts and continue the process until you run out of time, get bored (intuitively), or deliberatively decide that you've analyzed enough (by having run another progression of interweaving thoughts and intuitions in parallel to the first one).

In a sense, all thinking is intuition. You don't get to jump out of the system. There's no choice between using intuition and using some kind of completely different process called deliberative reasoning but rather a choice between using a small amount of object-level intuition vs lots of intuition turned upon itself.

That doesn't mean that we can't improve our thinking processes. Just that we do it by gaining knowledge and experience which then shape our intuitive thinking, not by somehow fundamentally altering its nature. An engineer and a composer both rely on intuition, but it's the engineer who will succeed in building an internal combustion engine and the composer who will succeed in designing an aesthetically pleasing progression of sounds.

Mathematics is often pointed to as the foremost example of strict, logical thinking. Yet, mathematicians rely on intuition too. Mathematical proofs are considered trustworthy because the rules of proof formation are sufficiently simple that humans can train themselves to reliably distinguish proofs from non-proofs. A mathematician looks at a line in a proof and asks herself 'is that a correct application of logical inference rules?' She either spots a violation or gets a feeling that it's in fact correct. There's a very high chance she got it right but no mystical state of pure logic that guarantees it. And of course, while proofs have to obey formal rules, the only rule for how you're supposed to think when trying to come up with one is 'anything goes'.
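
To make 'the rules of proof formation are sufficiently simple' concrete, here is a toy sketch (in Python, purely illustrative and not any real proof assistant) of a checker for a minimal proof system in which every line must be a premise or follow from earlier lines by modus ponens. Real systems are far richer; the point is only that each individual step is mechanically checkable.

```python
# Toy proof checker: a 'proof' is a list of formulas, each of which must be a
# premise or follow from earlier lines by modus ponens.
# Atomic formulas are strings; an implication A -> B is the tuple ("->", A, B).

def follows_by_modus_ponens(line, earlier):
    """True if some earlier line is an implication ending in `line`
    whose antecedent also appears earlier."""
    return any(
        isinstance(e, tuple) and e[0] == "->" and e[2] == line and e[1] in earlier
        for e in earlier
    )

def check_proof(premises, proof):
    earlier = list(premises)
    for line in proof:
        if line in premises or follows_by_modus_ponens(line, earlier):
            earlier.append(line)
        else:
            return False  # this step is neither a premise nor a valid inference
    return True

premises = ["P", ("->", "P", "Q")]
print(check_proof(premises, ["P", ("->", "P", "Q"), "Q"]))  # True
print(check_proof(premises, ["R"]))                          # False: "R" is unjustified
```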

So how do you use the principle of expected utility maximization to maximally achieve your goals? 

Sometimes, in very specific circumstances, you can use it directly, but that doesn't mean you turn into an idealized expected utility maximizer. You are applying your domain-specific skill of mathematics to a specific formalism, which seems useful to you because earlier you used your domain-specific skill of seeing useful connections between reality and mathematical models.
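
For concreteness, 'using it directly' in a narrow, well-specified case can look like the sketch below (the options, probabilities, and utilities are invented for illustration; nothing here depends on the particular numbers):

```python
# A minimal expected-utility calculation over explicitly enumerated options.
# Each option maps to a list of (probability, utility) pairs for its outcomes.
# All numbers are made up for illustration.

options = {
    "buy insurance":  [(0.01, -1_000), (0.99, -50)],
    "skip insurance": [(0.01, -20_000), (0.99, 0)],
}

def expected_utility(outcomes):
    """Probability-weighted sum of utilities for one option."""
    return sum(p * u for p, u in outcomes)

for name, outcomes in options.items():
    print(f"{name}: EU = {expected_utility(outcomes):.1f}")

# Pick the option with the highest expected utility.
print("best:", max(options, key=lambda name: expected_utility(options[name])))
```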

Or you can ignore it completely and focus on more practical-sounding advice based on the long list of biases catalogued by science. For example, you can learn the rule 'If I want to believe that someone has some persistent trait based on a single observation, that's highly suspicious (fundamental attribution error). Doubly so if that belief would make me feel smug.' It seems that this has nothing to do with any idealized formalism. But to declare something a bias you need some standard against which you can compare observed behavior. If people had thought that it was pointless to come up with idealized models of correct belief formation or decision making because we can never completely avoid intuition, then they might not have bothered with researching cognitive biases. So in a way, expected utility maximization (or Bayesian induction) is a prerequisite idea for all those practically applicable results.

And in general, the more complete your knowledge of a body of ideas, the better you can apply them in real life. So knowing the general principle that binds the more practically-oriented facts together can be helpful in ways that depend on the specific way you look at the world and think about things. This is, once again, the skill of seeing useful connections between mathematical models and reality. If you happen to identify a specific way in which your actions deviate from the model of expected utility maximization, fix it. If you don't, there's no point in worrying that you're not doing it right just because you can't account for all that goes on in your head in formal terms.


A long time ago I took a course in decision analysis. It was a very practical course aimed at engineers, and didn't really go into any of the theoretical aspects we are fond of talking about on LW. I find that when I get lost thinking about LW-style rationality I can always find my way again by remembering the basics I learned in this course. Critically, for the purposes of the course, a decision was defined as having discrete options.

Often the options were totally binary, as in "fund this project or do not fund this project," or "include this design feature or do not include this feature." Sometimes there were multiple possible courses of action, but the possible actions were always defined, and the act of the decision itself had to be clearly defined.

The question, "What should I do this week?" is not an appropriately formed question according to the framework I was trained to use. The question, "Should I go to New York this week, or not?" is well formed, and you can begin to think about this question.

No, turning it into a binary question obviously does not account for all the other infinite options you really have, and so it isn't Utility Maximization, it's just Making the Better Choice This Particular Time. And really, you don't have infinite options, just as an oil company can't actually decide it should open up a McDonald's instead of drilling a particular well. A huge range of options are so high-cost, low-probability or high-uncertainty as to be wisely neglected.

So, in your daily life, don't use "Utility Maximization," but do try to use basic decision theory. Like any tool, it improves your life within its range of practical applicability. Just don't expect magic.
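
A toy version of that framing, with entirely made-up numbers, might look like this: pick one well-formed binary question, enumerate the scenarios you actually care about, and run a basic sensitivity check.

```python
# Sketch of the 'discrete options' framing for one well-formed binary decision:
# "Should I go to New York this week, or not?" (all numbers are hypothetical).

p_good = 0.7        # estimated probability the trip goes well
payoff_good = 500   # payoff if it goes well
payoff_bad = -800   # payoff if it doesn't
payoff_stay = 0     # payoff of staying home

ev_go = p_good * payoff_good + (1 - p_good) * payoff_bad
print(f"EV(go) = {ev_go:+.0f}, EV(stay) = {payoff_stay:+.0f}")

# Basic sensitivity check: at what probability does the decision flip?
break_even = (payoff_stay - payoff_bad) / (payoff_good - payoff_bad)
print(f"go if you think the trip succeeds with probability > {break_even:.2f}")
```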

I would like to ask for help on how to use expected utility maximization, in practice, to maximally achieve my goals.

I think the best single-sentence answer is: don't.

(Quick comment before I go offline for today.)

Here is the problem. If I use expected utility maximization (EU) on big and unintuitive problems like existential risks and to decide what I should do about them; if I use EU to decide how to organize my life by and large; if I use EU to decide to pursue a terminal goal but then stop using it to decide what goals are instrumental in achieving the desired outcome, then how does it help to use EU at all? And otherwise, how do I decide where to draw the line?

People closely associated with SIAI/LW do use EU in support of their overall goals, yet ignore EU when it comes to flying to NY or writing a book about rationality:

[S]uppose you have a moral view that counts future people as being worth as much as present people. You might say that fundamentally it doesn't matter whether someone exists at the current time or at some future time, just as many people think that from a fundamental moral point of view, it doesn't matter where somebody is spatially---somebody isn't automatically worth less because you move them to the moon or to Africa or something. A human life is a human life. If you have that moral point of view that future generations matter in proportion to their population numbers, then you get this very stark implication that existential risk mitigation has a much higher utility than pretty much anything else that you could do.

-- Nick Bostrom

If you want to maximize your marginal expected utility you have to maximize on your choice of problem over the combination of high impact, high variance, possible points of leverage, and few other people working on it. The problem of stable goal systems in self-improving Artificial Intelligence has no realistic competitors under any three of these criteria, let alone all four.

-- Eliezer Yudkowsky

In terms of expected utility maximization, even large probabilities of jumping the interval between a universe-history in which 95% of existing biological species survive Earth’s 21st century, versus a universe-history where 80% of species survive, are just about impossible to trade off against tiny probabilities of jumping the interval between interesting universe-histories, versus boring ones where intelligent life goes extinct, or the wrong sort of AI self-improves... with millions of people working on environmentalism, and major existential risks that are completely ignored... if you add a marginal resource that can, rarely, be steered by expected utilities instead of warm glows, devoting that resource to environmentalism does not make sense.

-- Eliezer Yudkowsky


Here is the problem. If I use expected utility maximization (EU) on big and unintuitive problems like existential risks and to decide what I should do about them; if I use EU to decide how to organize my life by and large; if I use EU to decide to pursue a terminal goal but then stop using it to decide what goals are instrumental in achieving the desired outcome, then how does it help to use EU at all? And otherwise, how do I decide where to draw the line?

You can't be perfect but that doesn't mean that you can't do better. It also doesn't mean that you can do better. Maybe thinking about all this rationality business is pretty useless after all. But complaining that you can't perfectly apply expected utility is not a good argument for that.

People closely associated with SIAI/LW do use EU in support of their overall goals, yet ignore EU when it comes to flying to NY or writing a book about rationality:

They don't use EU in the sense of coming up with a big complicated model, plugging probabilities into it and then concluding "gee, option A has 13.743% larger expected utility than option B; A it is." I think they reasoned qualitatively and arrived at the conclusion that some subset of actions has much greater potential impact than others. You don't have to do precise calculations when comparing a mountain with a pebble. The references to expected utility made in those quotes don't read to me like claims that all the beliefs were arrived at using formal mathematical methods, but rather like reminders of the counterintuitive fact that the magnitudes of outcomes should affect your decision.
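
As a toy illustration of the mountain-vs-pebble point (every number below is invented; the only claim is that the ordering survives very sloppy inputs):

```python
# When impacts differ by orders of magnitude, the comparison is robust to
# wide uncertainty in the inputs. All numbers here are made up for illustration.

big_impact_low, big_prob_low = 1e7, 1e-3   # pessimistic bounds for the high-impact option
small_impact, small_prob = 1e2, 0.9        # optimistic view of the modest, near-certain option

worst_case_big = big_impact_low * big_prob_low   # 10,000
best_case_small = small_impact * small_prob      # 90

print(f"high-impact option, worst case: {worst_case_big:,.0f}")
print(f"modest option, best case:       {best_case_small:,.0f}")
print("ordering unchanged even at the pessimistic end:", worst_case_big > best_case_small)
```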

It's unreasonable to say that unless you are a perfect reasoner yourself, you should never talk about the theoretical principles underlying perfect reasoning, even when faced with simple situations where those principles can be applied trivially. Again, it can be argued that the decision to direct effort at existential risk mitigation isn't as overdetermined as claimed, and so you should make some calculations before talking about expected utility in that context, but it can't be argued by pointing out that Yudkowsky doesn't calculate the expected utility of plane trips.

TBH I don't see how EU is being used with regard to friendly AI.

The arguments are so much based on pure guessing that their external probabilities are very low, and the differences in the utilities really could be so small that someone could conceivably say 'I wouldn't give up $1 of mine to provide $1 million for an attempt to mitigate the risk of UFAI, even if you argue that UFAI would torture every possible human mind-state'. [Note: I presume literal dollars, not resources, so the global utility of creating $1 million is zero.]

The only way EU comes into play is the appeal to the purely intuitive feeling we get that the efficacy of the FAI effort can't possibly be so low as to degrade such giant utility to the trivial level of "should I chew gum or not", or even unimaginably less than that. Unfortunately, though, it can. The AI design space is multi-dimensional and very large. The intuitive feeling may be correct, or it may be entirely wrong. There are a lot of fallacies - being graded for effort in education contributes to one, the just-world fallacy contributes to another - which may throw the intuitive feeling way off.

If you have that moral point of view that future generations matter in proportion to their population numbers, then you get this very stark implication that existential risk mitigation has a much higher utility than pretty much anything else that you could do.

By the same logic, birth control (of any kind, including simply not having sex) is like a murder -- removing an individual from the next generation is like removing an individual from this generation, right? If you know that your children would on average have lives worth living, and yet you refuse to reproduce as much as possible, you are a very bad person!

Or maybe there is a difference between killing an individual that exists, and not creating another hypothetical individual. In this sense, existential risk is bad because it kills all individuals existing at the time of the disaster, but the following hypothetical generations are irrelevant.

I am not sure what exactly my position on this topic is -- I feel that not having as many children as possible is not a crime, but the extinction of humanity (by any means, including all existing people deciding to abstain from reproduction) would be a huge loss. And I am not sure where to draw the line, also because I cannot estimate the effects of e.g. doubling or halving the planet's population. It probably depends on many other things; for example, more people could do more science and improve their lives, but could also fight over scarcer resources, making their lives worse, and this fighting and poverty could even prevent the science from happening.

Perhaps in some sense, not having as many children as possible today is like a murder, but if it allows higher living standards, fewer wars, more science, etc., then it is just a sacrifice of the few for the benefit of many in the post-Singularity future, so... shut up and multiply (not biologically, heh), but this seems like a very dangerous line of thought.

I lean towards maybe having a parliamentary model of my preferences (that's the term Bostrom uses, but I'm not sure I'd use his decision theory, exactly) in which one voting bloc cares about the people who are still alive and one voting bloc cares about the continued survival of (trans)human civilization. This might require giving up an aspiration to expected utility maximization.

This was supposed to be a comment under XiXiDu's recent post but it got a bit unwieldy so I'm posting it top-level.

This is a good way to write posts. Also, it's an excellent post.

If you have some initial luck, then recursively applying your intuition on your intuition could lead you to better approximations of the hypothetical expected utility. But are we lucky enough?

There are two sources of hope: First, evolution would filter out the obviously unlucky species -- those who by applying their intuition on their intuition would bring themselves to a quick failure (but there still remains a danger of short-term success now, leading to greater damage later). Second, the universe is simple enough, at least on some levels, that after some threshold of complexity it may actually be easier to create a process that could get it right than a process that could mostly get it right and then face some impassable barrier (other than simply not having enough resources). For example, there was no evolutionary pressure to allow us to calculate quadratic equations, but we can do it, because with a mind capable of general reasoning and symbolic manipulation and some basic math, you probably get the ability to solve quadratic equations as a free bonus; you just have to discover them. So as far as our intuition applied to our intuition can approximate reality with some success, there is a hope that this success can be increased by applying our intuitions further.

How is this style of thinking different from the way analytic philosophers operate, which gets criticized (quite harshly) on Less Wrong?

A mathematician looks at a line in a proof and asks herself 'is that a correct application of logical inference rules?' She either spots a violation or gets a feeling that it's in fact correct. There's a very high chance she got it right but no mystical state of pure logic that guarantees it.

In this scene, does an automated proof verifier count as a mathematician? I'd agree that there's still neither a 100% chance of correctness nor a mystical state of pure logic at work here, but calling the process "intuition" still strikes me as very misleading.


But people don't work at all like automated proof verifiers. You couldn't feed most mathematical proofs to an automated proof verifier. And even when people fully formalize their proofs, they still rely on a learned and slightly fuzzy ability to assess the correctness of any single step. You can't run a proof-checking procedure on your brain. (You can emulate running that procedure but then you have to rely on your learned, slightly fuzzy ability to emulate any single step of the computation.)