Here is the problem. If I use expected utility maximization (EU) on big and unintuitive problems like existential risks, to decide what I should do about them; if I use EU to decide how to organize my life by and large; if I use EU to decide to pursue a terminal goal but then stop using it to decide which goals are instrumental in achieving the desired outcome; then how does it help to use EU at all? And if I should use it only sometimes, how do I decide where to draw the line?
You can't be perfect but that doesn't mean that you can't do better. It also doesn't mean that you can do better. Maybe thinking about all this rationality business is pretty useless after all. But complaining that you can't perfectly apply expected utility is not a good argument for that.
People closely associated with SIAI/LW do use EU in support of their overall goals, yet ignore EU when it comes to flying to NY or writing a book about rationality:
They don't use EU in the sense of coming up with a big complicated model, plugging probabilities into it and then concluding "gee, option A has 13.743% larger expected utility than option B; A it is." I think they reasoned qualitatively and arrived at the conclusion that some subset of actions has much greater potential impact than others. You don't have to do precise calculations when comparing a mountain with a pebble. The references to expected utility made in those quotes don't read to me like claims that all the beliefs were arrived at using formal mathematical methods, but rather as reminders of the counterintuitive fact that the magnitudes of outcomes should affect your decision.
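The mountain-versus-pebble point can be made concrete with a toy calculation. All numbers here are hypothetical, chosen only to illustrate the orders of magnitude involved, not to represent anyone's actual estimates:

```python
# Toy comparison: when expected utilities differ by orders of magnitude,
# the ranking survives even very sloppy probability estimates.
def expected_utility(prob, payoff):
    """Expected utility of a single all-or-nothing outcome."""
    return prob * payoff

# "Mountain": a one-in-a-million shot at an astronomically large payoff.
mountain = expected_utility(1e-6, 1e10)

# "Pebble": a near-certain but tiny payoff.
pebble = expected_utility(0.99, 1.0)

print(mountain > pebble)  # True

# Being wrong about the small probability by a factor of a thousand
# still doesn't flip the comparison, so no precise model is needed:
print(expected_utility(1e-9, 1e10) > pebble)  # True
```

This is all the "calculation" that the qualitative reasoning above implicitly relies on; the decimal places never matter.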
It's unreasonable to say that unless you are a perfect reasoner yourself, you should never talk about the theoretical principles underlying perfect reasoning, even when faced with simple situations where those principles can be applied trivially. Again, it can be argued that the decision to direct effort at existential risk mitigation isn't as overdetermined as claimed, and so you should make some calculations before talking about expected utility in that context; but it can't be argued by pointing out that Yudkowsky doesn't calculate the expected utility of plane trips.
This was supposed to be a comment under XiXiDu's recent post but it got a bit unwieldy so I'm posting it top-level.
XiXiDu starts his post with:
I think the best single-sentence answer is: don't.
The usual way of making decisions is to come up with intuitive evaluations of various options and go with the one that feels most attractive. Sometimes you will feel (intuitively) that it would be good to spend some more time thinking about the decision. So you'll put your initial intuitions into words (which are chosen by another intuitive black box), come up with a causal model of your situation (generated by yet another intuitive module), then experience intuitive feelings about those thoughts, maybe come up with alternative thoughts and compare them (intuitively), or maybe turn those feelings-about-thoughts into second-order thoughts and continue the process until you run out of time, get bored (intuitively), or deliberatively decide that you've analyzed enough (by having run another progression of interweaving thoughts and intuitions in parallel to the first one).
In a sense, all thinking is intuition. You don't get to jump out of the system. There's no choice between using intuition and using some kind of completely different process called deliberative reasoning, but rather a choice between using a small amount of object-level intuition versus lots of intuition turned upon itself.
That doesn't mean that we can't improve our thinking processes. Just that we do it by gaining knowledge and experience which then shape our intuitive thinking and not by somehow fundamentally altering their nature. An engineer and a composer both rely on intuition but it's the engineer that will succeed in building an internal combustion engine and the composer that will succeed in designing an aesthetically pleasing progression of sounds.
Mathematics is often pointed to as the foremost example of strict, logical thinking. Yet, mathematicians rely on intuition too. Mathematical proofs are considered trustworthy because the rules of proof formation are sufficiently simple that humans can train themselves to reliably distinguish proofs from non-proofs. A mathematician looks at a line in a proof and asks herself 'is that a correct application of logical inference rules?' She either spots a violation or gets a feeling that it's in fact correct. There's a very high chance she got it right but no mystical state of pure logic that guarantees it. And of course, while proofs have to obey formal rules, the only rule for how you're supposed to think when trying to come up with one is 'anything goes'.
So how do you use the principle of expected utility maximization to maximally achieve your goals?
Sometimes, in very specific circumstances, you can use it directly, but that doesn't mean you turn into an idealized expected utility maximizer. You are applying your domain-specific skill of mathematics to a specific formalism, which seems useful to you because earlier you used your domain-specific skill of seeing useful connections between reality and mathematical models.
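A sketch of what "using it directly" might look like in one of those narrow cases: a choice whose outcomes and probabilities are actually enumerable, such as a simple insurance decision. The option names and numbers are invented for illustration, and treating dollars as utility is itself a modeling assumption (risk neutrality):

```python
# Minimal direct application of expected utility: every option's outcomes
# and their probabilities are explicitly known, so we can just compute.
def expected_utility(outcomes):
    """outcomes: list of (probability, utility) pairs; probabilities sum to 1."""
    return sum(p * u for p, u in outcomes)

# Hypothetical insurance decision, utilities measured in dollars.
options = {
    "insure":      [(1.0, -50)],                     # pay a certain premium
    "dont_insure": [(0.999, 0), (0.001, -100_000)],  # rare catastrophic loss
}

best = max(options, key=lambda name: expected_utility(options[name]))
print(best)  # insure
```

Here EU(insure) = -50 beats EU(dont_insure) = -100, so the formalism settles the question. Note how much had to be known in advance for this to work; that is exactly why such direct applications are rare.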
Or you can ignore it completely and focus on more practical-sounding advice based on the long list of biases catalogued by science. For example, you can learn the rule 'If I want to believe that someone has some persistent trait based on a single observation, that's highly suspicious (fundamental attribution error). Doubly so if that belief would make me feel smug.' It seems that this has nothing to do with any idealized formalism. But to declare something a bias you need some standard against which you can compare observed behavior. If people had thought it pointless to come up with idealized models of correct belief formation or decision making because we can never completely avoid intuition, they might not have bothered researching cognitive biases. So in a way, expected utility maximization (or Bayesian induction) is a prerequisite idea for all those practically applicable results.
And in general, the more complete your knowledge of a body of ideas, the better you can apply them in real life. So knowing the general principle that binds the more practically-oriented facts together can be helpful in ways that depend on the specific way you look at the world and think about things. This is, once again, the skill of seeing useful connections between mathematical models and reality. If you happen to identify a specific way in which your actions deviate from the model of expected utility maximization, fix it. If you don't, there's no point in worrying that you're not doing it right just because you can't account for everything that goes on in your head in formal terms.