I would like to ask for help on how to use expected utility maximization in practice to best achieve my goals.
As a real-world example I would like to use Eliezer Yudkowsky's post 'Epistle to the New York Less Wrongians' and his visit to New York.
How did Eliezer Yudkowsky compute that it would maximize his expected utility to visit New York?
It seems that the first thing he would have to do is to figure out what he really wants, his preferences1, right? The next step would be to formalize those preferences by describing them as a utility function and to assign a certain number of utils2 to each outcome in the set of possible world states, e.g. his own survival. This description would have to be precise enough to make clear what it would mean to maximize his utility function.
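To check whether I understand that step at all, here is a minimal sketch of what I imagine such a formalization might look like; the outcomes and the numbers of utils are entirely invented for illustration:

```python
# A toy "utility function": a mapping from hypothetical outcomes to utils.
# Both the outcomes and the numbers are made up for illustration only.
UTILITY = {
    "survives_and_meets_the_new_york_less_wrongians": 100.0,
    "stays_home": 60.0,
    "dies_in_plane_crash": -10000.0,
}

def utility(outcome: str) -> float:
    """Return the number of utils assigned to a world state."""
    return UTILITY[outcome]
```

Even writing down this toy version raises the questions in the footnotes: where do the numbers come from, and why these outcomes and not others?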
Now, before he can continue, he will first have to compute the expected utility of computing the expected utility of computing the expected utility of computing the expected utility3 ... and also compare that whole approach with alternative heuristics4.
He then has to enumerate each and every action he might take and study all of their logical implications, in order to learn which world states he might bring about by those decisions, then calculate the utility of each such world state and the probability-weighted average utility, i.e. the expected utility, of each action leading to those world states5.
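As far as I understand it, that average is the probability-weighted sum of the utilities of the world states each action could lead to. A toy version of the calculation, with actions, probabilities and utilities all made up for illustration:

```python
# Expected utility of an action: EU(a) = sum over states s of P(s | a) * U(s).
# Actions, states, probabilities and utilities are all invented for illustration.

ACTIONS = {
    "fly_to_new_york": {
        "arrives_and_meets_the_new_york_less_wrongians": 0.9999,
        "dies_in_plane_crash": 0.0001,
    },
    "stay_home": {
        "stays_home": 1.0,
    },
}

UTILITY = {
    "arrives_and_meets_the_new_york_less_wrongians": 100.0,
    "dies_in_plane_crash": -10000.0,
    "stays_home": 60.0,
}

def expected_utility(action: str) -> float:
    """Probability-weighted average of the utilities of the action's outcomes."""
    return sum(p * UTILITY[state] for state, p in ACTIONS[action].items())

print({a: expected_utility(a) for a in ACTIONS})
print("maximizes expected utility:", max(ACTIONS, key=expected_utility))
```

With these made-up numbers "fly_to_new_york" wins (98.99 utils versus 60), but of course the whole point of my question is where the probabilities and utilities are supposed to come from.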
To do so he has to figure out the probability of each world state. This in turn requires him to come up with a prior probability for each case and to study all available data: for example, how likely it is to die in a plane crash, how long it would take to be cryonically suspended from where he is in case of a fatality, the crime rate, and whether aliens might abduct him (he might discount the last example, but then he would first have to figure out the threshold below which probabilities are too small to be relevant for judgment and decision making).
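And presumably each of those probabilities would itself have to come from a prior plus whatever data he can gather. A toy Beta-Binomial update for a single small probability, with made-up prior parameters and made-up "data":

```python
# Toy Bayesian update for one small probability, e.g. "fatal crash per flight".
# The prior and the observations are invented for illustration; real figures
# would have to come from aviation-safety statistics, not from this sketch.

PRIOR_ALPHA = 1.0        # pseudo-count of "fatal" flights (a weak, made-up prior)
PRIOR_BETA = 1_000_000   # pseudo-count of "safe" flights

observed_fatal_flights = 2           # made-up data
observed_safe_flights = 5_000_000

posterior_alpha = PRIOR_ALPHA + observed_fatal_flights
posterior_beta = PRIOR_BETA + observed_safe_flights

# Posterior mean of a Beta(alpha, beta) distribution is alpha / (alpha + beta).
p_fatal = posterior_alpha / (posterior_alpha + posterior_beta)
print(f"posterior estimate of a fatal crash per flight: {p_fatal:.2e}")
```

That is the mechanical part; what I don't see is how to choose the prior, which hypotheses to include at all, and when to stop gathering data.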
I have probably missed some technical details and gotten others wrong, but this shouldn't detract too much from my general request: could you please explain how Less Wrong style rationality is to be applied in practice? I would also be happy if you could point me to some worked examples or suggest relevant literature. Thank you.
I also want to note that I am not the only one who doesn't know how to apply what is being discussed on Less Wrong in practice. From the comments:
You can’t believe in the implied invisible and remain even remotely sane. [...] (it) doesn’t just break down in some esoteric scenarios, but is utterly unworkable in the most basic situation. You can’t calculate shit, to put it bluntly.
None of these ideas are even remotely usable. The best you can do is to rely on fundamentally different methods and pretend they are really “approximations”. It’s complete handwaving.
Using high-level, explicit, reflective cognition is mostly useless, beyond the skill level of a decent programmer, physicist, or heck, someone who reads Cracked.
I can't help but agree.
P.S. If you really want to know how I feel about Less Wrong then read the post 'Ontological Therapy' by user:muflax.
1. What are "preferences" and how do you figure out what long-term goals are stable enough under real world influence to allow you to make time-consistent decisions?
2. How is utility grounded and how can it be consistently assigned to reflect your true preferences without having to rely on your intuition, i.e. pull a number out of thin air? Also, will the definition of utility keep changing as we make more observations? And how do you account for that possibility?
3. Where and how do you draw the line?
4. How do you account for model uncertainty?
5. Any finite list of actions maximizes infinitely many different quantities. So, how does utility become well-defined?
We would need to identify the sort of things that can go wrong. For example, I can identify two types of philosophic horror at the world (there might be more). One is where the world seems to have become objectively horrifying, and you can't escape from this perception, or don't want to escape from it because you believe this would require the sacrifice of your reason, values, or personality. A complementary type is where you believe the world could become infinitely better, if only everyone did X, but you're the only one who wants to do X, no-one else will support you, and in fact they try to talk you out of your ideas.
Example of the first: I know someone who believes in Many Worlds and is about to kill himself unless he can prove to himself that the worlds are "diverging" (in the jargon of Alastair Wilson) rather than "splitting". "Diverging worlds" are each self-contained, like in a single-world theory, but they can track each other for a time (i.e. the history of one will match the history of the other up to a point). "Splitting worlds" are self-explanatory - worlds that start as one and branch into many. What's so bad about the splitting worlds, he says, is that the people in this world, that you know and care about, are the ones who experience all possible outcomes, who get murdered by you in branches where you spontaneously become a killer (and add every bad thing you can think of, and can't, to the list of what happens to them). Also, distinct from this, human existence is somehow rendered meaningless because everything always happens. (I think the meaninglessness has to do with the inability to make a difference or produce outcomes, and not just the inconceivability of all possibilities being real.) In the self-contained "diverging worlds", the people you know just have one fate - their copies in the other worlds are different people - and you're saved from the horror and nihilism of the branching worlds.
Example of the second: recent LW visitor "Singularity_Utopia", who on the one hand says that an infinite perfect future of immortality and superintelligence is coming as soon as 2045, and we don't even need to work on friendliness, just focus on increasing intelligence, and that meanwhile the world could start becoming better right now if everyone embraced the knowledge of imminent "post-scarcity"... but who at the same time says on his website that his life is a living hell. I think that without a doubt this is someone whose suffering is intimately linked with the fact that they have a message of universal joy that no-one is listening to.
Now if someone proposes to be a freelance philosophical Hippocrates, they have their work cut out for them. The "victims" of these mental states tend to be very intelligent and strong-willed. Example number one thinks you could only be a psychopath to want to live in that sort of universe, so he doesn't want to solve his problem by changing his attitude towards splitting worlds; the only positive solution would be to discover that this ontology is objectively unlikely. Example number two is trying to save the world by living his life this way, so I suppose it seems supremely important to keep it up. He might be even less likely to change his ways.
It took me 3 months to realize that I completely failed to inquire about your second friend. I must have seen him as having the lesser problem and dismissed it out of hand, without realizing that acknowledging the perceived ease of a problem isn't the same as actually solving it, like putting off easy homework.
How is your second friend turning out?