I would like to ask for help on how to use expected utility maximization, in practice, to best achieve my goals.
As a real-world example, I would like to use the post 'Epistle to the New York Less Wrongians' by Eliezer Yudkowsky and his visit to New York.
How did Eliezer Yudkowsky compute that it would maximize his expected utility to visit New York?
It seems that the first thing he would have to do is figure out what he really wants, i.e. his preferences[1], right? The next step would be to formalize those preferences by describing them as a utility function, assigning a certain number of utils[2] to each member of the set of possible outcomes, e.g. his own survival. This description would have to be precise enough to determine what it would mean to maximize his utility function.
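For concreteness, here is the kind of toy formalization I have in mind. Every outcome and every number below is invented by me, which is exactly the grounding problem footnote 2 asks about:

```python
# A toy "utility function": a lookup table from possible outcomes to utils.
# All outcome names and util values here are invented for illustration only.
utility = {
    "own survival": 1000,
    "meeting the New York Less Wrongians": 50,
    "dying in a plane crash": -1000,
    "staying home, nothing happening": 0,
}
```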
Now, before he can continue, he will first have to compute the expected utility of computing the expected utility of computing the expected utility[3] ... and also compare it with alternative heuristics[4].
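The only resolution I can imagine is some explicit stopping rule, but that seems to merely relocate the problem. A toy sketch of what I mean, with everything invented by me:

```python
def should_keep_deliberating(expected_gain_from_more_thought, cost_of_more_thought):
    # Stopping rule: deliberate further only while the estimated improvement
    # in the final decision exceeds the cost of the extra deliberation.
    # But estimating expected_gain_from_more_thought is itself a computation
    # whose expected utility would first have to be computed (footnote 3).
    return expected_gain_from_more_thought > cost_of_more_thought
```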
He then has to figure out every possible action he might take and study all of their logical implications, so as to learn about all the possible world states he might bring about by those decisions, calculate the utility of each world state, and compute the probability-weighted average, i.e. the expected utility, of each action leading up to those various possible world states[5].
To do so he has to figure out the probability of each world state. This further requires him to come up with a prior probability for each case and to study all available data: for example, how likely it is to die in a plane crash, how long it would take to be cryonically suspended from where he is in case of a fatality, the local crime rate, and whether aliens might abduct him (he might discount the last example, but then he would first have to figure out the right threshold below which probabilities count as too unlikely to be relevant for judgment and decision making).
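Putting this together, my understanding of the textbook procedure amounts to something like the sketch below. The actions, world states, probabilities, and utils are all invented by me; a real version would require the full enumeration described above:

```python
# A minimal, textbook-style expected utility calculation over invented numbers.
# P(world_state | action) and U(world_state) for each modeled world state:
outcomes = {
    "fly to New York": {
        "meet Less Wrongians, trip goes well": (0.98, 50),
        "die in a plane crash": (0.00001, -1000),
        "trip is a waste of time": (0.01999, -10),
    },
    "stay home": {
        "nothing happens": (1.0, 0),
    },
}

def expected_utility(action):
    # Sum of P(state | action) * U(state) over all modeled world states.
    return sum(p * u for p, u in outcomes[action].values())

for action in outcomes:
    print(f"{action}: EU = {expected_utility(action):.4f}")
print("Chosen action:", max(outcomes, key=expected_utility))
```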
I have probably missed some technical details and gotten others wrong, but this shouldn't detract too much from my general request: could you please explain how Less Wrong style rationality is to be applied in practice? I would also be happy if you could point out some worked examples or suggest relevant literature. Thank you.
I also want to note that I am not the only one who doesn't know how to actually apply what is being discussed on Less Wrong in practice. From the comments:
You can’t believe in the implied invisible and remain even remotely sane. [...] (it) doesn’t just break down in some esoteric scenarios, but is utterly unworkable in the most basic situation. You can’t calculate shit, to put it bluntly.
None of these ideas are even remotely usable. The best you can do is to rely on fundamentally different methods and pretend they are really “approximations”. It’s complete handwaving.
Using high-level, explicit, reflective cognition is mostly useless, beyond the skill level of a decent programmer, physicist, or heck, someone who reads Cracked.
I can't help but agree.
P.S. If you really want to know how I feel about Less Wrong then read the post 'Ontological Therapy' by user:muflax.
1. What are "preferences" and how do you figure out what long-term goals are stable enough under real world influence to allow you to make time-consistent decisions?
2. How is utility grounded, and how can it be consistently assigned so as to reflect your true preferences without relying on intuition, i.e. without pulling a number out of thin air? Also, will the definition of utility keep changing as we make more observations, and how do you account for that possibility?
3. Where and how do you draw the line?
4. How do you account for model uncertainty?
5. Any finite list of actions maximizes infinitely many different quantities. So, how does utility become well-defined?
This seems to be conflating rationality-centered material with FAI/optimal decision theory material, lumping it all under the heading "utility maximization". These individual parts are fundamentally distinct and aim at different things.
Rationality-centered material does include some thought about utility, Fermi calculations, and heuristics, but it focuses on debiasing and on recognizing cognitive patterns that can get in the way (such as rationalization and cached thoughts). I've managed to apply it a bit in my day-to-day thought. For instance, recognizing the fundamental attribution error has been very useful to me, because I tend to be judgmental; in the past this led me to isolate myself much more than I should have and to sink into misanthropy. For the longest time I avoided those thoughts; now I've found that I can treat them in a more clinical manner and have gained some perspective on them. This helps me raise my overall utility, but it does not perfectly optimize it by any stretch of the imagination, nor is it meant to; it just makes things better.
Bottomless recursion in expected utility calculations is a decision theory/rational choice theory issue and an AI issue, but it is not a rationality issue. To be more rational, we don't have to optimize; we just have to recognize that one feasible procedure is better than another, and work on replacing our current procedure with the new, better one. If we recognize that a procedure is impossible for us to use in practice, we don't use it, though it might still be useful to talk about in a different, theoretical context such as FAI or decision theory. TDT and UDT were not made for practical use by humans; they were made to address theoretical problems in FAI and formal decision theory, even though some people claim to have made good use of them (and even here we see TDT being used as a psychological aid for overcoming hyperbolic discounting more than as a formal tool of any sort).
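On the hyperbolic discounting point, the contrast is easy to show numerically. The parameters below are arbitrary and purely illustrative:

```python
# Arbitrary parameters, illustration only: exponential discounting is
# time-consistent, while hyperbolic discounting is not (preferences between
# two future rewards can reverse as the rewards draw near).
def exponential_discount(value, delay, rate=0.05):
    return value * (1 - rate) ** delay

def hyperbolic_discount(value, delay, k=0.5):
    return value / (1 + k * delay)

for delay in (0, 1, 5, 30):
    print(delay,
          round(exponential_discount(100, delay), 1),
          round(hyperbolic_discount(100, delay), 1))
```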
Also, there are different levels of analysis appropriate for different sorts of things. If I'm analyzing the likelihood of an asteroid impact over some timescale, I'm going to include much more explicit detail than in my analysis of whether I should go hang out with LWers in New York for a bit. I might assess lots of probability measures in a paper analyzing a topic, but doing so on the fly rarely crosses my mind (I often do a quick-and-dirty utility calculation to decide whether or not to do something, e.g. which road home has the most right turns, or what the expected number of red lights is given the time of day, but that's it).
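To make the road-home example concrete, the kind of quick-and-dirty calculation I mean is nothing more than this, with numbers invented on the spot:

```python
# Back-of-the-envelope route comparison: expected delay is the expected
# number of red lights times the average wait per red light (invented numbers).
routes = {
    "route A (more right turns)": {"expected_red_lights": 2.0, "wait_per_light_s": 30},
    "route B": {"expected_red_lights": 3.5, "wait_per_light_s": 30},
}

for name, r in routes.items():
    delay = r["expected_red_lights"] * r["wait_per_light_s"]
    print(f"{name}: expected delay of about {delay:.0f} seconds")
# That is the whole "utility calculation": pick the route with less expected delay.
```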
Overall, I'm getting the impression that all of these things are being lumped together when they should not be. Utility maximization means very distinct things in these very distinct contexts, and most technical aspects of it were not intended for explicit everyday use by humans; they were intended for use by specialists in certain contexts.