I would like to ask for help on how to use expected utility maximization, in practice, to maximally achieve my goals.

As a real world example I would like to use the post 'Epistle to the New York Less Wrongians' by Eliezer Yudkowsky and his visit to New York.

How did Eliezer Yudkowsky compute that it would maximize his expected utility to visit New York?

It seems that the first thing he would have to do is to figure out what he really wants, his preferences[1], right? The next step would be to formalize his preferences by describing them as a utility function, assigning a certain number of utils[2] to each member of the set of outcomes he cares about, e.g. his own survival. This description would have to be precise enough to figure out what it would mean to maximize his utility function.

Now, before he can continue, he will first have to compute the expected utility of computing the expected utility of computing the expected utility of computing the expected utility[3] ... and also compare it with alternative heuristics[4].

He then has to figure out each and every possible action he might take, and study all of their logical implications, to learn about all the possible world states he might achieve by those decisions, calculate the utility of each world state, and compute the probability-weighted average utility of each action leading up to those various possible world states[5].

To do so he has to figure out the probability of each world state. This further requires him to come up with a prior probability for each case and study all available data. For example, how likely it is that he would die in a plane crash, how long it would take for him to be cryonically suspended from where he is in case of a fatality, the local crime rate, and whether aliens might abduct him (he might discount the last example, but then he would first have to figure out the threshold below which small probabilities are considered too unlikely to be relevant for judgment and decision making).
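To make my confusion concrete, here is a minimal sketch (in Python) of the kind of calculation I understand the above steps to describe. Every action, world state, probability, and utility below is invented out of thin air, which is exactly the step I don't know how to do in a principled way:

```python
# A toy version of the procedure described above: enumerate actions,
# list the world states each action could lead to with their probabilities,
# assign utilities, and pick the action with the highest expected utility.
# Every number here is made up; the whole question is where they come from.

actions = {
    "visit_new_york": [
        # (probability of world state, utility of world state)
        (0.989, 100.0),    # trip goes well, meets the NYC Less Wrongians
        (0.010, 20.0),     # trip is a waste of time and money
        (0.001, -10000.0), # fatal plane crash, delayed cryonic suspension, etc.
    ],
    "stay_home": [
        (0.999, 10.0),     # an ordinary week at home
        (0.001, -10000.0), # ordinary background risk of dying anyway
    ],
}

def expected_utility(outcomes):
    return sum(p * u for p, u in outcomes)

for action, outcomes in actions.items():
    print(action, expected_utility(outcomes))
print("best:", max(actions, key=lambda a: expected_utility(actions[a])))
```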

I have probably missed some technical details and gotten others wrong, but that shouldn't detract too much from my general request. Could you please explain how Less Wrong style rationality is to be applied practically? I would also be happy if you could point me to some worked examples or suggest relevant literature. Thank you.

I also want to note that I am not the only one who doesn't know how to actually apply what is being discussed on Less Wrong in practice. From the comments:

You can’t believe in the implied invisible and remain even remotely sane. [...] (it) doesn’t just break down in some esoteric scenarios, but is utterly unworkable in the most basic situation. You can’t calculate shit, to put it bluntly.

None of these ideas are even remotely usable. The best you can do is to rely on fundamentally different methods and pretend they are really “approximations”. It’s complete handwaving.

Using high-level, explicit, reflective cognition is mostly useless, beyond the skill level of a decent programmer, physicist, or heck, someone who reads Cracked.

I can't help but agree.

P.S. If you really want to know how I feel about Less Wrong then read the post 'Ontological Therapy' by user:muflax.

 

1. What are "preferences" and how do you figure out what long-term goals are stable enough under real world influence to allow you to make time-consistent decisions?

2. How is utility grounded and how can it be consistently assigned to reflect your true preferences without having to rely on your intuition, i.e. pull a number out of thin air? Also, will the definition of utility keep changing as we make more observations? And how do you account for that possibility?

3. Where and how do you draw the line?

4. How do you account for model uncertainty?

5. Any finite list of actions maximizes infinitely many different quantities. So, how does utility become well-defined?


P.S. If you really want to know how I feel about Less Wrong then read the post 'Ontological Therapy' by user:muflax.

Is there something wrong with me if seeing writing like that fills me with nostalgia for the days of yore when I had more philosophical crises happening closer together? I have this weird sense that there's an opportunity for some kind of "It Gets Better thing" for young philosophers (except, of course, there are so few of them that stochastic noise and the inability to reach the audience would make such a media campaign pointless: an inter-subjectively opaque discourse to no one).

So far it does seem to get better. I haven't had a good solid philosophic crisis in something like five years, and I almost miss them now. Life was more exciting back then. When I have ideas that seem like they could precipitate that way now, they mostly just leave me with a sense that I've acquired an interesting new insight that is pretty neat but increases the amount of inferential distance I have to keep track of when talking to other people.

One important thing I've found helpful is finding conversational partners who are willing to listen to your abstract digressions and then contribute useful insights. If you're doing everything all by yourself, there is a sense in which you are like "a feral child", and you should probably try to seek out others and learn to talk with them about what's going on in your respective souls. Whiteboards help. Internet-mediated text doesn't help nearly as much as conversation, in my experience. Dialogue is a different and probably better process, and the low latency and high "monkey bandwidth" are important and helpful.

Seek friends. Really. Seek friends.

I have this weird sense that there's an opportunity for some kind of "It Gets Better thing" for young philosophers

We would need to identify the sort of things that can go wrong. For example, I can identify two types of philosophic horror at the world (there might be more). One is where the world seems to have become objectively horrifying, and you can't escape from this perception, or don't want to escape from it because you believe this would require the sacrifice of your reason, values, or personality. A complementary type is where you believe the world could become infinitely better, if only everyone did X, but you're the only one who wants to do X, no-one else will support you, and in fact they try to talk you out of your ideas.

Example of the first: I know someone who believes in Many Worlds and is about to kill himself unless he can prove to himself that the worlds are "diverging" (in the jargon of Alastair Wilson) rather than "splitting". "Diverging worlds" are each self-contained, like in a single-world theory, but they can track each other for a time (i.e. the history of one will match the history of the other up to a point). "Splitting worlds" are self-explanatory - worlds that start as one and branch into many. What's so bad about the splitting worlds, he says, is that the people in this world, that you know and care about, are the ones who experience all possible outcomes, who get murdered by you in branches where you spontaneously become a killer (and add every bad thing you can think of, and can't, to the list of what happens to them). Also, distinct from this, human existence is somehow rendered meaningless because everything always happens. (I think the meaninglessness has to do with the inability to make a difference or produce outcomes, and not just the inconceivability of all possibilities being real.) In the self-contained "diverging worlds", the people you know just have one fate - their copies in the other worlds are different people - and you're saved from the horror and nihilism of the branching worlds.

Example of the second: recent LW visitor "Singularity_Utopia", who on the one hand says that an infinite perfect future of immortality and superintelligence is coming as soon as 2045, and we don't even need to work on friendliness, just focus on increasing intelligence, and that meanwhile the world could start becoming better right now if everyone embraced the knowledge of imminent "post-scarcity"... but who at the same time says on his website that his life is a living hell. I think that without a doubt this is someone whose suffering is intimately linked with the fact that they have a message of universal joy that no-one is listening to.

Now if someone proposes to be a freelance philosophical Hippocrates, they have their work cut out for them. The "victims" of these mental states tend to be very intelligent and strong-willed. Example number one thinks that only a psychopath could want to live in that sort of universe, so he doesn't want to solve his problem by changing his attitude towards splitting worlds; the only positive solution would be to discover that this ontology is objectively unlikely. Example number two is trying to save the world by living his life this way, so I suppose it seems supremely important to him to keep it up. He might be even less likely to change his ways.

How did your first friend turn out?

He's still alive, but medicated and still miserable; by his account, only able to think for a few hours each day. MWI is his personal basilisk. For a while last year, he was excited when the Nobelist Gerard 't Hooft was proposing to get quantum field theory from cellular automata, but that was only for very special QFTs, and no-one else has built on those papers so far. Right now he's down because everyone he asks thinks David Wallace (Oxford exponent of MWI) is brilliant. I originally heard from him because of my skepticism about MWI, expressed many times on this site.

Is he still on Less Wrong?

Not really (though I told him about this thread). He spends his time corresponding directly with physicists and philosophers.

Any way for me to contact him?

(Taken to PM.)

Hang on, didn't Everett believe that in the event of death, his consciousness would just follow a stream of events that leads to his not being dead?


Maybe consider introducing him to instrumentalism. Worrying to death about untestables is kind of sad.

It took me 3 months to realize that I completely failed to inquire about your second friend. I must have seen him as having the lesser problem and dismissed it out of hand, without realizing that acknowledging the perceived ease of a problem isn't the same as actually solving it, like putting off easy homework.

How is your second friend turning out?

He isn't my friend, he's just some guy who decided to be a singularity cheerleader. But his website is still the same - super-AI is inherently good and can't come soon enough, scarcity is the cause of most problems and abundance is coming and will fix it, life in the pre-singularity world is tragic and boring and bearable only because the future will be infinitely better.

So far it does seem to get better. I haven't had a good solid philosophic crisis in something like five years, and I almost miss them now. Life was more exciting back then. When I have ideas that seem like they could precipitate that way now, they mostly just leave me with a sense that I've acquired an interesting new insight that is pretty neat but increases the amount of inferential distance I have to keep track of when talking to other people.

I wonder how much of this is due to acquiring a memetic immune system or otherwise simply learning how to compartmentalize.

So far as I can tell, my resilience in this way is not an acquired defect but rather an acquired sophistication.

When my working philosophic assumptions crashed in the past, I learned a number of ways to handle it. For one example, I've seen that when something surprises me, for the most part it all adds up to normality, and crazy new ways of looking at the world are generally not important in normal circumstances for daily human life. I still have to get dressed every morning and eat food like a mortal, but now I have a new tool to apply in special cases or leverage in contexts where I can control many parameters and apply more of an engineering mindset and get better outcomes. For a specific example, variations on egoism put me in a state of profound aporia for about three months in high school, but eventually I worked out a model of motivational psychology with enough moving parts that I could reconcile what I actually saw of people's pursuit of things they "wanted" and translate naive people's emission of words like "values" and "selfish" and "moral" and so on in ways that made sense, even if it sometimes demonstrated philosophic confusions similar to wish fulfillment fantasies.

It helps, perhaps, that my parents didn't force some crazy literalistic theism down my throat but rather tended to do things like tell me that I should keep an open mind and never stop asking "why?" the way most people do for some reason. It's not like I suddenly started taking the verbal/theoretical content of my brain seriously in an act of parental defiance and accidentally took up adulterer stoning because that had been lying around in my head in an unexamined way. I was never encouraged to stone adulterers. I was raised on a farm in the redwoods by parents without college degrees and sent off to academia naively thinking it worked the way it does in stories about Science And Progress. If I have such confusions remaining, my guess is that I take epistemology too seriously and imagine that other people might be helped by being better at it :-P

Eliezer's quoting of Feynman in the compartmentalization link seems naive to me, but it's a naivete that I shared when I was 19. His text there might have appealed to me then because it whispers to the part of my soul that wants to just work on an interesting puzzle, get the right answer, apply it to the world, and have a good life doing that. The same part of my soul says that anything which might require compromises during a political competition for research resources isn't actually about a political competition for resources but is instead just other people "being dumb". It's nicer to think of yourself as having a scientific insight rather than an ignorance of the pragmatics of political economy. Science is fun and morally praiseworthy, and a lot of people are interested in doing it. But where there's muck, there's brass, so it is tricky to figure out a way to be entirely devoted to that and get paid at the same time.

It helps, perhaps, that my parents didn't force some crazy literalistic theism down my throat but rather tended to do things like tell me that I should keep an open mind and never stop asking "why?" the way most people do for some reason. It's not like I suddenly started taking the verbal/theoretical content of my brain seriously in an act of parental defiance and accidentally took up adulterer stoning because that had been lying around in my head in an unexamined way. I was never encouraged to stone adulterers. I was raised on a farm in the redwoods by parents without college degrees and sent off to academia naively thinking it worked the way it does in stories about Science And Progress. If I have such confusions remaining, my guess is that I take epistemology too seriously and imagine that other people might be helped by being better at it :-P

The stoning adulterers part is an extreme hypothetical example of taking a Christian meme to its logical conclusion. As PhilGoetz mentioned in the post, secular memes can also have this problem. The same even applies to some of the 'rationalist' memes around here.

One important thing I've found helpful is finding conversational partners who are willing to listen to your abstract digressions and then contribute useful insights. If you're doing everything all by yourself, there is a sense in which you are like "a feral child", and you should probably try to seek out others and learn to talk with them about what's going on in your respective souls. Whiteboards help. Internet-mediated text doesn't help nearly as much as conversation, in my experience. Dialogue is a different and probably better process, and the low latency and high "monkey bandwidth" are important and helpful.

Any sort of feedback seems able to break loops like these crises. It's kind of odd. I've wondered if there's a concrete empirical explanation related to neural networks and priming - the looping renders you literally unable to think of any creative objections or insights.

I've always hesitated telling others about the problem for fear of spreading the memetic immunity disorder to somebody else.

Expected utility maximization is a tool in your mental toolbox that helps clarify your thinking, not something that you'd try to carry out explicitly.

http://lesswrong.com/lw/sg/when_not_to_use_probabilities/ :

I don't always advocate that human beings, trying to solve their problems, should try to make up verbal probabilities, and then apply the laws of probability theory or decision theory to whatever number they just made up, and then use the result as their final belief or decision.

The laws of probability are laws, not suggestions, but often the true Law is too difficult for us humans to compute. If P != NP and the universe has no source of exponential computing power, then there are evidential updates too difficult for even a superintelligence to compute - even though the probabilities would be quite well-defined, if we could afford to calculate them.

So sometimes you don't apply probability theory. Especially if you're human, and your brain has evolved with all sorts of useful algorithms for uncertain reasoning, that don't involve verbal probability assignments.

Not sure where a flying ball will land? I don't advise trying to formulate a probability distribution over its landing spots, performing deliberate Bayesian updates on your glances at the ball, and calculating the expected utility of all possible strings of motor instructions to your muscles.

Trying to catch a flying ball, you're probably better off with your brain's built-in mechanisms than using deliberative verbal reasoning to invent or manipulate probabilities.

Our brains already do expected utility maximization, or something approximating it, automatically and subconsciously. There's no need to try to override those calculations with explicit reasoning if it's not necessary.

So when do we use the principle of expected utility? Mostly, when dealing with abstract issues that our brains haven't evolved to deal with. Investing, deciding whether to buy insurance, donating to charity, knowing not to play the lottery, that sort of thing. It also lends itself to some useful heuristics: for instance, Bryan Caplan points out that

The truth about essay contests is that the number of submissions is usually absurdly low considering the size of the prizes and the opportunity cost of students' time.

And that's really an expected utility calculation: your chances of winning in an essay contest might be pretty low, but so is your cost of attending, and the prizes are large enough to make the expected utility positive. That kind of thing.
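A minimal sketch of that calculation, with purely illustrative numbers (the prize, entrant count, time cost, and hourly value are all made up):

```python
# Rough expected-value check for entering an essay contest.
# All figures below are invented for illustration.

prize = 2500.0           # prize money in dollars
typical_entrants = 10    # Caplan's point: this number is usually absurdly low
p_win = 1.0 / typical_entrants  # crude assumption: roughly equal chances
hours_to_write = 8
value_of_an_hour = 15.0  # opportunity cost of your time

expected_value = p_win * prize - hours_to_write * value_of_an_hour
print("enter" if expected_value > 0 else "skip", expected_value)  # enter 130.0
```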

Our brains already do expected utility maximization, or something approximating it, automatically and subconsciously.

Or something that expected utility maximization is an approximation of.

So when do we use the principle of expected utility? Mostly, when dealing with abstract issues that our brains haven't evolved to deal with.

Right, but of what use is it if we still rely on our intuitions to come up with a prior probability and a numerical utility assignment?

...your chances of winning in an essay contest might be pretty low, but so is your cost of attending, and the prizes are large enough to make the expected utility positive.

Why wouldn't I just assign any utility to get the desired result? If you can't ground utility in something that is physically measurable, then of what use is it other than giving your beliefs and decisions a veneer of respectability?

Right, but of what use is it if we still rely on our intuitions to come up with a prior probability and a numerical utility assignment?

Just because our brains haven't evolved to deal with a specific circumstance doesn't mean that all of our intuitions would be worthless in that circumstance. Me trying to decide what to invest in doesn't mean that my brain's claim of me currently sitting in a chair inside my home would suddenly become a worthless hallucination. Even if I'm investing, I can still trust the intuition that I'm at my home and sitting in a chair.

If we apply an intuition Y to situation X, then Y might always produce correct results for that X, or it might always produce wrong results for that X, or it might be somewhere in between. Sometimes we take an intuition that we know to be incorrect, and replace it with another decision-making procedure, such as the principle of expected utility. If the intuitions which feed into that decision-making procedure are thought to be correct, then that's all that we need to do. Our intuitions may be incapable of producing exact numeric estimates, but they can still provide rough magnitudes.

Which intuitions are correct in which situations? When do we need to replace an intuition with learned rules or decision-making procedures? Well, that's what the heuristics and biases literature tries to find out.
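One way to use rough magnitudes without pretending to precision is to check whether the decision even depends on the exact numbers. A sketch, with invented ranges standing in for what intuition finds plausible:

```python
# Check whether the decision flips anywhere in the range of values your
# intuition considers plausible. The ranges below are made up for illustration.

import itertools

p_win_range = [0.02, 0.05, 0.15]   # plausible chances of winning
prize_range = [1000.0, 2500.0]     # plausible prize sizes in dollars
cost_range = [50.0, 150.0, 300.0]  # plausible cost of your time in dollars

decisions = set()
for p_win, prize, cost in itertools.product(p_win_range, prize_range, cost_range):
    decisions.add("enter" if p_win * prize > cost else "skip")

if len(decisions) == 1:
    print("decision is robust to the exact numbers:", decisions.pop())
else:
    print("decision depends on the exact numbers; estimate more carefully")
```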

Why wouldn't I just assign any utility to get the desired result? If you can't ground utility in something that is physically measurable, then of what use is it other than giving your beliefs and decisions a veneer of respectability?

What? You don't "assign a utility to get the desired result", you try to figure out what the desired result is. Of course, if you've already made a decision and want to rationalize it, then sure, you can do it by dressing it up in the language of expected utility. But that doesn't change the fact that if you want to know whether you should participate, the principle of expected utility is the way to get the best result.

Here the quantities involved even let you make an explicit calculation, if you want to: you know what the prizes are, you know what you have to give up to participate, and you can find out how many people typically participate in such events. Though you can probably get close enough to the right result even without an explicit calculation.

Sometimes we take an intuition that we know to be incorrect, and replace it with another decision-making procedure, such as the principle of expected utility. If the intuitions which feed into that decision-making procedure are thought to be correct, then that's all that we need to do.

1) What decision-making procedure do you use to replace intuition with another decision-making procedure?

2) What decision-making procedure is used to come up with numerical utility assignments and what evidence do you have that it is correct by a certain probability?

Our intuitions may be incapable of producing exact numeric estimates, but they can still provide rough magnitudes.

3) What method is used to convert those rough estimates provided by our intuition into numeric estimates?

3b) What evidence do you have that converting intuitive judgements of the utility of world states into numeric estimates increases the probability of attaining what you really want?

What? You don't "assign a utility to get the desired result", you try to figure out what the desired result is.

An example would be FAI research. There is virtually no information with which to judge its expected utility. If you are in favor of it, you can cite the positive utility associated with a galactic civilization; if you are against it, you can cite the negative utility associated with getting it wrong or with making UFAI more likely by solving decision theory.

The desired outcome is found by calculating how much it satisfies your utility function, e.g. how many utils you assign to an hour of awesome sex and how much negative utility you assign to an hour of horrible torture.

Humans do not have stable utility functions and can simply change the weighting of various factors and thereby the action that maximizes expected utility.

What evidence do you have that the whole business of expected utility maximization isn't just a perfect tool to rationalize biases?

(Note that I am not talking about the technically ideal case of a perfectly rational (whatever that means in this context) computationally unbounded agent.)

Here the quantities involved even let you make an explicit calculation, if you want to: you know what the prizes are, you know what you have to give up to participate, and you can find out how many people typically participate in such events. Though you can probably get close enough to the right result even without an explicit calculation.

Sure, but if attending an event is dangerous because the crime rate in that area is very high due to recent riots, what prevents you from adjusting your utility function so that you attend anyway? In other words, what difference is there between just doing what you want based on naive introspection and using expected utility calculations? If utility is completely subjective and arbitrary, then it won't help you to evaluate different actions objectively. Winning is then just a label you can assign to any world state you like best at any given moment.

What would be irrational about playing the lottery all day long, as long as I assign huge amounts of utility to money won by means of playing the lottery, and therefore to world states where I am rich by means of playing the lottery?


How did Eliezer Yudkowsky compute that it would maximize his expected utility to visit New York?

Why would anyone do that? (In the sense that your footnotes suggest this should be taken: quantifying over all possible worlds, trying to explicitly ground utility, etc.)

We were human beings long before we started reading about rationality. I imagine EY looked at his schedule, his bank account, and the cost of a round-trip flight to New York, and said, "This might be cool, let's do it."

At the end of the day, everyone is still a human being. Everything adds up to normal, whether normality's perfectly optimized or not.

Yes, my model agrees with that. But then it would be fairer to speak about things as they really are. To say "I was thinking for two minutes, and it seemed cool and without obvious problems, so I decided to do it". You know, like an average mortal would do.

Speaking in a manner that suggests that decisions are done otherwise, seems to me just as dishonest as when a theist says "I heard Jesus speaking to me", when in reality it was something like "I got this idea, it was without obvious problems, and it seemed like it could raise my status in my religious community".

Not pretending to be something that I am not -- isn't this a part of the rationalist creed?

If people are optimizing their expected utility functions, I want to believe they are optimizing their expected utility functions. If people are choosing on a heuristic and rationalizing later, I want to believe they are choosing on a heuristic and rationalizing later. Let me not become attached to status in a rationalist community.


But then it would be fairer to speak about things as they really are.

I don't understand. Who is not speaking about things like they really are? EY doesn't even mention expected utility in his post. That was all a figment of someone's imagination.

If people are optimizing their expected utility functions, I want to believe they are optimizing their expected utility functions. If people are choosing on a heuristic and rationalizing later, I want to believe they are choosing on a heuristic and rationalizing later. Let me not become attached to status in a rationalist community.

No need to Gendlin. People aren't optimizing their utility functions, because they don't have conscious access to their utility functions.

This seems to conflate rationality-centered material with FAI/optimal decision theory material, lumping them all under the heading "utility maximization". These individual parts are fundamentally distinct and aim at different things.

Rationality-centered material does include some thought about utility, Fermi calculations, and heuristics, but it focuses on debiasing, recognizing cognitive heuristics that can get in the way (such as rationalization and cached thoughts), and the like. I've managed to apply them a bit in my day-to-day thought. For instance, recognizing the fundamental attribution error has been very useful to me, because I tend to be judgmental. This has in the past led to me isolating myself much more than I should and sinking into misanthropy. For the longest time I avoided those thoughts; now I've found that I can treat them in a more clinical manner and have gained some perspective on them. This helps me raise my overall utility, but it does not perfectly optimize it by any stretch of the imagination - nor is it meant to; it just makes things better.

Bottomless recursion with respect to expected utility calculations is a decision theory/rational choice theory issue and an AI issue, but it is not a rationality issue. To be more rational, we don't have to optimize; we just have to recognize that one feasible procedure is better than another, and work on replacing our current procedure with this new, better one. If we recognize that a procedure is impossible for us to use in practice, we don't use it - but it might be useful to talk about in a different, theoretical context such as FAI or decision theory. TDT and UDT were not made for practical use by humans - they were made to address a theoretical problem in FAI and formal decision theory, even though some people claim to have made good use of them (even here we see TDT being used as a psychological aid for overcoming hyperbolic discounting more than as a formal tool of any sort).

Also, there are different levels of analysis appropriate for different sorts of things. If I'm analyzing the likelihood of an asteroid impact over some timescale, I'm going to include much more explicit detail there, than in my analysis of whether I should go hang out with LWers in New York for a bit. I might assess lots of probability measures in a paper analyzing a topic, but doing so on the fly rarely crosses my mind (I often do a quick and dirty utility calculation to decide whether or not to do something, e.g. - which road home has the most right turns, what's the expected number of red lights given the time of day etc., but that's it).
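For instance, the quick-and-dirty road-home calculation might look something like this (base times, light counts, red-light probabilities, and delays are all invented):

```python
# Quick-and-dirty comparison of two routes home, using made-up numbers.

routes = {
    "main_road":  {"minutes": 12, "lights": 6, "p_red": 0.5, "red_delay": 1.0},
    "back_roads": {"minutes": 15, "lights": 2, "p_red": 0.4, "red_delay": 1.0},
}

def expected_minutes(route):
    return route["minutes"] + route["lights"] * route["p_red"] * route["red_delay"]

for name, route in routes.items():
    print(name, expected_minutes(route))   # main_road 15.0, back_roads 15.8
print("take:", min(routes, key=lambda name: expected_minutes(routes[name])))
```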

Overall, I'm getting the impression that all of these things are being lumped together when they should not be. Utility maximization means very distinct things in these very distinct contexts, and most technical aspects of it were not intended for explicit everyday use by humans; they were intended for use by specialists in certain contexts.

We're not trying to create our reasoning processes from scratch, just improve on the ones we have! I think that's a vital distinction.

I don't do an expected-utility calculation about what I'm going to wear today; it wouldn't be worth the mental energy, since my subconscious instincts on how formal I should look and what-matches-what seem to do just as well as my conscious reasoning would. But if I'm deciding something bigger - whether to spend a few hundred bucks on a new suit, for example - then it's worth it to think about quantitative factors (I can estimate how often I'd wear it, for instance), and try my best to weigh these in the decision of how much it would be worth to me.

That's not perfect expected utility maximization, since I don't have a conscious handle on the social factors that determine how valuable looking a bit better is to my career and happiness, nor have I solved all the problems of the ontology of value. But it's a better algorithm than what many people subconsciously use: "Oh, there's a nice suit on sale - but I already spent a few hundred bucks renewing my car insurance today, so I feel like I'm all spent out, so no", and later, "Hey, there's a nice-looking suit - someone just made me feel badly dressed yesterday, so I'll buy it even if it's expensive".
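A back-of-the-envelope version of that suit decision, with made-up numbers, just to show the quantitative factors entering explicitly:

```python
# Rough expected-value check for buying the suit. All numbers are invented;
# the point is only that the estimates enter the decision explicitly
# rather than "I already spent money today, so no."

price = 300.0           # cost of the suit
wears_per_year = 25     # rough estimate of how often it would get worn
years_of_use = 4
value_per_wear = 5.0    # rough guess at what looking sharper is worth each time

expected_value = wears_per_year * years_of_use * value_per_wear
print("buy" if expected_value > price else "pass")   # 500 > 300, so buy
```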

I'll just quote Lukeprog's "Facing the Singularity, chapter 3" because I'm lazy.

...I stole this example from Julia Galef’s talk “The Straw Vulcan.” Her second example of “straw man rationality” or Hollywood Rationality is the idea that you shouldn’t make a decision until you have all the information you need. This one shows up in Star Trek, too. Giant space amoebas have appeared not far from the Enterprise, and Kirk asks Spock for his analysis. Spock replies: “I have no analysis due to insufficient information… The computers contain nothing on this phenomenon. It is beyond our experience, and the new information is not yet significant.”

Sometimes it’s rational to seek more information before acting, but sometimes you need to just act on what you think you know. You have to weigh the cost of getting more information with the expected value of that information. Consider another example from Gerd Gigerenzer, about a man considering whom to marry:

...He would have to look at the probabilities of various consequences of marrying each of them — whether the woman would still talk to him after they’re married, whether she’d take care of their children, whatever is important to him — and the utilities of each of these… After many years of research he’d probably find out that his final choice had already married another person who didn’t do these computations, and actually just fell in love with her.

Such behavior is irrational, a failure to make the correct value of information calculation.
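A minimal sketch of a value-of-information check, with invented numbers: gathering more data is only worth it if the chance that it changes your decision, times the gain from the improved decision, exceeds the cost of gathering it.

```python
# Toy value-of-information calculation; all figures are illustrative.

cost_of_more_info = 10.0          # e.g. hours of research, in dollar terms
p_decision_changes = 0.2          # chance the new data flips your choice
gain_if_decision_improves = 30.0  # value of ending up with the better option

value_of_information = p_decision_changes * gain_if_decision_improves
print("gather more info" if value_of_information > cost_of_more_info else "act now")
# 0.2 * 30 = 6 < 10, so in this toy case you act on what you already know
```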

Edit: I actually agree with you and muflax that some of this rationality-stuff is pretty problematic and/or impossible to apply in real life. But I think Yudkowsky's visit to New York is a bad example to illustrate these concerns.

Sometimes it’s rational to seek more information before acting, but sometimes you need to just act on what you think you know. You have to weigh the cost of getting more information with the expected value of that information.

How does this change anything? You are still left with an expected utility calculation. In this case it is the expected utility of gathering more information.

From the article:

If intuition will give you better results than slow, deliberative reasoning, then rationally you should use intuition.

Again, this changes nothing. In this case you will have to calculate the expected utility of using your intuition. Which seems just as impossible to me. All you can possibly do is use your intuition to decide if you should use your intuition.

Suppose someone told you that their intuition says that they should not act on the information they have about risks from AI, and that the value of seeking more information is too low because they don't expect to find any information that would change their mind at this point. Then how could you possibly come to an agreement with them about risks from AI if you both rely on your intuitions?

The post you linked to talks a lot about "winning". But if you define rationality in terms of winning, then how exactly are you going to figure out what is "rational" without any information about how reliable your intuitions or heuristics are in a special situation?

The article seems to argue against a frequentist approach when it is the only way to decide which is the winning strategy. Otherwise, if you are not willing to wait for new information, you rely on your intuition in any case, whether your choice is to use the principle of expected utility or to rely on your intuition.

In other words, if a frequentist approach is impossible you could as well just say that you "feel" you are right. Not that it is rational to do it.

All you can possibly do is use your intuition to decide if you should use your intuition.

Yes, you have to start with something more basic than expected utility calculation, or you run into an infinite regress. Expected utility calculations are tools and you use them to achieve your goals more effectively. If you want to shoot yourself in the foot, nobody can prevent you from doing so.

Suppose someone told you that their intuition says that they should not act on the information they have about risks from AI, and that the value of seeking more information is too low because they don't expect to find any information that would change their mind at this point. Then how could you possibly come to an agreement with them about risks from AI if you both rely on your intuitions?

You can't reach an agreement. Humans (or minds in general) with widely divergent intuitions or epistemological standards have very different beliefs and it can be impossible for them to come to an agreement. There are no universally compelling arguments that convince all possible minds.

I don't see how it's impossible to assign probabilities by using your intuitions. "Go ahead and pick a number out of the air, but then be very willing to revise it upon the slightest evidence that it doesn't fit well with your other numbers."

Again, this changes nothing. In this case you will have to calculate the expected utility of using your intuition. Which seems just as impossible to me.

I totally agree that it's impossible exactly. So people use approximations everywhere. The trigger for the habit is thinking something like "Moving to California is a big decision." Then you think "Is there a possibility for a big gain if I use more deliberative reasoning?" Then, using a few heuristics, you may answer "yes." And so on, approximating at every step, since that's the only way to get anything done.

So people use approximations everywhere.

You mean something along the lines of what I have written here?

Hm, that seems to be more in the context of "patching over" ideas that are mostly right but have some problems. I'm talking about "fixing" theories that are exactly right but impossible to apply.

One of the more interesting experiences I've had learning about physics is how much of our understanding of physics is a massive oversimplification, because it's just too hard to calculate the optimal answer. Most Nobel Prize-winning work comes not from new laws of physics, but from figuring out how to approximate those laws in a way that is complicated enough to be useful but just simple enough to be solvable. And so with rationality in this case, I think. The high-importance rationality work is not about new laws of rationality or strange but easy stuff, but about approximations of rationality that are complicated enough to be useful but simple enough to be solvable.

You don't need to predict the futures and evaluate the ultimate utilities of having different sums of money to switch in the Monty Hall problem, thanks to having been taught statistics, though.

I have posted here before that calculating real utility numbers for comparison is quite a silly exercise. One can do a lot better by calculating the result of the comparison - e.g. by comparing the futures against each other side by side - effectively evaluating just the expected difference in utility, only to the point where the sign is known (note that when you start bolting heuristic improvements onto this approach to utility maximization you get improved behaviour that is not necessarily consistent with utility maximization).

With regards to the flying-to-NY scenario, the expected utility difference is measurable in micro-deaths, i.e. is very small, and if there are no short-cut strategic values (such as money) to maximize instead of the utility, one needs excessively precise calculations to decide on this action. Meaning that one could just as well do as one wishes. It's splitting hairs, really.


You don't need to predict the futures and evaluate the ultimate utilities of having different sums of money to switch in the Monty Hall problem, thanks to having been taught statistics, though.

I am not sure what you are saying here. What I am saying is that it is impossible to tell how much more utility you assign to world states where you own a car versus world states where you own a goat. You don't even know if you wouldn't be happier becoming a goatherder.

You can exchange car for goat plus other things, and you can't exchange goat for car plus other things, so you can figure out that you're better off getting the car. Maximization of options for future decisions is a very solid heuristic that's more local in time.

You can exchange car for goat plus other things, and you can't exchange goat for car plus other things...

You seem to be moving the problem onto another level by partially grounding utility in material goods, or money, which can be exchanged for whatever it is that you really want.

I am much less troubled by expected utility maximization if utility is grounded in an empirical measure. The problem is, what measure are you going to use?

In a well-defined thought experiment like the Monty Hall problem it is relatively clear that a car would be the better choice, because it can be exchanged for more of the other outcome. But in practice the problem is that there are always nearly infinitely many variables, actions, and outcomes. It is hardly the best choice to maximize the measure that bears the label "utility" by taking part in a game show. So the question about the practical applicability of expected utility maximization, even as an approximation that deserves its name, remains.

Anyway, once you define utility in terms of an empirical measure, you solve a lot of the problems that I referred to in the original post. But this opens another can of worms. Whether you define utility in terms of happiness or in terms of money doesn't matter. You will end up maximizing the most useful quantity, e.g. computational resources. In other words, you'll be forced to ignore your complex values in order to maximize whatever most closely resembles the definition of utility in terms of an empirical measure: that which can in turn be transformed into, exchanged for, or traded for all other quantities.

Well, the issue with 'utility maximization' is that people instantly think of some real-valued number that is being calculated, compared, etc. That's not how it can possibly work in practice. In practice, you have unknowns; but you don't always need to assign defined numerical values to unknowns to compare expressions involving unknowns.

In the case of money, having more money results in no lower future utility than having less money, because in the future there's the option to give up the money should it be found harmful - and that's almost independent of how the utility function is defined.

Actually, think of chess as an example. The final utility values are win, tie, and loss. A heuristic that all chess players use is to maximize the piece imbalance - have more pieces than the opponent, better located perhaps, etc. - in the foreseeable future, if they can't foresee the end of the game. This works for many games other than chess, which have different win conditions.
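A sketch of that chess heuristic, using the conventional rough piece values as a proxy for the real win/tie/loss utility:

```python
# Material count as a proxy for the true (usually incomputable) value of a
# chess position. Piece weights are the conventional rough values.

PIECE_VALUES = {"P": 1, "N": 3, "B": 3, "R": 5, "Q": 9}

def material_balance(my_pieces, opponent_pieces):
    """Positive means I am ahead in material; used to compare candidate lines."""
    return (sum(PIECE_VALUES[p] for p in my_pieces)
            - sum(PIECE_VALUES[p] for p in opponent_pieces))

# e.g. a rook and three pawns against a bishop and three pawns:
print(material_balance(["R", "P", "P", "P"], ["B", "P", "P", "P"]))  # 2
```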


I've had a fair amount of experience with goats. Trust me, you want none of it. Awful creatures. Go with the car. Or a bicycle, if you live in a place where they're practical. Or jumping stilts, if you want to travel in style. Really anything but goats.

P.S. If you really want to know how I feel about Less Wrong then read the post 'Ontological Therapy' by user:muflax.

That post reads like a perfect example of what can go wrong with self-modifying minds.

P.S. If you really want to know how I feel about Less Wrong then read the post 'Ontological Therapy' by user:muflax.

Ohhhh, thanks for the link, I love reading things like this. And it even quotes me.

Ohhhh, thanks for the link, I love reading things like this. And it even quotes me.

Maybe you'll enjoy the following as well:

...if an infinite selection of all logically possible universes exists, then many of them will contain gods, if gods are logically possible.

Probability combined with the law of large numbers combined with the realities of cosmological scales of space and time entails some very weird things. Which are nevertheless certainly true. I’m not speaking of Nick Bostrom’s bizarre argument that we must be living in a simulated universe (Are you Living in a Simulation?), which doesn’t really work, because it requires accepting the extremely implausible premise that most civilizations will behave in the most horrifically immoral way imaginable, and for no practical reason whatever (in all good sense, by far almost all sims that anyone will ever generate will be games and paradises, not countless trillions of aimlessly tedious worlds with thousands of years of pointless wars, holocausts, plagues, and famines). Rather, I’m speaking of Boltzmann Brains.

If the universe were to slowly expand forever, even if it were to fade into a heat death of total equilibrium, even then, simply due to the laws of probability, the random bouncing around of matter and energy would inevitably assemble a working brain. Just by chance. It’s only a matter of time. Maybe once every trillion trillion years in any expanse of a trillion trillion light years. But inevitably. And in fact, it would happen again and again, forever. So when all is said and done, there will be infinitely many more Boltzmann brains created in this universe than evolved brains like ours. The downside, of course, is that by far nearly all these brains will immediately die in the icy vacuum of space (don’t worry, by far most of these won’t survive long enough to experience even one moment of consciousness). And they would almost never have any company.

...

But the worlds lucky enough to get them will experience some pretty cool, or some pretty horrific, fates. In some, this god will be randomly evil and create civilizations just to torment them for fun (and let me reiterate: this may already have happened; in fact it may already be happening right now, in universes or regions of spacetime vastly beyond ours). In others, this god will be randomly awesome and create a paradise for his gentle children.

...

This will happen. It probably already has happened. It probably is happening as I type this. It’s a logically necessary truth.

Which reminds me of the following:

If the Universe is Spatially Infinite…

…there are an infinite number of identical copies of you on an infinite number of identical copies of Earth. You all always make identical decisions.

…there are an infinite number of identical copies of Earth, except that each of them is also occupied by Thor.

…as above, but it’s the Thor from Marvel Comics.

…there are an infinite number of Earths with alternate histories because they have dragons on them.

…on an infinite number of those Earths, the dragons are all nazis.

…billions of times every second, an infinite number of identical copies of you spring into existence in the depths of space and immediately die freezing and suffocating.

…there are an infinite number of people who are just like you except they’re serial killers.

…identical copies of everyone you love are being tortured to death right now.

…by identical copies of you.

…there’s still no god.

…there’s no hope of ever fixing the universe’s horrors, because if it were possible it would have been done already.

…an infinite number of identical copies of me are hoping that the universe isn’t infinite.

Bostrom's 'Quantity of Experience: Brain-Duplication and Degrees of Consciousness' comes to the rescue again. Beware of basilisks (the ones here are different, but no less numerous).

Thanks, but those are significantly less interesting.


As Kaj_Sotala pointed out, your mind makes decisions all the time, and so is an excellent optimizer already, albeit a subconscious one. Your job as an aspiring rationalist is to feed it quality inputs and navigate through cognitive biases, then trust it to do its closed-source magic to come up with one or more alternatives.

Here's another example of how utility maximization doesn't work by just calculating the utilities of two futures and then comparing them.

There is a random number N on a paper in an envelope; it is entirely uncorrelated with your choice (this is not Newcomb's paradox). You don't know the probability distribution or anything else about that number. You can choose between receiving a $1000 prize if N > 0 or receiving a $1000 prize if N > 1. Obviously you should choose the former; even though you don't know the expected utilities of either choice, you know one is greater than the other (technically, greater or equal). (Also, even if you have not yet decided what to do with the $1000, considering that you will have the option to give it up in the future, you can see that the utility of a future in which you have $1000 is no less than the utility of a future in which you don't.) One doesn't simply compare reals to maximize utility. There's no need to assign made-up values to arrive at correct answers.

(The algebra drills at school ought to help you understand that you don't need to assign made-up values to unknowns.)
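The envelope example can be written out explicitly, showing that the ranking falls out of case-by-case dominance without assigning any numeric utilities:

```python
# Option A ($1000 if N > 0) weakly dominates option B ($1000 if N > 1),
# so no utilities and no distribution over N are needed to rank them.

def option_a(n):
    return 1000 if n > 0 else 0

def option_b(n):
    return 1000 if n > 1 else 0

# For every possible N, A pays at least as much as B, and strictly more
# when 0 < N <= 1, whatever the unknown distribution of N happens to be.
for n in [-5, 0, 0.5, 1, 2, 100]:
    assert option_a(n) >= option_b(n)
print("option A weakly dominates option B: choose A")
```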


Explicitly working out the probabilities for a situation is not always possible or desirable for real world situations. We're just not that smart.

You should not, however, take that to mean that decision theory cannot work at all. You should certainly, if you find yourself losing money, stop and do the math and find out if you're being Dutch-booked. You should certainly do the math on investments to find out if they have positive expected utility. Anywhere that you have real numbers, probability theory suddenly outstrips all the hazy heuristics you normally rely on.

It can even be applied closer to home. When I'm looking at whether or not to drop a class, I plot a probability distribution for my past grades, figure out the scores I need to pass, and compute, if I continue to perform at past levels, my probability of passing the class with the minimum grade to maintain my scholarships. Then I figure out how much money I would pay to suddenly, magically, have passed the class, how much money someone would have to pay me to take the class, and how much someone would have to pay me to do the paperwork to switch. If the math doesn't wash, I drop the class, study up over the break, and re-take it the next semester with a better utility forecast.
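A sketch of that drop-the-class calculation, with invented grades, thresholds, and dollar values (and a deliberately crude normality assumption):

```python
# Toy version of the decision described above; all numbers are made up.

import statistics
from math import erf, sqrt

past_scores = [72, 78, 65, 80, 74]   # past performance in the class
passing_score = 70                   # minimum needed to keep the scholarship

mu = statistics.mean(past_scores)
sigma = statistics.stdev(past_scores)

# P(score > threshold), assuming future performance is roughly normal
# around past performance.
p_pass = 1 - 0.5 * (1 + erf((passing_score - mu) / (sigma * sqrt(2))))

value_of_passing = 2000.0   # what I'd pay to magically have passed
cost_of_taking = 500.0      # what I'd have to be paid to sit through the class
cost_of_switching = 50.0    # hassle of the paperwork to drop and retake

eu_stay = p_pass * value_of_passing - cost_of_taking
eu_drop = -cost_of_switching  # crude: treat retaking next semester as baseline
print("stay" if eu_stay > eu_drop else "drop", "p(pass) =", round(p_pass, 2))
```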

He then has to figure out each and every possible action he might take, and study all of their logical implications

Not all of their logical implications - the idea is to make use of tree pruning.

To do so he has to figure out the probability of each world state.

...except the ones you have dispensed with via tree pruning.

This further requires him to come up with a prior probability for each case and study all available data.

Not really - people typically work with whatever data they already have available in their existing world model.

As Kaj said, your brain is wired up to do a lot of this sort of thing unconsciously - and you should make use of the existing circuitry where you can.