XiXiDu comments on Real-life expected utility maximization [response to XiXiDu] - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
(Quick comment before I go offline for today.)
Here is the problem. If I use expected utility maximization (EU) on big, unintuitive problems like existential risk to decide what I should do about them; if I use EU to decide how to organize my life by and large; if I use EU to decide to pursue a terminal goal but then stop using it to decide which goals are instrumental to achieving the desired outcome, then how does it help to use EU at all? And otherwise, how do I decide where to draw the line?
People closely associated with SIAI/LW do use EU in support of their overall goals, yet ignore EU when it comes to flying to NY or writing a book about rationality:
-- Nick Bostrom
-- Eliezer Yudkowsky
-- Eliezer Yudkowsky
You can't be perfect, but that doesn't mean you can't do better. It also doesn't mean that you can. Maybe thinking about all this rationality business is pretty useless after all. But complaining that you can't apply expected utility perfectly is not a good argument for that.
They don't use EU in the sense of building a big complicated model, plugging probabilities into it, and then concluding "gee, option A has 13.743% larger expected utility than option B; A it is." I think they reasoned qualitatively and arrived at the conclusion that some subset of actions has much greater potential impact than others. You don't have to do precise calculations when comparing a mountain with a pebble. The references to expected utility in those quotes don't read to me as claims that all the beliefs were arrived at by formal mathematical methods, but rather as a reminder of the counterintuitive fact that the magnitudes of outcomes should affect your decisions.
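The mountain-versus-pebble point can be sketched numerically. Everything below (the probabilities, utilities, and option names) is invented purely for illustration; it reflects nobody's actual estimates:

```python
# Toy expected-utility comparison. All numbers are invented for
# illustration; nothing here reflects anyone's real estimates.

def expected_utility(outcomes):
    """outcomes: list of (probability, utility) pairs."""
    return sum(p * u for p, u in outcomes)

# "Mountain vs pebble": even with very rough numbers, one option
# dominates by orders of magnitude, so precision doesn't matter.
mountain = expected_utility([(0.01, 1e9), (0.99, 0.0)])   # ~1e7
pebble   = expected_utility([(0.9, 100.0), (0.1, 0.0)])   # ~90

assert mountain > 1000 * pebble
```

The assertion at the end is the whole point: when the gap is five orders of magnitude, whether the inputs are off by a factor of two changes nothing about the ranking.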
It's unreasonable to say that unless you are a perfect reasoner yourself, you should never talk about the theoretical principles underlying perfect reasoning, even when faced with simple situations where those principles apply trivially. Again, one could argue that the decision to direct effort at existential risk mitigation isn't as overdetermined as claimed, and that you should therefore make some calculations before invoking expected utility in that context; but that can't be argued by pointing out that Yudkowsky doesn't calculate the expected utility of plane trips.
TBH I don't see how EU is being used with regard to friendly AI.
The arguments are so much based on pure guessing that their external probabilities are very low, and the differences in utilities really could be so small that someone could conceivably say, 'I wouldn't give up $1 of mine to provide $1 million for an attempt to mitigate the risk of UFAI, even if you argue that UFAI tortures every possible human mind-state.' [Note: I presume literal $, not resources, so the global utility of creating $1 million is zero.]
The only way EU comes into play is as an appeal to the purely intuitive feeling that the efficacy of the FAI effort can't possibly be so low as to degrade such a giant utility to the trivial level of "should I chew gum or not", or even unimaginably less than that. Unfortunately, it can. The AI design space is multi-dimensional and vast. The intuitive feeling may be correct, or it may be entirely wrong. There are a lot of fallacies (being graded for effort in education contributes to one, the just-world fallacy to another) which may throw that intuitive feeling way off.
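The "giant utility degraded to triviality" point is just arithmetic: a sufficiently small probability of efficacy collapses any finite stake. The numbers below are invented to make the shape of the argument concrete, not to claim anything about the actual efficacy of any effort:

```python
# Toy numbers only: a tiny probability of efficacy can shrink an
# astronomically large stake to a trivial expected value.

utility_at_stake = 1e15    # hypothetical "giant utility" if the effort works
p_effort_matters = 1e-20   # hypothetical probability the effort changes the outcome

expected_gain = utility_at_stake * p_effort_matters
print(expected_gain)       # on the order of 1e-5: below chewing-gum level
```

Whether the real probability sits at 1e-2 or 1e-20 is exactly the question intuition is being asked to answer, and exactly where it may be wrong.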
By the same logic, birth control (of any kind, including simply not having sex) is like murder: removing an individual from the next generation is like removing an individual from this generation, right? If you know that your children would on average have lives worth living, and yet you refuse to reproduce as much as possible, you are a very bad person!
Or maybe there is a difference between killing an individual who exists and not creating another, hypothetical individual. In this sense, existential risk is bad because it kills all individuals existing at the time of the disaster, but the subsequent hypothetical generations are irrelevant.
I am not sure exactly what my position on this topic is. I feel that not having as many children as possible is not a crime, but the extinction of humanity (by any means, including all existing people deciding to abstain from reproduction) would be a huge loss. I am also not sure where to draw the line, partly because I cannot estimate the effects of, say, doubling or halving the planet's population. It probably depends on many other things: more people could do more science and improve their lives, but they could also fight over scarcer resources, making their lives worse, and that fighting and poverty could even prevent the science.
Perhaps in some sense, not having as many children as possible today is like murder, but if it allows higher living standards, fewer wars, more science, etc., then it is just a sacrifice of the few for the benefit of the many in the post-Singularity future, so... shut up and multiply (not biologically, heh). But this seems like a very dangerous line of thought.
I lean towards maybe having a parliamentary model of my preferences (that's the term Bostrom uses, but I'm not sure I'd use his decision theory, exactly) in which one voting bloc cares about the people who are still alive and one voting bloc cares about the continued survival of (trans)human civilization. This might require giving up an aspiration to expected utility maximization.
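A crude sketch of what such a two-bloc parliament might look like, reduced to weighted voting. This is not Bostrom's actual proposal (his version involves bargaining among the delegates), just a toy aggregation with invented weights and scores:

```python
# Toy two-bloc "parliament": each bloc scores the options, votes are
# weighted by the bloc's share of seats, and the highest aggregate
# score wins. All weights and scores are invented for illustration.

blocs = {
    "currently_alive": 0.5,   # cares about people who are alive today
    "civilization":    0.5,   # cares about (trans)human civilization surviving
}

scores = {
    # option: {bloc: score in [0, 1]}
    "help_people_now": {"currently_alive": 0.9, "civilization": 0.2},
    "reduce_x_risk":   {"currently_alive": 0.3, "civilization": 0.9},
}

def aggregate(option):
    return sum(blocs[b] * scores[option][b] for b in blocs)

best = max(scores, key=aggregate)
print(best)
```

Note that a fixed weighted sum like this is itself just another utility function; the interesting (and harder) part of the parliamentary idea is letting the blocs negotiate, which is where the aspiration to straightforward expected utility maximization may have to be given up.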