Viliam_Bur comments on Real-life expected utility maximization [response to XiXiDu] - Less Wrong

Post author: Gabriel 12 March 2012 07:03PM


Comments (11)


Comment author: Viliam_Bur 13 March 2012 11:29:33AM 0 points

If you have that moral point of view that future generations matter in proportion to their population numbers, then you get this very stark implication that existential risk mitigation has a much higher utility than pretty much anything else that you could do.
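The quoted claim can be made concrete with a back-of-the-envelope calculation. The following sketch uses entirely made-up illustrative numbers (the future population size, the risk reduction, and the conventional-intervention payoff are my assumptions, not figures from the thread): under the "future people count in proportion to their numbers" view, even a tiny reduction in extinction probability translates into an astronomical expected number of lives.

```python
# Toy expected-value comparison. All three numbers below are invented
# assumptions for illustration only, not claims from the discussion.
future_population = 10**16   # assumed count of potential future people
risk_reduction = 0.0001      # assumed absolute drop in extinction probability
lives_saved_now = 10**6      # assumed payoff of a conventional intervention

# Expected future lives saved by the risk-mitigation effort.
expected_future_lives = future_population * risk_reduction

# With these assumptions, risk mitigation wins by six orders of magnitude.
print(expected_future_lives)                      # 1e+12
print(expected_future_lives > lives_saved_now)    # True
```

Changing the assumed numbers by several orders of magnitude does not change the conclusion, which is exactly why the implication is so stark under that moral view.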

By the same logic, birth control (of any kind, including simply abstaining from sex) is like murder -- removing an individual from the next generation is like removing an individual from this generation, right? If you know that your children would on average have lives worth living, and yet you refuse to reproduce as much as possible, you are a very bad person!

Or maybe there is a difference between killing an individual who exists and not creating another hypothetical individual. In that sense, existential risk is bad because it kills all the individuals existing at the time of the disaster, but the subsequent hypothetical generations are irrelevant.

I am not sure exactly what my position on this topic is -- I feel that not having as many children as possible is not a crime, but the extinction of humanity (by any means, including all existing people deciding to abstain from reproduction) would be a huge loss. And I am not sure where to draw the line, partly because I cannot estimate the effects of, e.g., doubling or halving the planet's population. It probably depends on many other things: for example, more people could do more science and improve their lives, but they could also fight over scarcer resources, making their lives worse, and that fighting and poverty could even prevent the science.

Perhaps in some sense, not having as many children as possible today is like murder, but if it allows higher living standards, fewer wars, more science, etc., then it is just a sacrifice of the few for the benefit of the many in the post-Singularity future, so... shut up and multiply (not biologically, heh). But this seems like a very dangerous line of thought.

Comment author: Nisan 13 March 2012 08:07:36PM 0 points

I lean towards a parliamentary model of my preferences (that's the term Bostrom uses, though I'm not sure I'd use his decision theory exactly), in which one voting bloc cares about the people who are still alive and one voting bloc cares about the continued survival of (trans)human civilization. This might require giving up the aspiration to expected utility maximization.
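As a crude illustration of the two-bloc idea (this is my own weighted-vote simplification, not Bostrom's actual proposal, which involves bargaining among the delegates; the options, scores, and weights are all invented):

```python
# Toy "parliament" of moral views: each bloc is (weight, score_function),
# where the weight is the credence given to that bloc's moral view.
# A simple weighted sum stands in for the real model's bargaining process.
def parliament_choice(options, blocs):
    def total(option):
        return sum(weight * score(option) for weight, score in blocs)
    return max(options, key=total)

# Bloc 1 cares about people alive today; bloc 2 cares about the long-run
# survival of civilization. The scores below are made up for illustration.
living_bloc = lambda o: {"aid_today": 1.0, "xrisk_work": 0.2}[o]
survival_bloc = lambda o: {"aid_today": 0.1, "xrisk_work": 1.0}[o]

blocs = [(0.5, living_bloc), (0.5, survival_bloc)]
print(parliament_choice(["aid_today", "xrisk_work"], blocs))  # xrisk_work
```

Note that no single utility function is being maximized here -- the outcome depends on how credence is split between the blocs, which is one way of seeing why such a model sits uneasily with strict expected utility maximization.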