handoflixue comments on Causal Universes - Less Wrong

Post author: Eliezer_Yudkowsky 29 November 2012 04:08AM


Comment author: handoflixue 29 November 2012 12:51:34AM 1 point

If you could push a button and avert nuclear war, saving billions, would you?

Why does that answer change if the button works via transporting you back in time with the knowledge necessary to avert the war?

Either way, you're choosing between two alternate timelines. I'm failing to grasp how the "cause" of the choice being time travel changes one's valuation of the outcomes.

Comment author: The_Duck 29 November 2012 02:51:34AM 4 points

If you could push a button and avert nuclear war, saving billions, would you?

Of course.

Why does that answer change if the button works via transporting you back in time with the knowledge necessary to avert the war?

Because if time travel works by destroying universes, it causes many more deaths than it averts. To be explicit about assumptions, if our universe is being simulated on someone's computer I think it's immoral for the simulator to discard the current state of the simulation and restart it from a modified version of a past saved state, because this is tantamount to killing everyone in the current state.

[A qualification: erasing, say, the last 150 years is at least as bad as killing billions of humans, since there's essentially zero chance that the people alive today will still exist in the new timeline. But the badness of reverting and overwriting the last N seconds of the universe probably tends to zero as N tends to zero.]

Comment author: RobbBB 29 November 2012 06:20:56AM 3 points

But the cost of destroying this universe has to be weighed against the benefit of creating the new universe. Choosing not to create a universe is, in utilitarian terms, no more morally justifiable than choosing to destroy one.

Comment author: thomblake 29 November 2012 04:36:52PM 6 points

Choosing not to create a universe is, in utilitarian terms, no more morally justifiable than choosing to destroy one.

That seems to be exactly the principle that is under dispute.

Comment author: RobbBB 29 November 2012 05:34:19PM 0 points

So is the argument that we should give up utilitarianism? (If so, what should replace it?) Or is there some argument someone has in mind for why annihilation has a special disutility of its own, even when it is a necessary precondition for a slight resultant increase in utility (accompanying a mass creation)?

Comment author: The_Duck 30 November 2012 01:38:56AM 0 points

I compute utility as a function of the entire future history of the universe and not just its state at a given time. I don't see why this can't fall under the umbrella of "utilitarianism." Anyway, if your utility function doesn't do this, how do you decide at what time to compute utility? Are you optimizing the expected value of the state of the universe 10 years from now? 10,000? 10^100? Just optimize all of it.

Comment author: RobbBB 30 November 2012 01:59:34AM 0 points

I'm not disputing that we should factor in the lost utility from the future-that-would-have-been. I'm merely pointing out that we have to weigh that lost utility against the gained utility from the future-created-by-retrocausation. Choosing to go back in time means destroying one future, and creating another. But choosing not to go back in time also means, in effect, destroying one future, and creating another. Do you disagree? If we weigh the future just as strongly as the present, why should we not also weigh a different timeline's future just as strongly as our own timeline's future, given that we can pick which timeline will obtain?

Comment author: The_Duck 30 November 2012 02:14:35AM 0 points

I'm not disputing that we should factor in the lost utility from the future-that-would-have-been.

The issue for me is not the lost utility of the averted future lives. I just assign high negative utility to death itself, whenever it happens to someone who doesn't want to die, anywhere in the future history of the universe. [To be clear, by "future history of the universe" I mean everything that ever gets simulated by the simulator's computer, if our universe is a simulation.]

That's the negative utility I'm weighing against whatever utility we gain by time traveling. My moral calculus is balancing

[Future in which 1 billion die by nuclear war, plus 10^20 years (say) of human history afterwards] vs. [Future in which 6 billion die by being erased from disk, plus 10^20 years (say) of human history afterwards].

I could be persuaded to favor the second option only if the expected value of the 10^20 years of future human history is significantly better on the right side. But the expected value of that difference would have to outweigh 5 billion deaths.
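The weighing The_Duck describes can be sketched as a toy calculation. Everything here is an illustrative assumption, not anything stated in the thread: the `timeline_utility` helper, the choice of -1 utility per death, and the placeholder value of zero for the subsequent 10^20 years of history in each branch.

```python
# Toy sketch of The_Duck's moral calculus: each timeline's utility is the
# disutility of its deaths plus the utility of all subsequent history.
# All numbers are made up for illustration.

DEATH_UTILITY = -1.0  # assumed disutility per unwanted death


def timeline_utility(deaths, future_history_utility):
    """Total utility of a timeline: deaths weighed against the rest of history."""
    return deaths * DEATH_UTILITY + future_history_utility


# Timeline without time travel: nuclear war kills 1 billion, history continues.
no_time_travel = timeline_utility(deaths=1e9, future_history_utility=0.0)

# Timeline with time travel: 6 billion are erased from disk, history continues.
time_travel = timeline_utility(deaths=6e9, future_history_utility=0.0)

# With equally valuable futures, the time-travel branch comes out worse
# by the equivalent of 5 billion deaths.
print(time_travel - no_time_travel)  # → -5000000000.0
```

On these assumptions, time travel only wins if its `future_history_utility` term exceeds the other branch's by more than 5 billion deaths' worth, which is exactly the bar the comment sets.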

But choosing not to go back in time also means, in effect, destroying one future, and creating another. Do you disagree?

Yes, I disagree. Have you dedicated your life to having as many children as possible? I haven't, because I feel zero moral obligation toward children who don't exist, and feel zero guilt about "destroying" their nonexistent future.

Comment author: RobbBB 30 November 2012 02:32:12AM -1 points

I would feel obliged to have as many children as possible, if I thought that having more children would increase everyone's total well-being. Obviously, it's not that simple; the quality of life of each child has to be considered, including the effects of being in a large family on each child. But I stick by my utilitarian guns. My felt moral obligation is to make the world a better place, including factoring in possible, potential, future, etc. welfares; my felt obligation is not just to make the things that already exist better off in their future occurrences.

Both of our basic ways of thinking about ethics have counter-intuitive consequences. A counter-intuitive consequence of my view is that it's no worse to annihilate a universe on a whim than it is to choose not to create a universe on a whim. I am in a strong sense a consequentialist, in that I consider utility to be about what outcomes end up obtaining and not to care a whole lot about active vs. passive harm.

Your view is far more complicated, and leads to far more strange and seemingly underdetermined cases. Your account seems to require that there be a clear category of agency, such that we can absolutely differentiate actively killing from passively allowing to die. This, I think, is not scientifically plausible. It also requires a clear category of life vs. death, which I have a similarly hard time wrapping my brain around. Even if we could in all cases unproblematically categorize each entity as either alive or dead, it wouldn't be obvious that this has much moral relevance.

Your view also requires a third metaphysically tenuous assumption: that the future of my timeline has some sort of timeless metaphysical reality, and specifically a timeless metaphysical reality that other possible timelines lack. My view requires no such assumptions, since the relevant calculation can be performed in the same way even if all that ever exists is a succession of present moments, with no reification of the future or past or of any alternate timeline. Finally, my view also doesn't require assuming that there is some sort of essence each person possesses that allows his/her identity to persist over time; as far as I'm concerned, the universe might consist of the total annihilation of everything, followed by its near-identical re-creation from scratch at the next Planck time. Reality may be a perpetual replacement, rather than a continuous temporal 'flow;' the world would look the same either way. Learning that we live in a replacement-world would be a metaphysically interesting footnote, but I find it hard to accept that it would change the ethical calculus in any way.

Suppose we occupy Timeline A, and we're deciding whether to replace it with Timeline B. My calculation is:

  1. What is the net experiential well-being of Timeline A's future?
  2. What is the net experiential well-being of Timeline B's future?

If 1 is greater than 2, time travel is unwarranted. But it's not unwarranted in that case because the denizens of Timeline B don't matter. It's unwarranted because choosing not to create Timeline B prevents less net well-being than does choosing to destroy Timeline A.
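RobbBB's two-step comparison reduces to a single decision rule. This is a minimal sketch under that reading; the function name and the example well-being numbers are invented for illustration:

```python
# Minimal sketch of RobbBB's decision rule: replace Timeline A with Timeline B
# only if B's future contains more net experiential well-being than A's.
# Example values are arbitrary.


def should_time_travel(wellbeing_a, wellbeing_b):
    """Return True iff Timeline B's future outweighs Timeline A's."""
    return wellbeing_b > wellbeing_a


print(should_time_travel(10.0, 8.0))   # → False: destroying A loses more than creating B gains
print(should_time_travel(10.0, 12.0))  # → True
```

The rule is symmetric between the timelines, which is the point under dispute with The_Duck: it attaches no extra penalty to Timeline A's denizens being annihilated beyond the well-being their future would have contained.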

Comment author: Ghatanathoah 15 August 2013 10:06:11AM 0 points

Your view is far more complicated, and leads to far more strange and seemingly underdetermined cases. Your account seems to require that there be a clear category of agency, such that we can absolutely differentiate actively killing from passively allowing to die. This, I think, is not scientifically plausible. It also requires a clear category of life vs. death, which I have a similarly hard time wrapping my brain around. Even if we could in all cases unproblematically categorize each entity as either alive or dead, it wouldn't be obvious that this has much moral relevance.

I think that we can generate an ethical system that fits The_Duck's intuitions quite well without having to use any of those concepts. All we need is a principle that it is better to have a small population with high levels of individual utility than to have a large population with small levels of individual utility, even if the total level of utility in the small population is lower.

Note that this is not "average utilitarianism." Average utilitarianism is an example of one extraordinarily bad attempt to mathematize this basic moral principle that fails due to the various unpleasant ways one can manipulate the average. Having a high average isn't valuable in itself, it's only valuable if it reflects that there is a smaller population of people with high individual utility.

This does not need any concept of agency. If someone dies and is replaced by a new person with equal or slightly higher levels of utility that is worse than if they had not died, regardless of whether the person died of natural causes or was killed by a thinking being. In the time travel scenario it does not matter whether one future is destroyed and replaced by a time traveler, or some kind of naturally occurring time storm, both are equally bad.

It does not need a clear-cut category of life vs. death. We can simply establish a continuum of undesirable changes that can happen to a mind, with the-thing-people-commonly-call-death being one of the most undesirable of all.

This continuum eliminates the need for this essence of identity you think is required as well. A person's identity is simply the part of their utility function that ranks the desirability of changes to their mind. (As a general rule, nearly any concept that humans care about a lot that seems incoherent in a reductionist framework can easily be steelmanned into something coherent).

As for the metaphysical assumption about time, I thought that was built into the way time travel was described in the thought experiment. We are supposed to think of time travel as restoring a simulation of the universe from a save point. That means that there is one "real" timeline that was actually simulated and others that were not, and won't be unless the save state is loaded.

Personally, I find this moral principle persuasive. The idea that all that matters is the total amount of utility is based on Parfit's analysis of the Non-Identity Problem, and in my view that analysis is deeply flawed. It is trivially easy to construct variations of the Non-Identity Problem where it is morally better to have the child with lower utility. I think that "All that matters is total utility" was the wrong conclusion to draw from the problem.

Comment author: DaFranker 29 November 2012 04:45:20PM 0 points

I agree. That does seem to be a key point in the disagreement.

There doesn't seem to be an obvious way to compute the relevant utility function segments of the participants involved.

Comment author: Peterdjones 29 November 2012 07:01:02PM 1 point

OTOH "destroy the universe" is not a maxim one would wish to become universal law. Nor is it virtuous. It's clearly against the rights of those involved. Etc. Utilitarianism seems to be performing particularly badly here. The more I read about it, the worse it gets.

Comment author: wedrifid 29 November 2012 01:06:10AM 0 points

If you could push a button and avert nuclear war, saving billions, would you?

Why does that answer change if the button works via transporting you back in time with the knowledge necessary to avert the war?

I probably would, but the choice is very different. I happen to know what did happen, including all the things that didn't happen. By changing that I am abandoning the guarantee that something at least as good as the status quo occurs. Most critically, I risk things like delaying a nuclear war such that a war occurs a decade later with superior technology and so leads to an extinction outcome.