jkaufman comments on Open thread, Dec. 1 - Dec. 7, 2014 - Less Wrong Discussion

3 points · Post author: MrMind 01 December 2014 08:29AM

Comment author: jkaufman 01 December 2014 12:05:28PM · 17 points

Say you're undergoing surgery, and as part of this they use a kind of sedation where your mind completely stops. Not just stops getting input from the outside world, no brain activity whatsoever. Once you're sedated, is there any moral reason to finish the surgery?

Say we can run people on computers and can start and stop them at any moment, but available power fluctuates. So we come up with a system where, when power drops, we pause some of the people and restore them once there's power again. Once we've stopped someone, is there a moral reason to start them again?

My resolution to both of these cases is that I apparently care about people getting the experience of living. People dying matters in that they lose the potential for future enjoyment of living, their friends lose the enjoyment of their company, and the expectation of death makes people enjoy life less. This makes death different from brain-stopping surgery, emulation pausing, and also cryonics.

(But I'm not signed up for cryonics because I don't think the information would be preserved.)

Comment author: MockTurtle 02 December 2014 10:31:29AM · -1 points

Thinking about it this way also makes me realise how weird it feels to have different preferences for myself as opposed to other people. It feels obvious to me that I would prefer that other humans not cease to exist in the ways you described. And yet for myself, because of the lack of a personal utility function when I'm unconscious, it seems like the answer could be different - if I cease to exist, others might care, but I won't (at the time!).

Maybe one way to think about it more realistically is not to focus on what my preferences will be then (since I won't exist), but on what my preferences are now, and somehow extend that into the future regardless of the existence of a personal utility function at that future time...

Thanks for the help!