
Kyre comments on Distinction between "creating/preventing future lives" and "improving future lives that are already expected to exist"? - Less Wrong Discussion

5 Post author: ericyu3 12 August 2014 06:29AM




Comment author: Kyre 13 August 2014 05:08:03AM 0 points

Is there a separate name for "consequentialism over world histories" in comparison to "consequentialism over world states" ?

What I mean is, say you have a scenario where you can kill off person A and replace him with a happier person B. As I understand the terms, deontology might say "don't do it, killing people is bad". Consequentialism over world states would say "do it, utility will increase" (maybe with provisos that no one notices or remembers the killing). Consequentialism over world histories would say "the utility contribution of the final state is higher with the happy person in it, but the killing event subtracts utility and makes a net negative, so don't do it".
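The contrast between the two evaluations can be sketched numerically. All the utility values below are illustrative assumptions, not anything from the discussion; the point is only that the two rules can disagree on the same act.

```python
# Illustrative (made-up) utilities for the A-replaced-by-B scenario.
U_FINAL_WITH_A = 5     # utility of the end state if A lives on
U_FINAL_WITH_B = 8     # utility of the end state with the happier B
KILLING_PENALTY = -10  # disutility assigned to the killing event itself

def state_utility(replace_a):
    """Consequentialism over world states: score only the end state."""
    return U_FINAL_WITH_B if replace_a else U_FINAL_WITH_A

def history_utility(replace_a):
    """Consequentialism over world histories: score the end state
    plus the events that occurred along the way."""
    final = U_FINAL_WITH_B if replace_a else U_FINAL_WITH_A
    events = KILLING_PENALTY if replace_a else 0
    return final + events

# State-based: replacing A raises utility (8 > 5), so it says "do it".
print(state_utility(True), state_utility(False))      # 8 5
# History-based: the killing event makes the act a net loss (-2 < 5),
# so it says "don't do it".
print(history_utility(True), history_utility(False))  # -2 5
```

With these numbers the two rules recommend opposite acts, which is exactly the wedge the question is asking about.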

Comment author: DanielLC 13 August 2014 06:04:24AM 4 points

I don't know if there's a name for it. In general, consequentialism is over the entire timeline. You could value events that have a specific order, or value events that happen earlier, etc. I don't like the idea of judging based on things like that, but it's just part of my general dislike of judging based on things that cannot be subjectively experienced. (You can subjectively experience the memory of things happening in a certain order, but each instant of you remembering it is instantaneous, and you'd have no way of knowing if the instants happened in a different order, or even if some of them didn't happen.)

It seems likely that your post is due to a misunderstanding of my post, so let me clarify. I was not suggesting killing Alice to make way for Bob. I was talking about preventing the existence of Alice to make way for Bob. Alice is not dying. I am removing the potential for her to exist. But potential is just an abstraction. There is not some platonic potential of Alice floating out in space that I just killed.

Due to loss aversion, losing the potential for Alice may seem worse than gaining the potential for Bob, but this isn't something that can be justified on consequentialist grounds.

Comment author: Kyre 14 August 2014 05:01:54AM 0 points

I don't know if there's a name for it. In general, consequentialism is over the entire timeline.

Yes, that makes the most sense.

It seems likely that your post is due to a misunderstanding of my post, so let me clarify. I was not suggesting killing Alice to make way for Bob.

No no, I understand that you're not talking about killing people off and replacing them; I was just trying (unsuccessfully) to give the clearest example I could.

And I agree with your consequentialist analysis of indifference between the creation of Alice and Bob if they have the same utility ... unless "playing god events" have negative utility.