DanielLC comments on Stupid Questions Thread - January 2014 - Less Wrong

Post author: RomeoStevens 13 January 2014 02:31AM


Comment author: blacktrance 15 January 2014 08:39:23PM 0 points

Average utilitarianism seems more plausible than total utilitarianism, as it avoids the repugnant conclusion. But what do average utilitarians have to say about animal welfare? Suppose a chicken's maximum capacity for pleasure/preference satisfaction is lower than a human's. Does this mean that creating maximally happy chickens could be less moral than creating non-maximally happy humans?

Comment author: DanielLC 16 January 2014 12:53:51AM 0 points

My intuition is that chickens are less sentient, and that is sort of like thinking slower. Perhaps a year of a chicken's life is equivalent to a day of a human's. A day of a chicken's life adds less to the numerator than a day of a human's, but it also adds less to the denominator.
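This intuition can be sketched numerically. In the toy calculation below (all weights and utility numbers are invented for illustration), a chicken's experience is discounted to 1/365 of a human's, so happy chickens add little to the numerator of the average, but also little to the denominator:

```python
# Hypothetical sentience weights: a year of a chicken's experience
# counts as one day (1/365) of a human's, per the intuition above.
def average_utility(beings):
    """Each being is (utility per weighted moment, lifespan, weight).
    Weighted moments enter both the numerator and the denominator."""
    num = sum(u * span * w for u, span, w in beings)
    den = sum(span * w for _, span, w in beings)
    return num / den

humans = [(10, 80, 1.0)] * 5          # 5 humans, utility 10 per moment
chickens = [(6, 2, 1 / 365)] * 100    # 100 chickens, lower capacity

# Adding happy-but-less-sentient chickens barely moves the average,
# because they contribute little to numerator *and* denominator.
print(average_utility(humans))             # 10.0
print(average_utility(humans + chickens))  # slightly below 10.0
```

On this picture, creating happy chickens is a small positive or negative nudge to the average depending on how their per-moment happiness compares with the existing population's, rather than a large dilution.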

Comment author: Dan_Weinand 16 January 2014 07:13:53AM 1 point

Maybe I'm way off base here, but it seems like average utilitarianism leads to a disturbing possibility itself: that one super-happy person is considered a superior outcome to 1,000,000,000,000 pretty darn happy people. Please explain how, if at all, I'm misinterpreting average utilitarianism.
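The contrast can be made concrete with a toy calculation (the utility numbers are invented; populations are represented as counts to keep it cheap):

```python
# (utility per person, population) for the two worlds in the comment.
super_happy = (100, 1)            # one super-happy person
pretty_happy = (90, 10**12)       # a trillion pretty darn happy people

def total(u, n):
    return u * n

def average(u, n):
    return u * n / n              # equals u when everyone is identical

# Total utilitarianism prefers the large population;
# average utilitarianism prefers the lone super-happy person.
print(total(*super_happy), total(*pretty_happy))      # 100 vs 90000000000000
print(average(*super_happy), average(*pretty_happy))  # 100.0 vs 90.0
```

The two views simply rank these worlds in opposite orders, which is the disagreement the rest of the thread turns on.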

Comment author: DanielLC 17 January 2014 01:53:35AM 0 points

I think you just have different intuitions than average utilitarians. I have talked to someone who saw no reason why having a higher population is good in and of itself.

I am somewhat swayed by an anthropic argument. If you live in the first universe, you'll be super happy. If you live in the second, you'll be pretty darn happy. Thus, the first universe is better.

Comment author: DanArmak 18 January 2014 12:32:54PM 0 points

On the other hand, you often need to consider that you're less likely to live in one universe than in another. For instance, if you could make 10% of the population vastly happier by killing the other 90%, you need to factor in the 10% chance of survival.

Comment author: DanielLC 19 January 2014 03:23:36AM 0 points

I don't buy into that theory of identity. The way the universe works, observer-moments are arranged in lines. There's no reason this is necessary in principle. It could be a web where minds split and merge, or a bunch of Boltzmann brains that appear and vanish after a nanosecond. You are just a random one of the observer-moments. And you have to be one that actually exists, so there's a 100% chance of survival.

If you did buy into that theory, that would result in a warped form of average utilitarianism, where you want to maximize the average value of the total utility of a given person.

Comment author: DanArmak 19 January 2014 11:04:17AM 0 points

You are just a random one of the observer-moments.

I don't think the word "you" is doing any work in that sentence.

Personal identity may not exist as an ontological feature on the low level of physical reality, but it does exist on the high level of our experience, and I think it's meaningful to talk about identities (lines of observer-moments) which may die (the line ends).

If you did buy into that theory, that would result in a warped form of average utilitarianism, where you want to maximize the average value of the total utility of a given person.

I'm not sure I understand what you mean (I don't endorse average utilitarianism in any case). Do you mean that I might want to maximize the average of the utilities of my possible time-lines (due to imperfect knowledge), weighted by the probability of those time-lines? Isn't that just maximizing expected utility?

Comment author: DanielLC 19 January 2014 10:10:23PM 0 points

Personal identity may not exist as an ontological feature on the low level of physical reality, but it does exist in the high-level of our experience and I think it's meaningful to talk about identities (lines of observer-moments) which may die (the line ends).

I don't think that's relevant in this context. You are a random observer. You live.

I suppose if you consider it intrinsically important to be part of a long line of observers, then that matters. But if you just think that you're not going to have as much total happiness because you don't live as long, then either you're fundamentally mistaken or the argument I just gave is.

I'm not sure I understand what you mean

If "you" are a random person, and this includes the entire lifespan, then the best universe would be one where the average person has a long and happy life, but adding more people wouldn't help.
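The difference between treating "you" as a random whole-life person versus a random observer-moment can be sketched with invented numbers. Averaging over whole-life totals rewards long happy lives but is unmoved by adding more identical people; averaging over observer-moments doesn't care how moments are grouped into lives:

```python
# Each person is (utility per moment, number of moments lived).
def per_person_average(people):
    """Average of each person's lifetime total utility
    (the 'warped' average utilitarianism described above)."""
    return sum(u * m for u, m in people) / len(people)

def per_moment_average(people):
    """Average utility across all observer-moments."""
    return sum(u * m for u, m in people) / sum(m for _, m in people)

short_lives = [(5, 10)] * 2    # two short lives
long_lives = [(5, 100)] * 2    # two long lives at the same happiness

# Per-person averaging prefers the longer lives (50 vs 500);
# per-moment averaging sees no difference (5.0 vs 5.0).
print(per_person_average(short_lives), per_person_average(long_lives))
print(per_moment_average(short_lives), per_moment_average(long_lives))
```

Under either measure, duplicating the population leaves the average unchanged, which is why adding more people "wouldn't help" on this view.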

weighted by the probability of those time-lines?

If you're saying that it's more likely to be a person who has a longer life, then I guess our "different" views on identity probably are just semantics, and you end up with the form of average utilitarianism I was originally suggesting.

Comment author: DanArmak 20 January 2014 09:15:57AM 0 points

You are a random observer.

That's very different from saying "you are a random observer-moment" as you did before.

I suppose if you consider it intrinsically important to be part of a long line of observers, then that matters.

I consider it intrinsically important to have a personal future. If I am now a specific observer - I've already observed my present - then I can drastically narrow down my anticipated future observations. I don't expect to be any future observer existing in the universe (or even near me) with equal probability; I expect to be one of the possible future observers who have me in their observer-line past. This seems necessary to accept induction and to reason at all.

If "you" are a random person, and this includes the entire lifespan, then the best universe would be one where the average person has a long and happy life, but adding more people wouldn't help.

But in the actual universe, when making decisions that influence the future of the universe, I do not treat myself as a random person; I know which person I am. I know about the Rawlsian veil, but I don't think we should have decision theories that don't allow us to optimize the utility of observers similar to myself (or belonging to some other class), rather than of all observers in the universe. We should be allowed to say that even if the universe is full of paperclippers who outnumber us, we can just decide to ignore their utilities and still have a consistent utilitarian system.

(Also, it would be very hard to define a commensurable 'utility function' for all 'observers', rather than just for all humans and similar intelligences. And your measure function across observers - does a lizard have as many observer-moments as a human? - may capture this intuition anyway.)

I'm not sure this is in disagreement with you. So I still feel confused about something, but it may just be a misunderstanding of your particular phrasing or something.

If you're saying that it's more likely to be a person who has a longer life,

I didn't intend that. I think I should taboo the verb "to be" in "to be a person", and instead talk about decision theories which produce optimal behavior - and then in some situations you may reason like that.

Comment author: DanielLC 20 January 2014 07:58:45PM 0 points

That's very different from saying "you are a random observer-moment" as you did before.

I meant observer-moment. That's what I think of when I think of the word "observer", so it's easy for me to make that mistake.

If I am now a specific observer - I've already observed my present - then I can drastically narrow down my anticipated future observations.

If present!you anticipates something, it makes life easy for future!you. It's useful. I don't see how it applies to anthropics, though. Yous aren't in a different reference class than other people. Even if they were, it can't just be future!yous that are one reference class. That would mean that whether or not two yous are in the same reference class depends on the point of reference. First!you would say they all have the same reference class. Last!you would say he's his own reference class.

I do not treat myself as a random person; I know which person I am.

I think you do if you use UDT or TDT.