Comment author: CalmCanary 10 December 2013 07:56:11PM 0 points

Presumably, if you use E to decide in Newcomb's soda, the decisions of agents not using E are screened off, so you should calculate the relevant probabilities using data only from agents using E. If we assume E does in fact recommend eating the chocolate ice cream, then (given reasonable experimental design) 50% of E agents will have drunk the chocolate soda, 50% the vanilla soda, and 100% will eat the chocolate ice cream. Given that you use E, there is therefore no correlation between your ice-cream choice and receiving the $1,000,000, so you might as well eat the vanilla and collect the $1,000. So E does not actually recommend eating the chocolate ice cream, contradicting the assumption.
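Here's a minimal sketch of that expected-value calculation (the payoffs are the standard ones from the problem statement; treating the soda as independent of the ice-cream choice is just the screening-off assumption above):

    # Expected payoff of each ice-cream policy for an agent running E.
    # Because every agent considered here runs the same algorithm, the
    # 50/50 soda assignment is independent of the ice-cream choice --
    # that independence is the screening-off step.
    P_CHOC_SODA = 0.5

    def expected_payoff(ice_cream):
        # $1,000,000 goes to chocolate-soda drinkers regardless of ice cream;
        # $1,000 goes to anyone who eats the vanilla ice cream.
        soda_money = P_CHOC_SODA * 1_000_000
        bonus = 1_000 if ice_cream == "vanilla" else 0
        return soda_money + bonus

    print(expected_payoff("chocolate"))  # 500000.0
    print(expected_payoff("vanilla"))    # 501000.0

Once the soda is independent of the choice, vanilla strictly dominates by the $1,000 bonus.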

Note that this reasoning does not generalize to Newcomb's problem. If E agents take one box, Omega will predict that they will all take one box, so they all get the payoff and the correlation survives.

Comment author: Yosarian2 17 November 2013 04:09:55PM 0 points

It seems like there's a fairly simple solution to the problem: instead of thinking of utilitarianism as maximizing the sum of the utility of all sentient beings, why not think of it as maximizing the average utility of all sentient beings, with the caveat that it is also unethical to end the life of any currently existing sentient being?

There's no reason that thinking of it as a sum is inherently more rational than thinking of it as an average. Of course, as I said, you have to add the rule that you can't end the life of currently existing intelligent beings just to increase the average happiness, or else you get even more repugnant conclusions. But with that rule, it seems like you get better conclusions overall than if you think of utility as a sum.

For example, I don't see why we have any specific ethical mandate to bring new intelligent life into the world; in fact, I would think that doing so is only ethically justified if the new being's happiness would be at least equal to the average for the world as a whole. (I.e., you shouldn't have kids unless you think you can raise them at least as well as the average human being would.)
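To make that concrete, here's a toy calculation (the utility numbers are invented purely for illustration):

    # Under average utilitarianism, adding a new person raises the
    # average only if their utility is at least the current average.
    population = [6, 8, 10]                          # utilities of existing people
    current_avg = sum(population) / len(population)  # 8.0

    for child in (7, 8, 9):
        new_avg = (sum(population) + child) / (len(population) + 1)
        print(child, new_avg)
    # 7 -> 7.75 (lowers the average), 8 -> 8.0 (neutral), 9 -> 8.25 (raises it)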

Comment author: CalmCanary 17 November 2013 06:23:17PM 0 points

Are you saying we should maximize the average utility of all humans, or of all sentient beings? The first is incredibly parochial, and the second implies that how many children we should have depends on the happiness of aliens on the other side of the universe, which is, at the very least, pretty weird.
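To see how weird, here's a toy computation (all utility figures invented for illustration):

    # Under average utilitarianism over all sentient beings, whether a
    # new child improves things depends on the global average, aliens
    # included.
    humans = [5.0] * 1_000              # human average utility: 5.0
    happy_aliens = [9.0] * 1_000_000    # a vast, very happy alien civilization

    def avg(pop):
        return sum(pop) / len(pop)

    child = 6.0  # a child somewhat happier than the human average
    print(avg(humans))                 # 5.0 -> having this child raises the average
    print(avg(humans + happy_aliens))  # ~8.996 -> the same child now lowers it

The same child goes from obligatory to forbidden depending on facts about beings you will never interact with.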

Not having an ethical mandate to create new life might or might not be a good idea, but average utilitarianism doesn't get you there. It just changes the criteria in bizarre ways.
