shminux comments on A (small) critique of total utilitarianism - Less Wrong

36 points · Post author: Stuart_Armstrong 26 June 2012 12:36PM




Comment author: shminux 25 June 2012 05:08:47PM 3 points

In total utilitarianism, it is a morally neutral act to kill someone (in a painless and unexpected manner) and create/give birth to another being of comparable happiness. In fact, if one can kill a billion people to create a billion and one, one is morally compelled to do so.

I dare say that no self-professed "total utilitarian" actually aliefs this.
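The arithmetic behind the claim above can be made explicit. This is a minimal sketch, not anyone's endorsed model: the per-person utility value is an arbitrary illustrative number, and total utilitarianism is taken simply as "rank worlds by the sum of individual utilities."

```python
# Total utilitarianism ranks worlds by the sum of individual utilities.
# With equally happy people, the sum is just happiness * head-count.
happiness = 10                      # hypothetical per-person utility
before = happiness * 1_000_000_000  # a billion equally happy people
after = happiness * 1_000_000_001   # replaced by a billion and one

# The replacement world sums strictly higher, so on this view one is
# not merely permitted but compelled to make the swap.
print(after > before)  # True
```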

Comment author: Lukas_Gloor 25 June 2012 08:02:38PM *  3 points

I know total utilitarians who'd have no problem with that. Imagine simulated minds instead of carbon-based ones. If you can just imagine shutting one simulation off and turning on another one, this can eliminate some of our intuitive aversions to killing and maybe it will make the conclusion less counterintuitive. Personally I'm not a total utilitarian, but I don't think that's a particularly problematic aspect of it.

My problem with total hedonistic utilitarianism is the following: Imagine a planet full of beings living in terrible suffering. You have the choice to either euthanize them all (or just make them happy), or let them go on living forever, while also creating a sufficiently huge number of beings with lives barely worth living somewhere else. Now that I find unacceptable. I don't think you do anything good by bringing a happy being into existence.

Comment author: Dolores1984 26 June 2012 11:56:46PM 3 points

If you can just imagine shutting one simulation off and turning on another one, this can eliminate some of our intuitive aversions to killing and maybe it will make the conclusion less counterintuitive. Personally I'm not a total utilitarian, but I don't think that's a particularly problematic aspect of it.

As someone who plans on uploading eventually, if the technology comes around... no. Still feels like murder.

Comment author: Will_Sawin 26 June 2012 10:16:18PM 2 points

This is problematic. If bringing a happy being into existence doesn't do anything good, and bringing a neutral being into existence doesn't do anything bad, what do you do when you switch a planned neutral being for a planned happy being? For instance, you set aside some money to fund your unborn child's education at the College of Actually Useful Skills.
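Will_Sawin's puzzle can be put in sketch form. This is a hedged reading of the view he is probing, not Lukas_Gloor's actual position (which is qualified below): if creating a happy being does no good and creating a neutral being does no harm, both planned children score zero, so the view cannot recommend the switch. All numbers are illustrative.

```python
# A sketch of the view under critique: creation is never good (happy being)
# and never bad (neutral being), so every planned creation scores zero.
def creation_value(lifetime_welfare):
    return 0  # creation counts for nothing, per the stated view

happy_child = 50    # e.g. education funded at the College of Actually Useful Skills
neutral_child = 0   # no fund set aside

# The view is indifferent between the two plans, which is the problem:
# it cannot say that switching the neutral being for the happy one is better.
print(creation_value(happy_child) == creation_value(neutral_child))  # True
```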

Comment author: Lukas_Gloor 26 June 2012 10:36:03PM *  0 points

Good catch, I'm well aware of that. I didn't say that I think bringing a neutral being into existence is neutral. If the neutral being's life contains suffering, then the suffering counts negatively. Prior-existence views seem not to work without the inconsistency you pointed out. The only consistent alternative to total utilitarianism is, as I see it currently, negative utilitarianism. Which has its own repugnant conclusions (e.g. anti-natalism), but for several reasons I find those easier to accept.

Comment author: Stuart_Armstrong 27 June 2012 08:52:28AM 1 point

The only consistent alternative to total utilitarianism is, as I see it currently, negative utilitarianism

As I said, any preferences that can be cast into utility function form are consistent. You seem to be adding extra requirements for this "consistency".

Comment author: Lukas_Gloor 27 June 2012 11:57:18AM *  -1 points

I should qualify my statement. I was talking only about the common varieties of utilitarianism, and I may well have omitted consistent variants that are unpopular or weird (e.g. something like negative average preference-utilitarianism). Basically my point was that "hybrid views" like prior-existence (or "critical level" negative utilitarianism) run into contradictions. Most forms of average utilitarianism aren't contradictory, but they imply an obvious absurdity: A world with one being in maximum suffering would be [edit:] worse than a world with a billion beings in suffering that's just slightly less awful.
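The averagist absurdity in the last sentence is a one-line calculation. A minimal sketch, with illustrative utility numbers and the population scaled down from a billion for the example:

```python
from statistics import mean

# Average utilitarianism ranks worlds by mean utility, ignoring head-count.
n = 1_000                 # stands in for a billion (scaled down for the sketch)
world_a = [-100]          # one being in maximum suffering
world_b = [-99] * n       # many beings whose suffering is just slightly less awful

# World A has the lower mean, so averagism calls it the worse world,
# despite world B containing vastly more suffering in total.
print(mean(world_a) < mean(world_b))  # True
print(sum(world_a) > sum(world_b))    # True: A has far less total suffering
```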

Comment author: APMason 27 June 2012 01:07:58PM 1 point

That last sentence didn't make sense to me when I first looked at this. Think you must mean "worse", not "better".

Comment author: Lukas_Gloor 27 June 2012 02:11:47PM -1 points

Indeed, thanks.

Comment author: Stuart_Armstrong 27 June 2012 12:28:29PM 1 point

I'm still vague on what you mean by "contradictions".

Comment author: Lukas_Gloor 27 June 2012 02:10:10PM 0 points

Not in the formal sense. I meant, for instance, what Will_Sawin pointed out above: a neutral life (a lot of suffering and a lot of happiness) being equally worthy of creating as a happy one (mainly just happiness, very little suffering). Or for "critical levels" (which also relate to the infamous dust specks), see section VI of this paper, where you get different results depending on how you start aggregating. And Peter Singer's prior-existence view seems to contain a "contradiction" (maybe "absurdity" is better) as well, having to do with replaceability, but that would take me a while to explain. It's not quite a contradiction in the sense that the theory states "do X and not-X", but it's obvious enough that something doesn't add up. I hope that led to some clarification; sorry for my terminology.

Comment author: Will_Sawin 26 June 2012 10:38:55PM 1 point

Ah, I see. Anti-natalism is certainly consistent, though I find it even more repugnant.

Comment author: jkaufman 26 June 2012 03:06:48AM 0 points

Assuming perfection in the methods, ending N lives and replacing them with N+1 equally happy lives doesn't bother me. Death isn't positive or negative except insofar as it removes the chance of future joy/suffering for the one killed and saddens those left behind.

With physical humans you won't have perfect methods, and any attempt to apply this will end in tragedy. But with AIs (emulated brains or fully artificial) it might well apply.