eli_sennesh comments on 'Effective Altruism' as utilitarian equivocation. - Less Wrong

Post author: Dias 24 November 2013 06:35PM

Comment author: [deleted] 24 November 2013 11:20:59PM 0 points

Yes, but we're talking about abstract ethical theories, so we're already playing as the AI. An AI designed to minimize frustrated preferences will find it easier (that is, a better ratio of value to effort) to wirehead than to kill, unless the frustration reduced by killing an individual outweighs the frustration created in all the individuals who are now mourning, scared, screaming in pain from shrapnel, etc.

Comment author: Viliam_Bur 25 November 2013 04:27:08PM 1 point

Step 1: Wirehead all the people.

Step 2A: Continue caring about them.

Step 2B: Kill them.

How exactly could option 2A be easier than 2B? No one is mourning, because everyone alive is wireheaded. And surely killing someone is less work than keeping them alive.

Comment author: komponisto 25 November 2013 02:44:46PM 1 point

we're already playing as the AI

Doesn't matter. If humans can build an AI, an AI can build an AI as well.

Comment author: [deleted] 25 November 2013 02:59:40PM 0 points

Yes, but the point is not to speculate about AI, it's to speculate about the particular ethical system in question, that being negative utilitarianism. You can assume that we're modelling an agent who faithfully implements negative utilitarianism, not some random paper-clipper.

Comment author: komponisto 25 November 2013 03:30:48PM 0 points

Yes, and my claim is that, given the amount of suffering in the world, negative utilitarianism says that building a paperclipper is a good thing to do (provided it's sufficiently easy).

Comment author: [deleted] 25 November 2013 04:04:02PM -1 points

OK, again, let's assume we're already "playing as the AI". We are already possessed of superintelligence. Whatever we decide is negutilitarian-good, we can feasibly do.

Given that, we can either wirehead everyone and eliminate their suffering forever, or rewrite ourselves as a paper-clipper and kill them.

Which one of these options do you think is negutilitarian!better?

Comment author: komponisto 25 November 2013 04:28:24PM 0 points

Which one of these options do you think is negutilitarian!better?

If the first is easier (i.e. costs less utility to implement), or if they're equally easy to implement, the first.

If the second is easier, it would depend on how much easier it was, and the answer could well be the second.

A superintelligence is still subject to tradeoffs.

But even if it turns out that wireheading is better on net than paperclipping, (a) that's not an outcome I'm happy with, and (b) paperclipping is still better (according to negative utilitarianism) than the status quo. This is more than enough to reject negative utilitarianism.

Comment author: [deleted] 25 November 2013 07:00:04PM 0 points

Neither of us is happy with wireheading. Still, it's better to be accurate about why we're rejecting negutilitarianism.

Comment author: komponisto 25 November 2013 07:16:24PM 0 points

The fact that it prefers paperclipping to the status quo is enough for me (and consistent with what I originally wrote).