
Dmytry comments on SMBC comic: poorly programmed average-utility-maximizing AI - Less Wrong Discussion

Post author: Jonathan_Graehl 06 April 2012 07:18AM




Comment author: Dmytry 06 April 2012 05:23:28PM (-2 points)

Well, it seems to me they are trading N dust specks against torture in Omelas. edit: Actually, I don't like Omelas as an example. I think that miserable child would only make the society far worse, with people opting to e.g. kill someone whenever doing so ever so slightly increases their personal expected utility. The child in Omelas puts them straight onto the slippery slope, and making everyone aware of the slippage makes people slide down it for fun and profit.

Our 'civilization', though, is of course a goddamn jungle, and so it's pretty damn bad. It's hard to construct something morally worse from first principles; you have to take our current status quo and modify it to get to something worse (or take our earlier status quo).

Comment author: WrongBot 06 April 2012 07:26:06PM (1 point)

Your edit demonstrates that you really don't get consequentialism at all. Why would making a good tradeoff (one miserable child in exchange for paradise for everyone else) lead to making a terrible one (a tiny bit of happiness for one person in exchange for another person's death)?