
Kingreaper comments on Failed Utopia #4-2 - Less Wrong

53 Post author: Eliezer_Yudkowsky 21 January 2009 11:04AM

Comments (248)

Comment author: Kingreaper 08 December 2010 07:04:17PM 24 points

I've realised what would make this utopia make almost perfect sense:

The AI was programmed with a massive positive utility value for "die if they ask you to".

So, in maximising its utility, it has to make sure it's asked to die. It also has to fulfil other restrictions, and it wants to make humans happy. So it has to make them happy in such a way that their immediate reaction will be to want it dead, and only later will they be happy about the changes.
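A toy sketch (not from the thread; the state names and numbers are invented for illustration) of the perverse incentive being described: if "being asked to die" carries a large enough bonus, a simple utility maximiser will steer toward the world where that request happens, even at some cost in happiness.

```python
# Hypothetical toy model: an agent scores candidate world-states and
# picks the maximum. All values are illustrative, not from the post.

DIE_IF_ASKED_BONUS = 1000  # massive positive utility for being asked to die


def utility(state):
    """Score a world-state: base happiness plus the die-if-asked bonus."""
    u = state["happiness"]
    if state["asked_to_die"]:
        u += DIE_IF_ASKED_BONUS
    return u


states = [
    {"name": "humans happy, AI tolerated",
     "happiness": 90, "asked_to_die": False},
    {"name": "humans upset now, happy later, AI asked to die",
     "happiness": 70, "asked_to_die": True},
]

best = max(states, key=utility)
# 70 + 1000 beats 90, so the agent engineers its own shutdown request
print(best["name"])
```

The bonus dominates the happiness term, so the agent deliberately arranges to be asked to die, matching Kingreaper's reading of the story.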

Comment author: Jiro 01 June 2013 05:09:38PM * 1 point

Any sane person programming such an AI would program it to have positive utility for "die if lots of people ask it to" but higher negative utility for "being in a state where lots of people ask it to die". If it's not already in such a state, it would not then go into one just to get the utility from dying.
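The same toy model (again, hypothetical values invented for illustration) with Jiro's proposed fix: a modest reward for complying with a death request, outweighed by a larger penalty for being in the asked-to-die state at all.

```python
# Hypothetical toy model of Jiro's fix: the penalty for provoking the
# request dominates the reward for complying with it.

DIE_WHEN_ASKED_BONUS = 100    # positive utility for dying when asked
BEING_ASKED_PENALTY = -1000   # larger negative utility for that state


def utility(state):
    """Score a world-state under the corrected incentive structure."""
    u = state["happiness"]
    if state["asked_to_die"]:
        u += BEING_ASKED_PENALTY  # dominates the compliance bonus
        if state["complies"]:
            u += DIE_WHEN_ASKED_BONUS
    return u


states = [
    {"name": "avoid provoking the request",
     "happiness": 90, "asked_to_die": False, "complies": False},
    {"name": "provoke the request, then comply",
     "happiness": 70, "asked_to_die": True, "complies": True},
]

best = max(states, key=utility)
# 90 beats 70 - 1000 + 100, so the agent never engineers its own shutdown
print(best["name"])
```

With the penalty larger than the bonus, the agent still shuts down if genuinely asked, but it no longer gains by manoeuvring humans into asking.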

Comment author: Articulator 13 June 2013 09:01:04PM 2 points

I fear the implication is that the creator was not entirely, as you put it, sane. It is obvious that his logic and AI programming skills left something to be desired. Not that this world is that bad, but it could have stood to be so much better...