AlexanderRM comments on Failed Utopia #4-2 - Less Wrong

Post author: Eliezer_Yudkowsky 21 January 2009 11:04AM


Comment author: Eliezer_Yudkowsky 30 May 2013 05:18:11AM 9 points

Easier to do by just squishing someone, actually.

Comment author: AlexanderRM 08 October 2015 09:24:32PM 0 points

I assume what Will_Pearson meant to say was "would not regret making this wish", which fits with the specification of "I is the entity standing here right now". Basically: if, before finishing/unboxing the AI, you had known exactly what would result from doing so, you would still have built the AI. (And the AI is supposed to find, out of that set of possible worlds, the one you would most like, or something along those lines.) I'm not sure that would rule out every bad outcome, but I think it probably would. Besides the obvious problem that other humans have different preferences from the person building the AI (maybe the AI is ordered to do a similar thing for each human individually), can anyone think of ways this would go badly?
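To make that criterion concrete, here is a minimal sketch in notation of my own choosing (W, U, and w* are not from the original post): let W be the set of worlds the AI could bring about and U the builder's preference ordering over worlds. The "no-regret wish" then amounts to

\[
W_{\mathrm{ok}} = \{\, w \in W \mid \text{the builder, knowing } w \text{ in advance, would still have built the AI} \,\},
\qquad
w^{*} = \operatorname*{arg\,max}_{w \in W_{\mathrm{ok}}} U(w).
\]

That is, the AI optimizes only within the no-regret set, so whatever it picks is an outcome the builder would have endorsed ex ante. This sketch leaves open exactly the worry above: W_ok and U are defined relative to one builder, not to everyone affected.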