
NaomiLong comments on Fake Selfishness - Less Wrong

Post author: Eliezer_Yudkowsky 08 November 2007 02:31AM


Comment author: Stephen 08 November 2007 06:24:31AM 2 points [-]

Taking a cue from some earlier writing by Eli, I suppose one way to give ethical systems a functional test is to imagine having access to a genie. An altruist might ask the genie to maximize the amount of happiness in the universe or something like that, in which case the genie might create a huge number of wireheads. This seems to me like a bad outcome, and would likely be seen as a bad outcome by the altruist who made the request of the genie. A selfish person might say to the genie "create the scenario I most want/approve of." Then it would be impossible for the genie to carry out some horrible scenario the selfish person doesn't want. For this reason selfishness wins some points in my book. If the selfish person wants the desires of others to be met (as many people do), I, as an innocent bystander, might end up with a scenario that I approve of too. (I think the only way to improve upon this is if the person addressing the genie has the desire to want things which they would want if they had an unlimited amount of time and intelligence to think about it. I believe Eli calls this "external reference semantics.")

Comment author: NaomiLong 12 October 2011 08:32:38PM 1 point [-]

It seems like this depends more on the person's ability to optimize. An altruist who recognized this flaw would then be able (assuming s/he had the intelligence and rationality to do so) to calculate the best possible wish to benefit the greatest number of people.

Comment author: ragnarrahl 02 May 2017 02:49:38AM 0 points [-]

Notice how you had to assume the altruist has an extraordinary degree of intelligence and rationality to calculate the best possible wish, while Stephen merely had to assume that the selfishness was of the goodwill-toward-men-if-it-doesn't-cost-me-anything sort? When fewer implausible assumptions are required to render a given ethical philosophy genie-resilient, the philosophy is more genie-resilient.