
In response to Fake Selfishness
Comment author: Tiiba2 08 November 2007 07:17:57AM 2 points

"An altruist might ask the genie to maximize the amount of happiness in the universe or something like that, in which case the genie might create a huge number of wireheads. This seems to me like a bad outcome, and would likely be seen as a bad outcome by the altruist who made the request of the genie."

Eh? An altruist would voluntarily summon disaster upon the world?

By the way, I have some questions about wireheading. What is it, really? Why is it so repulsive? Is it really so bad? If, when you imagine your brain rewired, you envision something that is too alien to be considered you, or too devoid of creative thought to be considered alive, it's possible that an AI ordered to make you happy would choose some other course of action. It would be illogical to create something that is neither you nor happy.

In response to comment by Tiiba2 on Fake Selfishness
Comment author: ragnarrahl 02 May 2017 02:57:00AM 0 points

" Eh? An altruist would voluntarily summon disaster upon the world?" No, an altruist's good-outcomes are complex enough to be difficult to distinguish from disasters by verbal rules. An altruist has to calculate for 6 billion evaluative agents, an egoist just 1.

" By the way, I have some questions about wireheading. What is it, really? Why is it so repulsive?" Wireheading is more or less where a sufficiently powerful agent told to optimize for happiness optimizes for the emotional referents without the intellectual and teleological human content typically associated with that.

You can perform primitive wireheading right now with various recreational drugs. The fact that almost everyone uses at least a few of the minor ones tells us that wireheading isn't in and of itself absolutely repugnant to everyone. But the fact that only the desperate pursue the stronger forms of wireheading available, and that the results (junkies) are widely looked upon as having entered a failure mode, is good evidence that it's not a path we want sufficiently powerful agents to go down.

" it's possible that an AI ordered to make you happy would choose some other course of action. " When unleashing forces one cannot un-unleash, one wants to deal in probability, not possibility. That's more or less the whole Yudkowskian project in a nutshell.

In response to comment by Stephen on Fake Selfishness
Comment author: NaomiLong 12 October 2011 08:32:38PM 1 point

It seems like this depends more on the person's ability to optimize. The altruistic person who recognized this flaw would then be able (assuming s/he had the intelligence and rationality to do so) to calculate the best possible wish to benefit the greatest number of people.

In response to comment by NaomiLong on Fake Selfishness
Comment author: ragnarrahl 02 May 2017 02:49:38AM 0 points

Notice how you had to assume the altruist has the extraordinary degree of intelligence and rationality needed to calculate the best possible wish, while Stephen merely had to assume that the selfishness was of the goodwill-toward-men-if-it-doesn't-cost-me-anything sort? When the assumptions required to make a given ethical philosophy genie-resilient are less implausible, the philosophy is more genie-resilient.