
ragnarrahl comments on Fake Selfishness - lesswrong.com

Post author: Eliezer_Yudkowsky 08 November 2007 02:31AM


Comment author: ragnarrahl 02 May 2017 02:57:00AM 0 points

" Eh? An altruist would voluntarily summon disaster upon the world?" No, an altruist's good-outcomes are complex enough to be difficult to distinguish from disasters by verbal rules. An altruist has to calculate for 6 billion evaluative agents, an egoist just 1.

" By the way, I have some questions about wireheading. What is it, really? Why is it so repulsive?" Wireheading is more or less where a sufficiently powerful agent told to optimize for happiness optimizes for the emotional referents without the intellectual and teleological human content typically associated with that.

You can perform primitive wireheading right now with various recreational drugs. The fact that almost everyone uses at least a few of the minor ones tells us that wireheading is not, in and of itself, absolutely repugnant to everyone. But the fact that only the desperate pursue the more major forms of wireheading available, and that the results (junkies) are widely regarded as having entered a failure mode, is good evidence that it is not a path we want sufficiently powerful agents to go down.
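A minimal sketch of this failure mode (the class and attribute names are hypothetical, not from the original comment): an optimizer scored on a measured happiness signal can raise the signal directly without touching the underlying state the signal was meant to track.

```python
# Hypothetical sketch: wireheading as optimizing the measured signal
# rather than the state the signal was meant to track.

class World:
    def __init__(self):
        self.wellbeing = 0.0         # what we actually care about
        self.happiness_signal = 0.0  # what the agent is scored on

    def improve_lives(self):
        # The intended route: raise wellbeing; the signal follows.
        self.wellbeing += 1.0
        self.happiness_signal = self.wellbeing

    def wirehead(self):
        # The shortcut: set the signal directly; wellbeing is untouched.
        self.happiness_signal = 10**9

w = World()
w.wirehead()
# The score is now enormous, yet nothing it was supposed to measure changed.
```

The recreational-drug analogy maps onto this directly: the drug raises the felt signal while leaving the circumstances the signal evolved to track unchanged.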

" it's possible that an AI ordered to make you happy would choose some other course of action. " When unleashing forces one cannot un-unleash, one wants to deal in probability, not possibility. That's more or less the whole Yudkowskian project in a nutshell.