LucasSloan comments on Welcome to Heaven - Less Wrong

23 points | Post author: denisbider | 25 January 2010 11:22PM


Comment author: Wei_Dai 26 January 2010 01:02:21AM 13 points

denis, most utilitarians here are preference utilitarians, who believe in satisfying people's preferences, rather than maximizing happiness or pleasure.

To those who say they don't want to be wireheaded, how do you really know that, when you haven't tried wireheading? An FAI might reason the same way, and try to extrapolate what your preferences would be if you knew what it felt like to be wireheaded, in which case it might conclude that your true preferences are in favor of being wireheaded.

Comment author: ciphergoth 26 January 2010 01:06:06AM 6 points

To those who say they don't want to be wireheaded, how do you really know that, when you haven't tried wireheading?

But it's not because I think there's some downside to the experience that I don't want it. The experience is as good as it can possibly be. I want to continue to be someone who thinks things and does stuff, even at a cost in happiness.

Comment author: byrnema 26 January 2010 01:26:39AM 5 points

I want to continue to be someone who thinks things and does stuff, even at a cost in happiness.

The FAI can make you feel as though you "think things and do stuff", just by changing your preferences. I don't think any reason beginning with "I want" is going to work, because your preferences aren't fixed or immutable in this hypothetical.

Anyway, can you explain why you are attached to your preferences? The claim that "it's better to value this than to value that" is incoherent, and the FAI will see that. The FAI will have no objective, logical reason to distinguish between the values you currently have and are attached to and the values you could have and be attached to, and might as well modify you as modify the universe. (Because the universe has exactly the same value either way.)

Comment author: LucasSloan 26 January 2010 01:30:48AM 3 points

If every possible goal is considered to have the same value (by what standard?), then the "FAI" is not friendly. If preferences don't matter, then why does their not mattering matter? Why change one's utility function at all, if anything is as good as anything else?

Comment author: byrnema 26 January 2010 02:21:56AM 2 points

Well, I understand I owe money to the Singularity Institute now for speculating on what the output of the CEV would be. (Dire Warnings #3)

Comment author: timtyler 26 January 2010 10:22:37AM 2 points

That page said:

"None may argue on the SL4 mailing list about the output of CEV".

A different place, with different rules.