Wei_Dai comments on Welcome to Heaven - Less Wrong

Post author: denisbider 25 January 2010 11:22PM




Comment author: Wei_Dai 26 January 2010 03:37:55AM 11 points

The experience is as good as can possibly be.

You don't know how good "as good as can possibly be" is yet.

I want to continue to be someone who thinks things and does stuff, even at a cost in happiness.

But surely the cost in happiness that you're willing to accept isn't infinite. For example, presumably you're not willing to be tortured for a year in exchange for a year of thinking and doing stuff. Someone who has never experienced much pain might think that torture is no big deal, and accept this exchange, but he would be mistaken, right?

How do you know you're not similarly mistaken about wireheading?

Comment author: Kaj_Sotala 26 January 2010 10:11:34AM 7 points

How do you know you're not similarly mistaken about wireheading?

I'm a bit skeptical of how well the term "mistaken" applies when we're talking about technology that would let us modify our minds to an arbitrary degree. One could easily imagine a mind that (say) wants to be wireheaded for as long as the wireheading goes on, but ceases to want it the moment the wireheading stops. (That is, each version prefers its current state, wireheaded or not, and wouldn't want to change it.) Can we really say that one of them is "mistaken", or wouldn't it be more accurate to say that they simply have different preferences?

EDIT: Expanded this to a top-level post.

Comment author: CannibalSmith 26 January 2010 10:01:40AM 1 point

The maximum amount of pleasure is finite too.

Comment author: ciphergoth 27 January 2010 08:40:49AM 0 points

Interesting problem! Perhaps I have a maximum utility to happiness, which increasing happiness approaches asymptotically?
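One way to make this idea concrete (an illustrative sketch only; ciphergoth doesn't specify a functional form, and the choice of an exponential and the scale parameter k are assumptions) is a saturating utility function over happiness:

```latex
% Hypothetical bounded utility: U(h) is utility as a function of happiness h.
% U_max is the asymptotic ceiling; k is an assumed scale parameter.
U(h) = U_{\max}\,\bigl(1 - e^{-h/k}\bigr),
\qquad \lim_{h \to \infty} U(h) = U_{\max}.
```

Under such a function, once h is a few multiples of k, further increases in happiness add almost no utility, so even unbounded happiness would not dominate other bounded values in a trade-off.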

Comment author: Wei_Dai 30 January 2010 03:00:19AM 0 points

Perhaps I have a maximum utility to happiness, which increasing happiness approaches asymptotically?

Yes, I think that's quite possible, but I don't know whether it's actually the case. A big question I have is whether any of our values scales up to the size of the universe, in other words, whether any of them avoids asymptotically approaching an upper bound well before we've used up the resources of the universe. See also my latest post http://lesswrong.com/lw/1oj/complexity_of_value_complexity_of_outcome/ where I discuss some related ideas.