Ivan_Tishchenko comments on Not for the Sake of Pleasure Alone - Less Wrong

36 Post author: lukeprog 11 June 2011 11:21PM


Comment author: Randaly 12 June 2011 01:52:29AM *  3 points [-]

Disclaimer: I don't think that maximizing pleasure is an FAI solution; however, I didn't find your arguments against it convincing.

With regards to the experience machine, further study has found that people's responses are generally due to status quo bias; a more recent study found that a slight majority of people would prefer to remain in the simulation.

With regards to the distinction between desire and pleasure: well, yes, but you seem to be assuming that our desires, rather than pleasure, are what ought to be satisfied or maximized; I would guess that many of the people you're talking with take pleasure to be a terminal value (or, at least, I used to, and I'm generalizing from one example and all that).

Comment author: Ivan_Tishchenko 12 June 2011 04:30:57AM *  1 point [-]

more recent study found that a slight majority of people would prefer to remain in the simulation.

I believe lukeprog was talking about what people think before they get wireheaded. It's very probable that once one gets hooked up to that machine, one changes one's mind, based on the new experience.

It was certainly true for the rats that could not stop pressing the 'pleasure' lever, and died of starvation.

This is also why people have that status quo bias: no one wants to starve to death, even with a 'pleasure' button.

Comment author: teageegeepea 13 June 2011 02:10:14PM 1 point [-]

Isn't there a rule of Bayesianism that you shouldn't be able to anticipate changing your mind in a predictable manner, but rather you should just update right now?

Perhaps, rather than asking whether you would enter or leave the simulation, it might be better to start with a person inside it, remove them from it, and then ask whether they want to go back.

Comment author: Vaniver 13 June 2011 02:34:18PM 5 points [-]

Isn't there a rule of Bayesianism that you shouldn't be able to anticipate changing your mind in a predictable manner, but rather you should just update right now?

Changing your mind based on evidence and changing it through experience are different things. I am confident that if I eat a meal, my hunger will decrease. Does that mean I should update my hunger downward now, without eating?

I can believe "If I wireheaded I would want to continue wireheading" and "I currently don't want to wirehead" without contradiction and without much pressure to want to wirehead.

Comment author: AmagicalFishy 17 June 2011 01:25:02PM *  0 points [-]

Changing your mind based on evidence and changing it through experience are different things. I am confident that if I eat a meal, my hunger will decrease. Does that mean I should update my hunger downward now, without eating?

One's hunger isn't really an idea of the mind that one can change, yeah? I'd say that "changing your mind" (at least regarding particular ideas and beliefs) is different from "changing a body's immediate reaction to a physical state" (like hunger, the reaction to lacking nourishment).

Comment author: Will_Sawin 17 June 2011 03:39:02PM 2 points [-]

If you conducted brain surgery on me, I might want different things afterward. I should not want those things now; indeed, I could not, since there are multiple possible surgeries.

"Wireheading" explicitly refers to a type of brain surgery, involving sticking wires in one's head. Some versions of it may not be surgical, but the point stands.

Comment author: barrkel 16 June 2011 06:21:00AM 0 points [-]

I think we're talking about an experience machine, not a pleasure button.

Comment author: Zetetic 12 June 2011 05:46:18AM *  0 points [-]

This is also why people have that status quo bias -- no one wants to die of starving, even with 'pleasure' button.

It was my understanding that the hypothetical scenario ruled this out (hence the abnormally long lifespan).

In any event, an FAI would want to maximize its utility, so if its utility were contingent on the total amount of pleasure, it seems probable that it would want to create as many humans as possible and keep them living as long as possible in a wirehead simulation.