Comments on Not for the Sake of Happiness (Alone) - Less Wrong

48 Post author: Eliezer_Yudkowsky 22 November 2007 03:19AM


Comments (94)



Comment author: sjmp 15 May 2013 08:51:45PM 0 points

Far be it from me to tell anyone what a maximally happy existence is. I'm sure an AI with a full understanding of human physiology can figure that out.

I would venture to guess that it would not be a constant stream of events the person undergoing the simulation would write down on paper under the heading "happy stuff." Some minor setbacks might be included for perspective, maybe even a big event like cancer that the person in the simulation would manage to overcome?

Or maybe it's the person under simulation sitting in empty white space while the AI maximally stimulates the pleasure centers of the brain until heat death of the universe.

Comment author: [deleted] 15 May 2013 09:10:15PM 0 points

This suggestion might run into trouble if the 'maximally happy state' has necessary conditions which exclude being in a simulation. Suppose being maximally happy meant, I dunno, exploring and thinking about the universe and sharing their lives with other people. Even if you could simulate this perfectly, just the fact that it was simulated would undermine the happiness of the participants. It's at least not obviously true that you're happy if you think you are.

Comment author: sjmp 15 May 2013 09:41:26PM 2 points

I don't really see how that could be the case. For the people undergoing the simulation, everything would be just as real as this current moment is to you and me. How can there be a condition for a maximally happy state that excludes being in a simulation, when this ultra-advanced AI is in fact giving you the exact same nerve signals you would get if you experienced the events of the simulation in real life?