
Comment author: [deleted] 15 May 2013 09:10:15PM 0 points [-]

This suggestion might run into trouble if the 'maximally happy state' turns out to have necessary conditions which exclude being in a simulation. Suppose being maximally happy meant, I dunno, exploring and thinking about the universe and living their lives with other people. Even if you could simulate this perfectly, just the fact that it was simulated would undermine the happiness of the participants. It's at least not obviously true that you're happy if you think you are.

In response to comment by [deleted] on Not for the Sake of Happiness (Alone)
Comment author: sjmp 15 May 2013 09:41:26PM 2 points [-]

I don't really see how that could be the case. For the people undergoing the simulation, everything would be just as real as this current moment is to you and me. How can there be a condition for a maximally happy state that excludes being in a simulation, when this ultra-advanced AI is in fact giving you the exact same nerve signals that you would get if you experienced the things in the simulation in real life?

Comment author: TheOtherDave 15 May 2013 08:37:01PM 0 points [-]

Can you say more about what you anticipate this maximally happy existence would look like?

Comment author: sjmp 15 May 2013 08:51:45PM *  0 points [-]

Far be it from me to tell anyone what a maximally happy existence is. I'm sure an AI with a full understanding of human physiology can figure that out.

I would venture to guess that it would not just be a constant stream of events that the person undergoing the simulation would write down on paper under the title "happy stuff"; some minor setbacks might be included for perspective, maybe even a big event like cancer which the person in the simulation would manage to overcome?

Or maybe it's the person in the simulation sitting in an empty white space while the AI maximally stimulates the pleasure centers of the brain until the heat death of the universe.

Comment author: sjmp 15 May 2013 08:38:06PM 1 point [-]

Hey guys, I'm a person who on this site goes by the name sjmp! I found lesswrong about a year ago (I actually don't remember how I found it; I read for a bit back then, but I started seriously reading through the sequences a few months ago) and I can honestly say this is the single best website I've ever found. Rather than make a long post on why lesswrong and rationality are awesome, I'd like to offer one small anecdote on what lesswrong has done for me:

When I first came to the site, I already had an understanding of the "if a tree falls in a forest..." dispute; the question "But did it really make a sound?" did not linger in my mind, and there was nothing mysterious or unclear to me about the whole affair. Yet I could most definitely remember a time long ago when I was extremely puzzled by it. I thought to myself, how silly that I was ever puzzled by something like that. What did puzzle me was free will. The question seemed Very Mysterious And Deep to me.

Can you guess what I'm thinking now? How silly that I was ever puzzled by the question "Do we have free will?"... Reducing free will surely is not as easy as reducing the dispute about falling trees, but it does seem pretty obvious in hindsight ;)

Comment author: sjmp 15 May 2013 07:56:07PM 2 points [-]

Taking it a bit further than a pill: if we could trust an AI to put the whole of humanity into a Matrix-like state, and to keep humanity alive in that state longer than humanity itself could survive living in the real world, while running a simulation of life with maximum happiness in each brain until it ran out of energy, would you advocate it? I know I would, and I don't really see any reason not to.

Comment author: sjmp 15 May 2013 02:03:36PM 0 points [-]

I was going to say something about moral progress being changes in society that result in a global increase in happiness, but I ran into some problems pretty fast following that thought. Hell, if we could poll every single living being from the 11th century and the 21st century and ask them to rate their happiness from 1 to 10, why do I have a feeling we'd end up with the same average in both cases?

If you gave me an extensional definition of moral progress by listing free speech, the end of slavery, and democracy, and then asked me for an intensional definition, I'd say moral progress is a global and local increase in co-operation between humans. That does not necessarily mean an increase in global happiness.

Comment author: sjmp 23 April 2013 12:34:07PM -2 points [-]

So you are saying that the statement "0 and 1 are not probabilities" has a probability of 1?

Comment author: sjmp 20 April 2013 03:59:04PM 1 point [-]

I suppose I could talk about how I've never had any proof of the existence of photons other than what I've read in books and what I've been told by teachers. How I am in fact taking science on authority rather than on first-hand experimental proof. Sure, the scientific method and all the explanations sound like they make sense, but is that enough to accept them as facts? Or should I lower my probabilities until I actually find out for myself?

In response to comment by sjmp on Circular Altruism
Comment author: TheOtherDave 17 April 2013 06:06:34PM 2 points [-]

You are dodging an important part of the question.

The "dust speck" was originally adopted as a convenient label for the smallest imaginable unit of disutility. If I believe that disutility exists at all and that events can be ranked by how much disutility they cause, it seems to follows that there's some "smallest amount of disutility I'm willing to talk about." If it's not a dust speck for you, fine; pick a different example: stubbing your toe, maybe. Or if that's not bad enough to appear on your radar screen, cutting your toe off. The particular doesn't matter.

Whatever particular small problem you choose, then ask yourself how you compare small-problem-to-lots-of-people with large-problem-to-fewer-people.

If disutilities add across people, then for some number of people I arrive at the counterintuitive conclusion that 50 years of torture to one person is preferable to small-problem-to-lots-of-people. And if I resist the temptation to flinch, I can either learn something about my intuitions and how they break down when faced with very large and very small numbers, or I can endorse my intuitions and reject the idea that disutilities add across people.
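To make the additive reading concrete, here is a minimal sketch; every number in it is a purely hypothetical placeholder, not a claim about how much disutility a speck or the torture actually carries:

    # Minimal sketch of the "disutilities add across people" reading.
    # SPECK and TORTURE are arbitrary placeholders, not real estimates.
    SPECK = 1e-9     # hypothetical disutility of one barely-noticeable harm
    TORTURE = 1e7    # hypothetical disutility of 50 years of torture

    def total_disutility(per_person, people):
        """Total disutility if small harms simply sum across people."""
        return per_person * people

    for people in (10**6, 10**15, 10**20):
        total = total_disutility(SPECK, people)
        verdict = "worse than the torture" if total > TORTURE else "not as bad as the torture"
        print(people, total, verdict)

On these made-up numbers the crossover happens somewhere between 10^15 and 10^20 people; the only point is that, if the sum is taken at face value, some finite crowd size always crosses it.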

Comment author: sjmp 17 April 2013 07:46:24PM *  2 points [-]

"Whatever particular small problem you choose, then ask yourself how you compare small-problem-to-lots-of-people with large-problem-to-fewer-people. If disutilities add across people, then for some number of people I arrive at the counterintuitive conclusion that 50 years of torture to one person is preferable to small-problem-to-lots-of-people."

It is counterintuitive, and at least for me it's REALLY counterintuitive. On whether to save 400 people or 500 people with a 90% chance, it didn't take me many seconds to choose the second option, but this feels very different. Now that you put it in terms of units of disutility instead of dust specks, it is easier to think about, and on some level it does feel like torture of one person would be the logical choice. And then part of my mind starts screaming that this is wrong.
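(For what it's worth, the arithmetic behind that first choice, assuming "500 people with a 90% chance" means a 90% chance of saving all 500 and a 10% chance of saving no one, is just the expected number of lives saved:

    E[\text{lives saved}] = 0.9 \times 500 + 0.1 \times 0 = 450 > 400

So the gamble comes out ahead on expected lives saved.)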

Thanks for your reply though, I'll have to think about all this.

In response to Circular Altruism
Comment author: sjmp 17 April 2013 04:15:42PM *  1 point [-]

You are making the assumption that the feeling caused by having a dust speck in your eye is in the same category as the feeling of being tortured for 50 years.

Would you rather have googolplex people drink a glass of water, or have one person tortured for 50 years? Would you rather have googolplex people put on their underwear in the morning, or have one person tortured for 50 years? If you put the feeling of a dust speck in the same category as the feelings arising from 50 years of torture, you can put pretty much anything in that category, and you end up preferring one person being tortured for 50 years to almost any physical phenomenon that would happen to googolplex people.

And even if it is in the same category? I bet that just having a thought causes some extremely small activity in brain areas related to pain. Multiply that by a large enough number and the total pain value will be greater than the pain value of a person being tortured for 50 years! I would hope that there is no one who would prefer one person being tortured for 50 years to 3^^^3 people having a thought...
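To spell out that multiplication (with ε standing for some minuscule but nonzero disutility per thought and T for the disutility of 50 years of torture; both symbols are purely illustrative):

    N \cdot \varepsilon > T \quad\Longleftrightarrow\quad N > \frac{T}{\varepsilon}

On the additive reading, as long as ε > 0 there is some finite N past which the sum wins, and 3^^^3 is unimaginably larger than T/ε for any remotely plausible finite values of T and ε.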