All of sjmp's Comments + Replies

sjmp 20

I don't really see how that could be the case. For the people undergoing the simulation, everything would be just as real as this current moment is to you and me. How can there be a condition for a maximally happy state that excludes being in a simulation, when this ultra-advanced AI is in fact giving you the exact same nerve signals you would get if you experienced the things in the simulation in real life?

sjmp 10

Far be it from me to tell anyone what a maximally happy existence is. I'm sure an AI with a full understanding of human physiology can figure that out.

I would venture to guess that it would not be a constant stream of events that the person undergoing the simulation would write on a piece of paper under the title "happy stuff", but some minor setbacks might be included for perspective, maybe even a big event like cancer, which the person in the simulation would manage to overcome?

Or maybe it's the person in the simulation sitting in an empty white space while the AI maximally stimulates the pleasure centers of their brain until the heat death of the universe.

1 [anonymous]
This suggestion might run into trouble if the 'maximally happy state' should have necessary conditions which exclude being in a simulation. Suppose being maximally happy meant, I dunno, exploring and thinking about the universe and sharing their lives with other people. Even if you could simulate this perfectly, just the fact that it was simulated would undermine the happiness of the participants. It's at least not obviously true that you're happy if you think you are.
0 TheOtherDave
OK, thanks.
sjmp 10

Hey guys, I'm a person who on this site goes by the name sjmp! I found LessWrong about a year ago (I actually don't remember how I found it; I read for a bit back then, but I started seriously reading through the Sequences a few months ago), and I can honestly say this is the single best website I've ever found. Rather than make a long post on why LessWrong and rationality are awesome, I'd like to offer one small anecdote about what LessWrong has done for me:

When I first came to the site, I already had an understanding of the "if a tree falls in a forest..." dispu...

sjmp 20

Taking it a bit further from a pill: if we could trust an AI to put the whole of humanity into a Matrix-like state, and keep humanity alive in that state longer than it could survive living in the real world, while running a simulation of a maximally happy life in each brain until it ran out of energy, would you advocate it? I know I would, and I don't really see any reason not to.

0 TheOtherDave
Can you say more about what you anticipate this maximally happy existence would look like?
sjmp 00

I was going to say something about moral progress being changes in society that result in a global increase in happiness, but I ran into some problems pretty fast following that thought. Hell, if we could poll every single living being from the 11th century and the 21st century and ask them to rate their happiness from 1 to 10, why do I have a feeling we'd end up with the same average in both cases?

If you gave me an extensional definition of moral progress by listing free speech, the end of slavery, and democracy, and then asked me for an intensional definition, I'd say moral progress is a global and local increase in co-operation between humans. That does not necessarily mean an increase in global happiness.

sjmp -30

So you are saying that the statement "0 and 1 are not probabilities" has a probability of 1?

0 thrawnca
Nope. He's saying that, based on his best analysis, it appears to be the case.
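
As an aside, the claim being defended here traces back to the log-odds view of probability; a minimal sketch of that point, using the standard logit transform:

\[
\operatorname{logit}(p) = \ln\frac{p}{1-p}, \qquad \operatorname{logit}(p) \to +\infty \;\text{as}\; p \to 1, \qquad \operatorname{logit}(p) \to -\infty \;\text{as}\; p \to 0.
\]

Since a Bayesian update adds a finite log-likelihood ratio to the log odds, no finite amount of evidence reaches the endpoints, which is why 0 and 1 get treated as unattainable limits rather than ordinary probabilities.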
sjmp 10

I suppose I could talk about how I've never had any proof of the existence of photons other than what I've read in books and been told by teachers. How I am, in fact, taking science on authority rather than on first-hand experimental proof. Sure, the scientific method and all the explanations sound like they make sense, but is that enough to accept them as facts? Or should I lower my probabilities until I actually find out for myself?

sjmp 40

Whatever particular small problem you choose, ask yourself how you compare small-problem-to-lots-of-people with large-problem-to-fewer-people. If disutilities add across people, then for some number of people I arrive at the counterintuitive conclusion that 50 years of torture to one person is preferable to small-problem-to-lots-of-people.

It is counterintuitive, and at least for me it's REALLY counterintuitive. On whether to save 400 people for certain or 500 people with a 90% chance, it didn't take me many seconds to choose the second option, but this feels very diff...
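
For reference, a quick sketch of the arithmetic behind that choice, assuming the standard framing of the problem (400 people saved with certainty, versus a 90% chance of saving 500 and a 10% chance of saving no one):

\[
\mathbb{E}[\text{saved}_1] = 400, \qquad \mathbb{E}[\text{saved}_2] = 0.9 \times 500 = 450.
\]

On expected lives saved, the second option comes out 50 ahead, even though it risks saving nobody at all.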

1 TheOtherDave
I suspect it's really counterintuitive to most people. That's why it gets so much discussion, and in particular why so many people fight the hypothetical so hard. The "yeah, that makes sense, but then my brain starts screaming" reaction is pretty common. And yes, I agree that if we compare things that are closer together in scale, our intuitions don't break down quite so dramatically.
sjmp 10

You are making the assumption that the feeling caused by having a dust speck in your eye is in the same category as the feeling of being tortured for 50 years.

Would you rather have a googolplex of people drink a glass of water or have one person tortured for 50 years? Would you rather have a googolplex of people put on their underwear in the morning or have one person tortured for 50 years? If you put the feeling of a dust speck in the same category as the feelings arising from 50 years of torture, you can put pretty much anything in that category, and you end up preferring one person being tortured...

4 TheOtherDave
You are dodging an important part of the question. The "dust speck" was originally adopted as a convenient label for the smallest imaginable unit of disutility. If I believe that disutility exists at all and that events can be ranked by how much disutility they cause, it seems to follow that there's some "smallest amount of disutility I'm willing to talk about." If it's not a dust speck for you, fine; pick a different example: stubbing your toe, maybe. Or if that's not bad enough to appear on your radar screen, cutting your toe off. The particular example doesn't matter.

Whatever particular small problem you choose, ask yourself how you compare small-problem-to-lots-of-people with large-problem-to-fewer-people. If disutilities add across people, then for some number of people I arrive at the counterintuitive conclusion that 50 years of torture to one person is preferable to small-problem-to-lots-of-people.

And if I resist the temptation to flinch, I can either learn something about my intuitions and how they break down when faced with very large and very small numbers, or I can endorse my intuitions and reject the idea that disutilities add across people.
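
A worked version of the additivity argument above, with explicitly made-up numbers: let \(d > 0\) be the per-person disutility of the chosen small problem and \(T\) the disutility of 50 years of torture. If disutilities simply sum across people, then

\[
N \cdot d > T \quad \text{whenever} \quad N > T/d,
\]

so with illustrative values like \(d = 10^{-9}\) and \(T = 10^{6}\) (in arbitrary disutility units), any \(N\) above \(10^{15}\) people makes the many small problems collectively worse than the one torture.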