Comments

I don't really see how that could be the case. For the people undergoing the simulation, everything would be just as real as this current moment is to you and me. How can there be a condition for a maximally happy state that excludes being in a simulation, when this ultra-advanced AI is in fact giving you the exact same nerve signals you would get if you experienced the things in the simulation in real life?

Far be it from me to tell anyone what a maximally happy existence is. I'm sure an AI with a full understanding of human physiology can figure that out.

I would venture to guess that it would not be a constant stream of events the person undergoing the simulation would write on a piece of paper under the heading "happy stuff". Some minor setbacks might be included for perspective, maybe even a big event like cancer, which the person in the simulation would manage to overcome.

Or maybe it's the person in the simulation sitting in an empty white space while the AI maximally stimulates the pleasure centers of their brain until the heat death of the universe.

Hey guys, I'm a person who goes by the name sjmp on this site! I found lesswrong about a year ago (I actually don't remember how I found it; I read for a bit back then, but I started seriously reading through the Sequences a few months ago) and I can honestly say this is the single best website I've ever found. Rather than make a long post on why lesswrong and rationality are awesome, I'd like to offer one small anecdote about what lesswrong has done for me:

When I first came to the site, I already understood the "if a tree falls in a forest..." dispute; the question "But did it really make a sound?" did not linger in my mind, and there was nothing mysterious or unclear to me about the whole affair. Yet I could most definitely remember a time long ago when I was extremely puzzled by it. I thought to myself, how silly that I was ever puzzled by something like that. What did puzzle me was free will. The question seemed Very Mysterious And Deep to me.

Can you guess what I'm thinking now? How silly that I was ever puzzled by the question "Do we have free will?"... Reducing free will is surely not as easy as reducing the dispute about falling trees, but it does seem pretty obvious in hindsight ;)

Taking it a bit further than a pill: if we could trust an AI to put the whole of humanity into a Matrix-like state, keep humanity alive in that state longer than it could survive living in the real world, and run a simulation of a maximally happy life in each brain until it ran out of energy, would you advocate it? I know I would, and I don't really see any reason not to.

I was going to say something about moral progress being changes in society that result in a global increase in happiness, but I ran into some problems pretty fast following that thought. Hell, if we could poll every single living person from the 11th century and the 21st century and ask them to rate their happiness from 1 to 10, why do I have a feeling we'd end up with the same average in both cases?

If you gave me an extensional definition of moral progress by listing free speech, the end of slavery, and democracy, and then asked me for an intensional definition, I'd say moral progress is a global and local increase in cooperation between humans. That does not necessarily mean an increase in global happiness.

So you are saying that the statement "0 and 1 are not probabilities" has a probability of 1?

I suppose I could talk about how I've never had any proof of the existence of photons other than what I've read in books and been told by teachers. How I am, in fact, taking science on authority rather than on first-hand experimental proof. Sure, the scientific method and all the explanations sound like they make sense, but is that enough to accept them as facts? Or should I lower my probabilities until I actually find out for myself?

Whatever particular small problem you choose, ask yourself how you compare small-problem-to-lots-of-people with large-problem-to-fewer-people. If disutilities add across people, then for some number of people I arrive at the counterintuitive conclusion that 50 years of torture for one person is preferable to small-problem-to-lots-of-people.

It is counterintuitive, and at least for me it's REALLY counterintuitive. On whether to save 400 people or 500 people with a 90% chance, it didn't take me many seconds to choose the second option, but this feels very different. Now that you put it in terms of units of disutility instead of dust specks, it is easier to think about, and on some level it does feel like the torture of one person would be the logical choice. And then part of my mind starts screaming that this is wrong.

Thanks for your reply, though. I'll have to think about all this.

You are making the assumption that the feeling caused by having a dust speck in your eye is in the same category as the feeling of being tortured for 50 years.

Would you rather have a googolplex of people drink a glass of water or have one person tortured for 50 years? Would you rather have a googolplex of people put on their underwear in the morning or have one person tortured for 50 years? If you put the feeling of a dust speck in the same category as the feelings arising from 50 years of torture, you can put pretty much anything in that category, and you end up preferring one person being tortured for 50 years to almost any physical phenomenon that could happen to a googolplex of people.

And even if it is in the same category? I bet that just having a thought causes some extremely small activity in the brain areas related to pain. Multiply that by a large enough number and the total pain value will be greater than the pain value of one person being tortured for 50 years! I would hope there is no one who would prefer one person being tortured for 50 years to 3^^^3 people having a thought...
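
To make the aggregation point concrete, here is a minimal sketch, with $\epsilon$, $T$ and $N$ as hypothetical placeholders of my own (the per-person disutility of the tiny harm, the disutility of 50 years of torture, and the number of people affected). If disutilities simply add across people, then

$$N \cdot \epsilon > T \quad \text{whenever} \quad N > \frac{T}{\epsilon},$$

so for any fixed $\epsilon > 0$, no matter how tiny, a number as large as 3^^^3 is more than enough to push the total past $T$.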