Manfred comments on The ethics of randomized computation in the multiverse - Less Wrong

Post author: lukeprog 22 November 2011 04:31PM




Comment author: Manfred 22 November 2011 10:06:35PM 0 points

More specifically, I'm pretty sure we humans don't have any negative parts of our utility function that grow exponentially with "badness," so there's no bad outcome that can overcome the exponential decrease in probability with program size and actually be a significant factor.

Comment author: hairyfigment 28 November 2011 07:04:42AM 0 points

Are you going with Torture v Dust Specks here? Or do you just reject Many Worlds? (Or have I missed something?)

It seems to this layman that using quantum randomization would give us no increase, or at most a tiny increase, in utility per world, relative to overwriting each bit with 0 or a piece of Lorem Ipsum. And as with Dust Specks, if we actually knew we might have prevented torture, I'd get a warm feeling, which should count towards the total.

Comment author: Manfred 28 November 2011 07:32:08AM 0 points

Are you going with Torture v Dust Specks here? Or do you just reject Many Worlds?

Neither is relevant in this case. My claim is that it's not worth spending even a second of time, even a teensy bit of thought, on changing which kind of randomization you use.

Why? Exponential functions drop off really, really quickly. Really quickly. The proportion of random bit strings that, when booted up, are minds in horrible agony drops roughly as the exponential of the complexity of the idea "minds in horrible agony." It would look approximately like 2^-(complexity).

To turn this exponentially small chance into something I'd care about, we'd need the consequence to be of exponential magnitude. But it's not. It's just a regular number, like 1 billion dollars or so. That's 2^30. It's nothing. You aren't going to write a computer program that detects minds in horrible agony using 30 bits. You aren't going to write one with 500 bits, either (a proportion of about one part in 10^151). It's simply not worth worrying about things that are worth less than 10^-140 cents.
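If you want to check that arithmetic yourself, here's a quick Python sketch. The 500-bit complexity and the billion-dollar disutility are just the illustrative figures above, not measured quantities:

```python
# Quick check of the arithmetic above. The 500 bits and 1 billion dollars
# are illustrative figures, not measured quantities.

probability = 2.0 ** -500       # proportion of random bit strings: ~3.05e-151
disutility_dollars = 1e9        # "a regular number like 1 billion dollars"

expected_loss_cents = probability * disutility_dollars * 100

print(f"probability   ~ {probability:.2e}")               # ~3.05e-151
print(f"expected loss ~ {expected_loss_cents:.2e} cents") # ~3.05e-140
```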

Comment author: hairyfigment 28 November 2011 07:54:04AM 0 points

I'm saying I don't understand what you're measuring. Does a world with a suffering simulation exist, given the OP's scenario, or not?

If it does, then the proliferation of other worlds doesn't matter unless they contain something that might offset the pain. If they're morally neutral they can number Aleph-1 and it won't make any difference.

Comment author: Manfred 28 November 2011 09:35:10AM 0 points

Decision-making in many-worlds is identical to ordinary decision-making. You weight the utility of each possible outcome by its measure, and add them up into an expected utility. The bad stuff in one of those outcomes only feels more important when you phrase it in terms of many-worlds, because a certainty of small bad stuff often feels worse than a chance of big bad stuff, even when the expected utility is the same.
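To make that concrete, here's a minimal sketch with made-up numbers: weighting each outcome's utility by its measure gives the same answer whether you read the measures as probabilities in one world or as branch weights across many worlds.

```python
# A minimal sketch of the point above, with made-up numbers. The measures
# can be read as probabilities (one world) or branch weights (many worlds);
# the expected-utility calculation is the same either way.

def expected_utility(outcomes):
    """outcomes: iterable of (measure, utility) pairs; measures sum to 1."""
    return sum(measure * utility for measure, utility in outcomes)

# "Chance of big bad stuff": a one-in-a-million branch with a huge disutility.
risky = [(1e-6, -1_000_000.0), (1 - 1e-6, 0.0)]

# "Certainty of small bad stuff": every branch loses one utilon.
certain = [(1.0, -1.0)]

print(expected_utility(risky))    # -1.0
print(expected_utility(certain))  # -1.0
```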