VAuroch comments on Probabilities Small Enough To Ignore: An attack on Pascal's Mugging - Less Wrong

Post author: Kaj_Sotala 16 September 2015 10:45AM




Comment author: VAuroch 16 September 2015 07:07:41PM 1 point [-]

It's easy if they have access to running detailed simulations, and while the probability that someone secretly has that ability is very low, it's not nearly as low as the probabilities Kaj mentioned here.

Comment author: IlyaShpitser 16 September 2015 08:33:19PM *  1 point [-]

It is? How much energy are you going to need to run detailed sims of 10^100 people?

Comment author: Houshalter 17 September 2015 03:44:04AM 2 points [-]

How do you know you don't exist in the matrix? And that the true universe above ours doesn't have infinite computing power (or huge but bounded, if you don't believe in infinity)? How do you know the true laws of physics in our own universe don't allow such possibilities?

You can say these things are unlikely. That's literally specified in the problem. That doesn't resolve the paradox at all though.

Comment author: IlyaShpitser 17 September 2015 03:49:04AM 0 points [-]

I don't know, but my heuristic says to ignore stories that violate sensible physics I know about.

Comment author: Houshalter 17 September 2015 05:27:05AM 2 points [-]

That's fine. You can just follow your intuition, and that usually won't lead you too far wrong. Usually. However, the issue here is programming an AI which doesn't share our intuitions. We need to actually formalize our intuitions to get it to behave as we would.

Comment author: IlyaShpitser 17 September 2015 01:30:04PM 1 point [-]

What criterion do you use to rule out solutions?

Comment author: V_V 18 September 2015 01:36:57PM *  -1 points [-]

If you assume that the probability of somebody creating X lives decreases asymptotically as exp(-X), then you will not accept the deal. In fact, the larger the number they say, the lower the expected utility you'll estimate (assuming that your utility is linear in the number of lives).

It seems to me that such epistemic models are natural. Pascal's Mugging arises as a thought experiment only if you consider arbitrary probability distributions and arbitrary utility functions, which in fact may even cause the expectations to become undefined in the general case.
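A toy sketch of that model (the function name is illustrative; it just takes the prior to fall off as exp(-X) and utility to be linear in X, as described above):

```python
import math

def expected_utility(claimed_lives):
    # Prior that the mugger can really create X lives falls off as exp(-X);
    # utility is taken to be linear in lives, so EU(X) = X * exp(-X).
    return claimed_lives * math.exp(-claimed_lives)

# The larger the number the mugger names, the lower the expected utility:
for x in (1, 10, 100):
    print(x, expected_utility(x))
```

Past X = 1 the exponential penalty dominates the linear payoff, so naming a bigger number only weakens the mugger's offer.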

Comment author: Houshalter 19 September 2015 03:16:31AM *  0 points [-]

If you assume that the probability of somebody creating X lives decreases asymptotically as exp(-X) then you will not accept the deal.

I don't assume this. And I don't see any reason why I should assume this. It's quite possible that there exist powerful ways of simulating large numbers of humans. I don't think it's likely, but it's not literally impossible like you are suggesting.

Maybe it even is likely. I mean the universe seems quite large. We could theoretically colonize it and make trillions of humans. By your logic, that is incredibly improbable. For no other reason than that it involves a large number. Not that there is any physical law that suggests we can't colonize the universe.

Comment author: V_V 19 September 2015 11:17:06AM -1 points [-]

I don't think it's likely, but it's not literally impossible like you are suggesting.

I'm not saying it's literally impossible, I'm saying that its probability should decrease with the number of humans, faster than the number of humans increases.

Maybe it even is likely. I mean the universe seems quite large. We could theoretically colonize it and make trillions of humans. By your logic, that is incredibly improbable. For no other reason than that it involves a large number.

Not really. I said "asymptotically"; I was considering the tails of the distribution.
We can observe our universe and deduce the typical scale of the stuff in it. Trillions of humans may not be very likely, but they don't appear to be physically impossible in our universe. 10^100 humans, on the other hand, are off scale. They would require a physical theory very different from ours. Hence we should assign it a vanishingly small probability.

Comment author: Houshalter 19 September 2015 12:58:27PM 0 points [-]

I'm not saying it's literally impossible

1/3^^^3 is so unfathomably small, you might as well be saying it's literally impossible. I don't think humans are confident enough to assign probabilities that low, ever.

10^100 humans, on the other hand, are off scale. They would require a physical theory very different than ours. Hence we should assign to it a vanishingly small probability.

I think EY had the best counter argument. He had a fictional scenario where a physicist proposed a new theory that was simple and fit all of our data perfectly. But the theory also implies a new law of physics that could be exploited for computing power, and would allow unfathomably large amounts of computing power. And that computing power could be used to create simulated humans.

Therefore, if it's true, anyone alive today has a small probability of affecting large numbers of simulated people. Since that has "vanishingly small probability", the theory must be wrong. It doesn't matter if it's simple or if it fits the data perfectly.

But it seems like a theory that is simple and fits all the data should be very likely. And it seems like all agents with the same knowledge should have the same beliefs about reality. Reality is totally uncaring about what our values are. What is true is already so. We should try to model it as accurately as possible, not refuse to believe things because we don't like the consequences. That's actually a logical fallacy (appeal to consequences).

Comment author: V_V 19 September 2015 02:55:43PM *  -1 points [-]

1/3^^^3 is so unfathomably huge, you might as well be saying it's literally impossible. I don't think humans are confident enough to assign probabilities so low, ever.

Same thing with numbers like 10^100 or 3^^^3.

I think EY had the best counter argument. He had a fictional scenario where a physicist proposed a new theory that was simple and fit all of our data perfectly. But the theory also implies a new law of physics that could be exploited for computing power, and would allow unfathomably large amounts of computing power. And that computing power could be used to create simulated humans.

EY can imagine all the fictional scenarios he wants; this doesn't mean that we should assign non-negligible probabilities to them.

It doesn't matter if it's simple or if it fits the data perfectly.

If.

But it seems like a theory that is simple and fits all the data should be very likely. And it seems like all agents with the same knowledge, should have the same beliefs about reality. Reality is totally uncaring about what our values are. What is true is already so. We should try to model it as accurately as possible. Not refuse to believe things because we don't like the consequences.

If your epistemic model generates undefined expectations when you combine it with your utility function, then I'm pretty sure we can say that at least one of them is broken.

EDIT:

To expand: just because we can imagine something and give it a short English description, it doesn't mean that it is simple in epistemic terms. That's the reason why "God" is not a simple hypothesis.

Comment author: Houshalter 20 September 2015 02:25:42AM -1 points [-]

EY can imagine all the fictional scenario he wants, this doesn't mean that we should assign non-negligible probabilities to them.

Not negligible, zero. You literally can not believe in a theory of physics that allows large amounts of computing power. If we discover that an existing theory like quantum physics allows us to create large computers, we will be forced to abandon it.

If your epistemic model generates undefined expectations when you combine it with your utility function, then I'm pretty sure we can say that at least one of them is broken.

Yes, something is broken, but it's definitely not our prior probabilities. Something like Solomonoff induction should generate perfectly sensible predictions about the world. If knowing those predictions makes you do weird things, that's a problem with your decision procedure, not the probability function.

Comment author: V_V 20 September 2015 09:29:27AM 0 points [-]

Not negligible, zero.

You seem to have a problem with very small probabilities but not with very large numbers. I've also noticed this in Scott Alexander and others. If very small probabilities are zeros, then very large numbers are infinities.

You literally can not believe in an theory of physics that allows large amounts of computing power. If we discover that an existing theory like quantum physics allows us to create large computers, we will be forced to abandon it.

Sure. But since we know of no such theory, there is no a priori reason to assume one exists with non-negligible probability.

Something like Solomonoff induction should generate perfectly sensible predictions about the world.

Nope, it doesn't. If you apply Solomonoff induction to predict arbitrary integers, you get undefined expectations.
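A concrete illustration of the divergence (not Solomonoff induction itself, which weights hypotheses by program length, but a stand-in prior with comparably heavy tails): P(n) = 1/(n(n+1)) is a perfectly valid distribution over the positive integers, yet its expectation is undefined.

```python
def partial_sums(N):
    # P(n) = 1/(n*(n+1)) telescopes: sum_{n<=N} P(n) = 1 - 1/(N+1),
    # so it is a valid probability distribution over n >= 1.
    # But the partial expectation sum_{n<=N} n*P(n) = sum_{n<=N} 1/(n+1)
    # grows like log(N) without bound: the full expectation diverges.
    total_prob = sum(1.0 / (n * (n + 1)) for n in range(1, N + 1))
    partial_expectation = sum(1.0 / (n + 1) for n in range(1, N + 1))
    return total_prob, partial_expectation

for N in (10**2, 10**4, 10**6):
    print(N, partial_sums(N))
```

The probabilities sum to 1, so nothing is wrong epistemically; it's only once you take expectations over unbounded outcomes that things break.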

Comment author: VAuroch 17 September 2015 01:36:22AM 0 points [-]

Point, but not a hard one to get around.

There is a theoretical lower bound on energy per irreversible computation, but it's extremely small, and the timescale the simulations would be run on isn't specified. Also, unless Scott Aaronson's speculative consciousness-requires-quantum-entanglement-decoherence theory of identity is true, there are ways to use reversible computing to get around that bound and achieve theoretically limitless computation, as long as you don't need it to output results. Assuming all of that holds adds improbability, but not much on the scale we're talking about.
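For scale, a back-of-the-envelope sketch of that lower bound (Landauer's principle: irreversibly erasing one bit costs at least k_B·T·ln 2; the 10^100 figure is from upthread, and "one erasure per person" is an arbitrary stand-in):

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def landauer_joules(bit_erasures, temperature_k=300.0):
    # Minimum energy to irreversibly erase the given number of bits
    # at the given temperature (Landauer's principle).
    return bit_erasures * K_B * temperature_k * math.log(2)

# One irreversible bit erasure per simulated person, 10^100 people, room temp:
print(landauer_joules(1e100))  # on the order of 10^79 J
```

Reversible computing sidesteps this cost precisely because only erasures (e.g. outputting results) pay it, which is the point above.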