I'm not saying it's literally impossible
A probability of 1/3^^^3 is so unfathomably small, you might as well be saying it's literally impossible. I don't think humans can ever be calibrated well enough to assign probabilities that low.
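For a sense of scale, here is a minimal Python sketch of Knuth's up-arrow notation, which the ^^ and ^^^ above abbreviate (`up_arrow` is just an illustrative helper, not anything from the discussion):

```python
def up_arrow(a, n, b):
    """Knuth's up-arrow: a followed by n arrows, then b, defined recursively."""
    if n == 1:
        return a ** b                # one arrow is ordinary exponentiation
    if b == 0:
        return 1                     # base case of the recursion
    return up_arrow(a, n - 1, up_arrow(a, n, b - 1))

print(up_arrow(3, 1, 3))  # 3^3  = 27
print(up_arrow(3, 2, 3))  # 3^^3 = 3^(3^3) = 7,625,597,484,987
# 3^^^3 = up_arrow(3, 3, 3) is a power tower of 3s that is
# 7,625,597,484,987 levels high; no computer could ever evaluate it.
```

Even 3^^3 already has 13 digits; 1/3^^^3 is correspondingly far below any probability a human could plausibly distinguish from zero.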
10^100 humans, on the other hand, are off the scale. They would require a physical theory very different from ours. Hence we should assign that scenario a vanishingly small probability.
I think EY had the best counterargument. He had a fictional scenario where a physicist proposes a new theory that is simple and fits all of our data perfectly. But the theory also implies a new law of physics that could be exploited for unfathomably large amounts of computing power, and that computing power could be used to create simulated humans.
Therefore, if the theory is true, anyone alive today has a small probability of affecting vast numbers of simulated people. Since that outcome has "vanishingly small probability", the theory must be wrong. It doesn't matter that it's simple or that it fits the data perfectly.
But it seems like a theory that is simple and fits all the data should be very likely, and all agents with the same knowledge should have the same beliefs about reality. Reality is totally uncaring about what our values are; what is true is already so. We should try to model it as accurately as possible, not refuse to believe things because we don't like the consequences. That is the appeal-to-consequences fallacy.
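Stepping back to the arithmetic that generates the mugging in the first place, here is a hedged sketch (the probability and payoff below are made-up numbers, purely for illustration): with exact rational arithmetic, a payoff that grows faster than the probability shrinks will dominate the expected value, no matter how small the probability is.

```python
from fractions import Fraction

# Hypothetical numbers for illustration only: a tiny probability that
# the mugger's claim is true, and a huge payoff in lives if it is.
p_true = Fraction(1, 10**100)   # probability assigned to the claim
payoff = 10**110                # simulated lives at stake if true

ev = p_true * payoff            # expected lives saved by paying
print(ev)                       # 10000000000, i.e. 10^10

# For any probability you are willing to write down, there is a payoff
# (3^^^3 lives, say) large enough to make paying look "rational".
```

This is why capping the probabilities we allow ourselves to assign does not by itself dissolve the problem: the payoff side can always outrun the cap.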
The same point, about probabilities too low for humans to ever justifiably assign, applies to numbers like 10^100 or 3^^^3.
Summary: the problem with Pascal's Mugging arguments is that, intuitively, some probabilities are just too small to care about. There might be a principled reason for ignoring them, namely that they violate an implicit assumption behind expected utility theory. This suggests a possible approach to formally defining a "probability small enough to ignore", though some arbitrariness remains in the definition.
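One way to make the summary's suggestion concrete (a sketch of the general idea only, not the post's exact proposal) is to pick a cutoff ε and drop any outcome whose probability falls below it before taking expectations. The `truncated_expected_utility` helper and the numbers below are hypothetical; the arbitrariness mentioned above lives entirely in the choice of ε.

```python
from fractions import Fraction

def truncated_expected_utility(outcomes, eps):
    """Expected utility that ignores any outcome with probability < eps.

    outcomes: iterable of (probability, utility) pairs.
    eps: the cutoff; choosing it is where the arbitrariness lives.
    """
    return sum(p * u for p, u in outcomes if p >= eps)

# A toy Pascal's Mugging: pay $5 for a 1-in-10^100 shot at 10^110 lives.
mugging = [
    (Fraction(1, 10**100), 10**110),   # mugger is telling the truth
    (1 - Fraction(1, 10**100), -5),    # mugger is lying; you lose $5
]

print(float(truncated_expected_utility(mugging, eps=0)))
# ~1e10: with no cutoff, the huge payoff says to pay the mugger.
print(float(truncated_expected_utility(mugging, eps=Fraction(1, 10**6))))
# ~-5.0: with a cutoff, the mugging branch is ignored and you refuse.
```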