Notes for future OT posters:
1. Please add the 'open_thread' tag.
2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)
3. Open Threads should be posted in Discussion, and not Main.
4. Open Threads should start on Monday and end on Sunday.
Interesting. The general consensus in that thread seems to have been that the user in question was missing the point somehow, and -3 isn't really such a terribly low score for something generally thought to have been missing the point. (I guess it was actually +6 -9.)
I don't think the poor reception of "Adding up to normality" is why the user in question left LW. E.g., this post was made by the same user about 6 months later, so clearly s/he wasn't immediately driven off by the downvotes on "Adding up to normality".
Anyway. I think I agree with the general consensus in that thread (though I didn't downvote the post and still wouldn't) that the author missed the point a bit. I think Egan's law is a variant on a witticism attributed to Wittgenstein. Supposedly, he and a colleague had a conversation like this:

W: Why did anyone think the sun went round the earth?
C: Because it looks as if it does.
W: What would it have looked like, if it had looked as if the earth went round the sun?

The answer, of course, being that it would have looked just the way it actually does, because the earth does go round the sun and things look the way they do.
Similarly (and I think this is Egan's point), if you have (or the whole species has) developed some attitude to life, or some expectation about what will happen in ordinary circumstances, based on how the world looks, and if some new scientific theory predicts that the world will look that way, then either you shouldn't change that attitude or it was actually inappropriate all along.
Now, you can always take the second branch and say things like this: "This theory shows that we should all shoot ourselves, so plainly if we'd been clever enough we'd already have deduced from everyday observation that we should all shoot ourselves. But we weren't, and it took the discovery of this theory to show us that. But now, we should all shoot ourselves." So far as I can tell, appealing to Egan's law doesn't do anything to refute that. It just says that if something is known to work well in the real world, then ipso facto our best scientific theories tell us it should work well in the world they describe, even if the way they describe that world feels weird to us.
I agree with the author when s/he writes that correct versions of Egan's law don't at all rule out the possibility that some proposition we feel attached to might in fact be ruled out by our best scientific theories, provided that proposition goes beyond merely-observational statements along the lines of "it looks as if X".
So, what about the example we're actually discussing? Your proposal, AIUI, is as follows: rig things up so that in the event of the human race getting wiped out you almost certainly get instantly annihilated before you have a chance to learn what's happening; then you will almost certainly never experience the wiping-out of the human race. You describe this by saying that you "probably survive any x-risk".
This seems all wrong to me, and I can see the appeal of expressing its wrongness in terms of "Egan's law", but I don't think that's necessary. I would just say: Are you quite sure that what this buys you is really what you care about? If so, then e.g. it seems you should be indifferent to the installation of a device at your house that at 4am every day, with probability 1/2, blows up the house in a massive explosion with you in it. After all, you will almost certainly never experience being killed by the device (the explosion is big and quick enough for that, and in any case it usually happens when you're asleep). Personally, I would very much not want such a device in my house, because I value not dying as well as not experiencing death, and also because there are other people who would be (consciously) harmed if this happened. And I think it much better terminology to describe the situation as "the device will almost certainly kill me" than as "the device will almost certainly not kill me", because when computing probabilities I want to condition on my knowledge, existence, etc., as they are now, not as they will be after the relevant events happen.
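To put numbers on that (a minimal sketch in Python; the per-day probability of 1/2 is just the figure from the thought experiment above, and the function name is my own):

def p_dead_by_day(n, p_daily=0.5):
    # Surviving to day n means the device failed to fire on each of the
    # n days, independently with probability 1/2 each; the probability
    # of being dead by day n is the complement of that.
    return 1 - (1 - p_daily) ** n

for n in (1, 7, 30):
    print(f"day {n:>2}: P(dead by now) = {p_dead_by_day(n):.9f}")

By day 7 the chance of still being alive is under 1%, and by day 30 it is under one in a billion. The fact that you would never consciously experience the explosion does nothing to move these numbers, which is why "the device will almost certainly kill me" seems like the right description.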
Am I applying "Egan's law" here? Kinda. I care about not dying because that's how my brain's built, and it was built that way by an evolutionary process that played out in the actual world, where a lineage isn't any better off for having its siblings in other wavefunction-branches survive; and when describing probabilities I prefer to condition only on my present epistemic state, because in most contexts that leads to neater formulas and fewer mistakes; and what I'm claiming is that those things aren't invalidated by saying words like "anthropic" or "quantum". But an explicit appeal to Egan seems unnecessary. I'm just reasoning in the usual way, and waiting to be shown a specific reason why I'm wrong.
I meant that not only his post but also most of his comments were downvoted, and from my personal experience, when I get a lot of downvotes I find it difficult to continue rational discussion of the topic.
Egan's law is very vague in its short formulation. It is not clear what the "all" is, what kind of law it is (epistemic, natural, legal), or what "normality" means (physics, experience, our expectations, our social agreements). So it is mostly used as a universal objection to anything strange.
But there are a lot of strange things. Nukes were not normal before they were c...