Rees estimated the probability of the LHC destroying the world at 1 in 50 million, and it would be surprising if he were one of the few people in the world without overconfidence bias, or one of the few people in the world who doesn't underestimate global existential risks.
I assume from the first sentence that you believe an appropriate probability to assign to the LHC destroying the world is less than one in a billion. Trusting anyone, even the world scientific consensus, to the level of a one-in-a-billion probability seems excessive to me - the world scientific consensus has been wrong on more than one in every billion issues it thought it was sure about.

If you're working not off the world scientific consensus but off your own intuition, that seems even stranger - if, for example, the LHC will destroy the world if and only if strangelets are stable at 10 TeV, then you just discovered an important property of strangelet stability to p < .000000001 certainty, which seems like the sort of thing you shouldn't be able to do without any experiments or mathematics.

If you're working off a general tendency for the world not to be destroyed, well, there were five mass extinction events in the past billion years, so ignoring for the moment the tendency of mass extinctions to take multiple years, the probability of a mass extinction beginning in any particular year is about 5 in a billion. If I were to tell you "The human race will become extinct the year the LHC is switched on", would you really tell me "Greater than 80% chance it has nothing to do with the LHC" and go about your business?
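To make the arithmetic behind that last question explicit, here is a minimal sketch of the implied Bayesian update (the 1-in-a-billion LHC figure and the 5-per-billion-years base rate are the illustrative numbers from above, not endorsed estimates):

```python
# Toy Bayesian update: given that an extinction begins the year the LHC is
# switched on, how likely is it that the LHC had nothing to do with it?
# Numbers are the illustrative figures from the paragraph above.

p_lhc = 1e-9          # assumed upper bound on P(LHC destroys the world)
p_background = 5e-9   # ~5 mass extinctions per billion years

# P(extinction begins this year), treating the two causes as independent
p_extinction = p_background + p_lhc

# Posterior probability that the extinction is unrelated to the LHC
p_not_lhc = p_background / p_extinction
print(f"P(nothing to do with the LHC | extinction this year) = {p_not_lhc:.2%}")
# ~83% -- i.e. "greater than 80% chance it has nothing to do with the LHC"
```

With a one-in-a-billion LHC risk, the background rate still dominates the posterior, which is exactly the oddity the question is pointing at.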
I am still uncomfortable with the whole "shut up and multiply" concept too. But I think that's where the "shut up" part comes in. You don't have to be comfortable with it. You don't have to like it. But if the math checks out, you just shut up and keep your discomfort to yourself, because math is math and bad things happen when you ignore it.
"Rees...one of the few people in the world who doesn't underestimate global existential risks"
He assigned a 50% extinction risk to the 21st century in his book. His overall estimates of risk are quite high.
For background, see here.
In a comment on the original Pascal's mugging post, Nick Tarleton writes:
Coming across this again recently, it occurred to me that there might be a way to generalize Vassar's suggestion in such a way as to deal with Tarleton's more abstract formulation of the problem. I'm curious about the extent to which folks have thought about this. (Looking further through the comments on the original post, I found essentially the same idea in a comment by g, but it wasn't discussed further.)
The idea is that the Kolmogorov complexity of "3^^^^3 units of disutility" should be much higher than the Kolmogorov complexity of the number 3^^^^3. That is, the utility function should grow only according to the complexity of the scenario being evaluated, and not (say) linearly in the number of people involved. Furthermore, the domain of the utility function should consist of low-level descriptions of the state of the world, which won't refer directly to words uttered by muggers, in such a way that a mere discussion of "3^^^^3 units of disutility" by a mugger will not typically be (anywhere near) enough evidence to promote an actual "3^^^^3-disutilon" hypothesis to attention.
This seems to imply that the intuition responsible for the problem is a kind of fake simplicity, ignoring the complexity of value (negative value in this case). A confusion of levels also appears implicated (talking about utility does not itself significantly affect utility; you don't suddenly make 3^^^^3-disutilon scenarios probable by talking about "3^^^^3 disutilons").
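One rough way to see the intended effect is the toy sketch below (illustrative assumptions only, not anyone's worked-out proposal: the 2^-K prior penalty and the complexity-bounded utility cap are stand-ins for the idea above):

```python
# Toy illustration: weight hypotheses by a 2^-K complexity penalty, and let
# the (dis)utility a hypothesis may claim be bounded by its own complexity K
# rather than by whatever numeral appears in the mugger's sentence.
# All numbers here are illustrative stand-ins.

def log2_prior(k_bits: float) -> float:
    # Solomonoff-style penalty: prior weight roughly 2^-K for a K-bit description.
    return -k_bits

def log2_utility_cap(k_bits: float) -> float:
    # Assumed cap: a K-bit scenario may claim at most ~2^K disutilons.
    return k_bits

# The mugger's *sentence* is only a few hundred bits, so the numeral 3^^^^3 is
# cheap to utter -- but a low-level world-state that actually contains 3^^^^3
# sufferers has an astronomically long description.
scenarios = [
    ("mugger's verbal threat", 200),
    ("low-level world-state with that many sufferers", 10**6),  # stand-in size
]

for label, k in scenarios:
    bound = log2_prior(k) + log2_utility_cap(k)
    print(f"{label}: K = {k} bits, expected-disutility bound = 2^{bound}")

# Both bounds are 2^0 = 1: when the claimable utility grows no faster than the
# prior penalty shrinks, "tiny probability x enormous stated utility" can no
# longer dominate the calculation, no matter what number the mugger says.
```

The particular cap is arbitrary; the point is only that the utility ceiling and the prior penalty are driven by the same complexity measure, so the mugger's words alone cannot push the product up.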
What do folks think of this? Any obvious problems?