I understand you as saying that cosmic ray collisions, which happen all the time, are very similar to the sort of collisions at CERN, and since they don't cause apocalypses, CERN's won't either. And that because the experiment has already been run millions of times in the form of cosmic rays, the probability behind this "CERN won't either" isn't on the order of "one in a million" or "one in a billion" but is so vanishingly small that it would be silly even to put a number to it.
Tell me if I understood you correctly, and if I did, I will try to rephrase my post and my objections to what you said so they are more understandable.
And that because the experiment has been tried before millions of times in the form of cosmic rays
Not millions of times. Not even billions of times.
From a back-of-the-envelope calculation, they've been tried >10^16 times a year.
For the past 10^9 years.
That's 10^25 times.
And that's probably several orders of magnitude low.
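The arithmetic here is straightforward; a minimal sketch, where the 10^16-per-year flux is the comment's own rough estimate rather than a measured value:

```python
# Back-of-the-envelope: cosmic-ray "collider experiments" run by nature.
# The per-year figure is the comment's rough estimate, not a measured value.
collisions_per_year = 1e16   # high-energy cosmic ray collisions per year (rough)
years = 1e9                  # roughly how long this has been going on

total_trials = collisions_per_year * years
print(f"{total_trials:.0e}")  # → 1e+25
```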
So yes, treating it as something with a non-zero probability of destroying the planet is silly.
Especially because every model I've seen that says it would destroy the planet would also have it destroy the Sun, which has about 10^4 times the surface area of the Earth and would see correspondingly more cosmic ray collisions.
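The ~10^4 area ratio checks out against the standard radii of the Sun and Earth (the radius values below are well-known figures, not from the comment itself):

```python
R_SUN = 6.96e5     # solar radius, km (standard figure)
R_EARTH = 6.371e3  # Earth radius, km (standard figure)

# Surface area scales as radius squared, so the ratio is (R_sun / R_earth)^2.
area_ratio = (R_SUN / R_EARTH) ** 2
print(f"{area_ratio:.1e}")  # ~1.2e4, i.e. on the order of 10^4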
For background, see here.
In a comment on the original Pascal's mugging post, Nick Tarleton writes:
Coming across this again recently, it occurred to me that there might be a way to generalize Vassar's suggestion in such a way as to deal with Tarleton's more abstract formulation of the problem. I'm curious about the extent to which folks have thought about this. (Looking further through the comments on the original post, I found essentially the same idea in a comment by g, but it wasn't discussed further.)
The idea is that the Kolmogorov complexity of "3^^^^3 units of disutility" should be much higher than the Kolmogorov complexity of the number 3^^^^3. That is, the utility function should grow only according to the complexity of the scenario being evaluated, and not (say) linearly in the number of people involved. Furthermore, the domain of the utility function should consist of low-level descriptions of the state of the world, which won't refer directly to words uttered by muggers, in such a way that a mere discussion of "3^^^^3 units of disutility" by a mugger will not typically be (anywhere near) enough evidence to promote an actual "3^^^^3-disutilon" hypothesis to attention.
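The asymmetry the argument leans on can be made concrete: 3^^^^3 (3↑↑↑↑3 in Knuth's up-arrow notation) is generated exactly by a program only a few lines long, so its Kolmogorov complexity is tiny, whereas a low-level description of a world containing 3^^^^3 units of disutility admits no such compression. A minimal sketch (the function name and the small test values are mine):

```python
def up_arrow(a, n, b):
    """Knuth's up-arrow a ↑^n b; "3^^^^3" is up_arrow(3, 4, 3)."""
    if n == 1:
        return a ** b  # a single arrow is ordinary exponentiation
    if b == 1:
        return a
    # Recurrence: a ↑^n b = a ↑^(n-1) (a ↑^n (b-1))
    return up_arrow(a, n - 1, up_arrow(a, n, b - 1))

# Small cases only -- up_arrow(3, 4, 3) itself is astronomically
# far beyond anything a computer could evaluate or store.
print(up_arrow(3, 1, 3))  # 3^3 = 27
print(up_arrow(3, 2, 3))  # 3^(3^3) = 7625597484987
```

The point is that these few lines pin down 3^^^^3 exactly, which is precisely the sense in which K(3^^^^3) is small while K("a world with 3^^^^3 disutilons") is not.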
This seems to imply that the intuition responsible for the problem is a kind of fake simplicity, ignoring the complexity of value (negative value in this case). A confusion of levels also appears implicated (talking about utility does not itself significantly affect utility; you don't suddenly make 3^^^^3-disutilon scenarios probable by talking about "3^^^^3 disutilons").
What do folks think of this? Any obvious problems?