For background, see here.
In a comment on the original Pascal's mugging post, Nick Tarleton writes:
[Y]ou could replace "kill 3^^^^3 people" with "create 3^^^^3 units of disutility according to your utility function". (I respectfully suggest that we all start using this form of the problem.)
Michael Vassar has suggested that we should consider any number of identical lives to have the same utility as one life. That could be a solution, as it's impossible to create 3^^^^3 distinct humans. But this is also irrelevant to the create-3^^^^3-disutility-units form.
Coming across this again recently, it occurred to me that there might be a way to generalize Vassar's suggestion in such a way as to deal with Tarleton's more abstract formulation of the problem. I'm curious about the extent to which folks have thought about this. (Looking further through the comments on the original post, I found essentially the same idea in a comment by g, but it wasn't discussed further.)
The idea is that the Kolmogorov complexity of "3^^^^3 units of disutility" should be much higher than the Kolmogorov complexity of the number 3^^^^3. That is, the utility function should grow only according to the complexity of the scenario being evaluated, and not (say) linearly in the number of people involved. Furthermore, the domain of the utility function should consist of low-level descriptions of the state of the world, which won't refer directly to words uttered by muggers, in such a way that a mere discussion of "3^^^^3 units of disutility" by a mugger will not typically be (anywhere near) enough evidence to promote an actual "3^^^^3-disutilon" hypothesis to attention.
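To make the intuition concrete, here is a minimal sketch of why complexity-bounded utility blocks the mugging. This is my own formalization, not something anyone in the original thread committed to, and the constants c and k below are arbitrary. Write K(w) for the Kolmogorov complexity of a world-description w, assume the prior weight of w is on the order of 2^{-K(w)}, and suppose the utility function is constrained to grow at most polynomially in the complexity of the scenario:

\[
P(w) \sim 2^{-K(w)}, \qquad |U(w)| \le c \cdot K(w)^{k}.
\]

Then the expected-utility contribution of any single hypothesis satisfies

\[
P(w)\,|U(w)| \;\le\; c \cdot K(w)^{k} \, 2^{-K(w)} \;\to\; 0 \quad \text{as } K(w) \to \infty,
\]

so a world complex enough to actually contain 3^^^^3 disutilons is, by that very complexity, assigned a prior small enough that its contribution to expected utility is negligible; and the mugger's sentence, whose own complexity is tiny, is not the kind of evidence that could raise that prior by much.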
This seems to imply that the intuition responsible for the problem is a kind of fake simplicity, ignoring the complexity of value (negative value in this case). A confusion of levels also appears implicated (talking about utility does not itself significantly affect utility; you don't suddenly make 3^^^^3-disutilon scenarios probable by talking about "3^^^^3 disutilons").
What do folks think of this? Any obvious problems?
And here I was expecting you to actually run the numbers.
I'm not a particle physicist, but I do know quite a bit more about the actual numbers to start a calculation from than you do, because I bothered finding them out, and your citation so far appears to be someone else who didn't bother finding them out. This is what I mean by "reasoning from ignorance" and "even very slight domain knowledge".
You did run your numbers assuming that events at the LHC's maximum energy and greater happen all the time, right?
The probability of the sun not coming up tomorrow is greater than 0, but in any practical sense I'd be a drooling lackwit to waste time calculating it.
I appreciate you're offering a teachable moment about probability, but you really, really aren't saying anything useful or sensible about the LHC, as you claimed to be.
So far the only number introduced here has been Rees' "one in fifty million". You've consistently avoided giving a number, using only the "but there's still a chance" thing, which in my interpretation you're using diametrically against its intended meaning (intended meaning is that you can't just use a binary "there is a chance" versus "there's not a chance" ...
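Purely as an illustration of what "running the numbers" could mean here (this is not a figure anyone in the thread computed), taking Rees' one-in-fifty-million at face value and a world population of roughly 7 billion gives an expected-death figure of

\[
\mathbb{E}[\text{deaths}] \approx \frac{1}{5 \times 10^{7}} \times 7 \times 10^{9} \approx 140,
\]

though whether that number means anything depends on whether the one-in-fifty-million estimate is itself defensible, which is exactly what is in dispute.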