For background, see here.
In a comment on the original Pascal's mugging post, Nick Tarleton writes:
[Y]ou could replace "kill 3^^^^3 people" with "create 3^^^^3 units of disutility according to your utility function". (I respectfully suggest that we all start using this form of the problem.)
Michael Vassar has suggested that we should consider any number of identical lives to have the same utility as one life. That could be a solution, as it's impossible to create 3^^^^3 distinct humans. But this is also irrelevant to the create-3^^^^3-disutility-units form of the problem.
Coming across this again recently, it occurred to me that there might be a way to generalize Vassar's suggestion in such a way as to deal with Tarleton's more abstract formulation of the problem. I'm curious about the extent to which folks have thought about this. (Looking further through the comments on the original post, I found essentially the same idea in a comment by g, but it wasn't discussed further.)
The idea is that the Kolmogorov complexity of "3^^^^3 units of disutility" should be much higher than the Kolmogorov complexity of the number 3^^^^3. That is, the utility function should grow only according to the complexity of the scenario being evaluated, and not (say) linearly in the number of people involved. Furthermore, the domain of the utility function should consist of low-level descriptions of the state of the world, which won't refer directly to words uttered by muggers, in such a way that a mere discussion of "3^^^^3 units of disutility" by a mugger will not typically be (anywhere near) enough evidence to promote an actual "3^^^^3-disutilon" hypothesis to attention.
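To make the intuition concrete, here is a toy numerical sketch (not a formal proposal): assume a Solomonoff-style prior that weights a hypothesis of complexity K bits by roughly 2^-K, and assume the utility function is bounded by some slowly growing function of K, rather than growing linearly in the number of people named in the scenario. The particular bound (K squared) and the complexity values below are illustrative only.

```python
# Toy illustration: if the prior over hypotheses decays like 2^-K and
# |utility| is bounded by a slowly growing function of K, then no single
# hypothesis can dominate the expected-utility calculation, no matter how
# big a number the mugger utters.

def prior(k_bits: float) -> float:
    """Solomonoff-style prior: weight 2^-K for a hypothesis of complexity K bits."""
    return 2.0 ** -k_bits

def utility_bound(k_bits: float) -> float:
    """Assumed bound: |U(h)| grows only polynomially in the complexity of h."""
    return k_bits ** 2  # illustrative choice; any sub-exponential bound works

def max_contribution(k_bits: float) -> float:
    """Largest possible |P(h) * U(h)| for a hypothesis of complexity K bits."""
    return prior(k_bits) * utility_bound(k_bits)

# The mugger's scenario is cheap to *say* (the string "3^^^^3 disutilons" is
# short), but a low-level world-state actually containing that much structured
# disutility would have to be enormously complex, so its contribution is tiny.
for k in (10, 50, 100, 500, 1000):
    print(f"K = {k:5d} bits -> max |P*U| = {max_contribution(k):.3e}")
```

Under these (assumed) conditions, the product of prior and utility shrinks as the complexity of the claimed scenario grows, which is the opposite of what the mugger needs.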
This seems to imply that the intuition responsible for the problem is a kind of fake simplicity, ignoring the complexity of value (negative value in this case). A confusion of levels also appears implicated (talking about utility does not itself significantly affect utility; you don't suddenly make 3^^^^3-disutilon scenarios probable by talking about "3^^^^3 disutilons").
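The levels-confusion point can also be put in Bayesian terms with a toy calculation (illustrative numbers only): the mugger's utterance is evidence about the world, but the likelihood ratio it supplies is bounded by how surprising the utterance itself is, perhaps a few dozen bits, while a hypothesis in which the world really contains 3^^^^3 disutilons pays a vastly larger complexity penalty in the prior.

```python
# Toy Bayesian sketch: an utterance worth ~50 bits of evidence cannot rescue
# a hypothesis whose low-level description costs, say, a million bits.

def posterior_log2_odds(prior_penalty_bits: float, evidence_bits: float) -> float:
    """log2 posterior odds = -(prior complexity penalty) + (bits of evidence)."""
    return -prior_penalty_bits + evidence_bits

# Assumed numbers for illustration: a 10^6-bit world-state vs. a ~50-bit utterance.
print(posterior_log2_odds(prior_penalty_bits=1e6, evidence_bits=50))  # ~ -999950
```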
What do folks think of this? Any obvious problems?
I was kinda hoping you wouldn't ask that. This whole thing came up because I said it was "reasonable" to worry about the LHC, and I stick to that. But the whole thing seems like a Pascal's Mugging to me, and I don't have a perfect answer to that class of problem.
I don't think it should be switched off now, because its failure to destroy the world so far is even better evidence than the cosmic ray argument that it won't destroy the world the next time it's used. But if you'd asked before it was turned on? I guess I would agree with Aleksei Riikonen's point in one of the other LW threads that this is really the sort of thing that could be done just as well after the Singularity.
But I also agree with Eliezer (I could have avoided this entire discussion if I'd just been able to find that post the first time I looked for it, when you asked for a citation!) that in reality I wouldn't lose sleep over it. Basically, I notice I am confused; my only objection was to the suggestion that reasonable people couldn't worry about it, not a claim that I have any great idea how to address the issue myself.
You mean, asking the whole actual real-life question at hand: whether the LHC is too risky to run.
"Is it reasonable to think X?" is only a useful question to consider in relation to X as part of the actual discussion of X. It's not a useful sort of question in itself until it's applied to something. Without considering the X itself, it's a question about philosophy, not about the X. If you're going to claim something about the LHC, I expect you to be saying something useful about the LHC itself.