The situation described in Pascal's mugging is OOD (out-of-distribution) for human values. Human values have not been trained/tested on scenarios with tiny probabilities of vast utilities.
What answer does a system that goes OOD give us? It doesn't matter; we are not supposed to use a system in an OOD context.
Naively extrapolating human values too far is not permitted.
Giving an arbitrary/random answer is not permitted.
But we need to make some sort of decision, and we have nothing but our values to guide us.
But our values are not defined for the decision we are trying to make.
And we are not allowed to define our values arbitrarily.
I think the answer is really complex, and involves something like "taking all our values and meta-values into account, what is the least arbitrary way we can extend our value system into the space in which we are trying to make a decision?"
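If it helps to see the shape of that idea, here is a toy sketch in Python. This is my framing, not anything from the post: the candidate extensions, the known cases, and the "arbitrariness" penalty (standing in for our meta-values) are all hypothetical placeholders.

```python
def least_arbitrary_extension(candidates, known_cases, arbitrariness):
    """Pick the least arbitrary extension of our values into new territory.

    candidates:    value functions defined on the new (OOD) decision space
    known_cases:   (situation, verdict) pairs where our values already speak
    arbitrariness: penalty scoring how ad hoc an extension is (a stand-in
                   for our meta-values)
    """
    # Only keep extensions that agree with our values where they are defined.
    consistent = [v for v in candidates
                  if all(v(s) == verdict for s, verdict in known_cases)]
    # Among those, "least arbitrary" becomes a selection problem, not a free choice.
    return min(consistent, key=arbitrariness)
```

All the hard philosophical work is hidden inside `candidates` and `arbitrariness`; the point of the sketch is only that "extend our values in the least arbitrary way" is a constrained choice rather than an arbitrary one.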
So, my answer to Pascal's mugging is: human values are probably not yet ready to answer questions like that, at least not in a consistent manner.
Hmm. You are absolutely right, I didn't think of all these examples.
Let me rephrase:
I think probabilities on the order of 1/(3^^^3) are OOD for expected utility calculations.
We mostly don't care about expected utility for probabilities that small.
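To make that concrete, here is the naive calculation the mugger invites, with illustrative numbers of my own (nothing this specific appears in the thread):

$$\mathbb{E}[U] = p \cdot U,\qquad \text{e.g. } p = 10^{-30},\ U = 10^{40} \ \Rightarrow\ \mathbb{E}[U] = 10^{10}.$$

As long as the claimed utility grows faster than the assigned probability shrinks, the naive expected value is dominated by the wildest claim on the table, even though nothing in human experience tells us what a probability like $10^{-30}$ even means in practice.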
Pascal's mugging is bucketed either into "this is a scam" or "lottery ticket" by human values. And that is fine, unless this results in a contradiction with some of our other values. But I don't think it does.
Extremely high utility yes, extremely low probability no. Usually the idea is that you...