CarlShulman comments on Pascal's Mugging for bounded utility functions - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
"If you truly have a bounded utility function"
A truly bounded utility function doesn't assign any significant marginal weight, e.g. 0.00000000001, to TREE(100) years, to "the largest number I can formulate in a year" years, or to infinite years of fun.
It can only apply significant marginal weight for so long. Just wait until it gets to 100 - epsilon utils, and then mug.
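To make the shape of that failure concrete, here is a minimal sketch; the bound of 100 utils and the particular logarithmic saturating curve are my assumptions for illustration, not anything specified in the thread:

```python
import math

def bounded_utility(years: float, bound: float = 100.0) -> float:
    """Toy bounded utility of a lifespan in years.

    The bound of 100 utils and the logarithmic saturating shape are
    illustrative assumptions, not anything from the thread.
    """
    return bound * (1.0 - 1.0 / (1.0 + math.log1p(years)))

# Marginal weight on going from a million to a trillion years of fun:
marginal = bounded_utility(1e12) - bounded_utility(1e6)

# Once realized utility sits at (bound - epsilon), even astronomically
# larger outcomes add almost nothing -- which is exactly the point at
# which the agent becomes muggable in the sense above.
```

Any saturating curve behaves the same way: the closer the agent already is to the bound, the less any further outcome, however vast, can move it.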
Marginal weight at infinite years is interesting. That would likely mean that, after a certain amount of fun, you just put all your resources to trying to get infinite fun.
Yes, that was my point (maybe should have been more explicit about it).
Though it's also worth pointing out that with a utility function like Carl is alluding to (where utilities are significantly different if the lifespans are noticeably different to humans), if you've lived for 3^^^3 years and the universe looks capable of supporting 2^(3^^^3) years of fun but not more than that, you will worry more about the tiny probability of getting TREE(100) years than about realizing the 2^(3^^^3) that are actually possible.
To be more explicit: my take on this sort of thing is to smear out marginal utility across our conceptual space of such measures:
For years of life I would assign weight to at least (and more than) these regions:
I am also tempted to throw in some relative measures:
Simple conceptual space (that can be represented in a terrestrial brain) is limited, and if one cannot 'cover all the bases' one can spread oneself widely enough not to miss opportunities for easy wins when the gains "are...noticeably different to humans." And that seems pretty OK with me.
"Marginal weight at infinite years is interesting. That would likely mean that, after a certain amount of fun, you just put all your resources to trying to get infinite fun."
With these large finite numbers you exhaust all the possible brain states of humans or Jupiter-brains almost at the beginning. Then you have to cycle or scale your sensations and cognition up (which no one has suggested above), and I am not that much more motivated to be galaxy-sized and blissfully cycling than planet-sized and blissfully cycling. Infinite life-years could be qualitatively different from the ludicrous finite lifespans in not having an end, which is a feature that I can care about.
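The state-exhaustion point can be made concrete with a toy calculation (the bit counts below are purely illustrative): a system describable with N bits has at most 2^N distinguishable states, so any experience stream longer than that must revisit states.

```python
def max_distinct_states(bits: int) -> int:
    """Upper bound on the distinguishable states of a system of `bits` bits."""
    return 2 ** bits

# A 10-bit toy system can occupy at most 1024 distinct states, so any
# run of more than 1024 steps must repeat one.  The same pigeonhole
# argument applies to any finite brain: a lifespan of 3^^^3 years dwarfs
# 2**N for every physically plausible N, so almost all of such a
# lifespan consists of cycled states.
tiny_brain_states = max_distinct_states(10)
```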
Carl, thanks for writing this up! I may as well unpack and say that this is pretty much how I have been thinking about the problem, too (though I hadn't considered the idea of relative measures), and I still think I prefer biting the attendant bullets that I can see to the alternatives. But I do at least find it -- well -- worth pointing out that if we in fact achieve one of the higher strata, and we want to be time-consistent, it looks like we're going to stop living our lives on the mainline probability; i.e., if the universe is of size 3^^^3, it seems like we'll spend almost all of the available resources on trying to crack the matrix (even if there is no indication that we live in a matrix) and only an infinitesimal -- combinatorially small -- fraction on actually having fun.
Yes, I do think that this is probably what I will on reflection find to be the right thing, because the combinatorially small fraction pretty much looks like 3^^^3 from my current vantage point and even my middle-distance extrapolations, and as we self-modify to grow larger, since we want to be time-consistent and not regret being time-consistent, we'll design our future selves such that we'll keep feeling that this is the right tradeoff (i.e., this is much better than starting out with a near-certainty of not having fun at all, because our FAI puts all resources into trying to find infinite laws of physics). So perhaps it is simply appropriate (to humanity's utility function) that immense brains spend most of their resources guarding against events of infinitesimal probabilities. But it's sufficiently non-obvious that it at least seems worth keeping in mind.
(Also, amended the post with a note that by "4^^^^4", I really mean "whatever is so large that it is only epsilon away from the upper bound".)
Indeed.
These strike me as basically the same thing relative to my imagination. The biggest numbers mathematicians can describe using the fast-growing hierarchy for the largest computable ordinals are already too gigantic to... well... they're already too gigantic. Taking the Ackermann function as primitive, I still can't visualize the Goodstein sequence of 16, never mind 17, and I think that's somewhere around ω^(ω^2) in the fast-growing hierarchy.
The jump to uncomputable numbers / numbers that are unique models of second-order axioms would still be a large further jump, though.
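Since the thread leans on 3^^^3 and the Ackermann function, a short sketch of Knuth's up-arrow notation may help calibrate the growth rates involved. Only trivially small inputs are computable; this is an illustration, not a usable calculator for the numbers discussed above:

```python
def up_arrow(a: int, n: int, b: int) -> int:
    """Knuth's up-arrow: a followed by n up-arrows, then b.

    n=1 is exponentiation (a^b), n=2 is tetration (a^^b),
    n=3 is pentation (a^^^b), and so on.  3^^^3 = 3^^(3^^3) is a
    power tower of 7,625,597,484,987 threes -- far beyond computation.
    """
    if n == 1:
        return a ** b
    if b == 0:
        return 1
    return up_arrow(a, n - 1, up_arrow(a, n, b - 1))

# The last computable rungs: 3^^2 = 3^3 = 27, and
# 3^^3 = 3^27 = 7,625,597,484,987; one step further, 3^^4,
# already has over three trillion digits.
```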