I think you can get a high-quality 100k smartphone that will stay relevant for a long time, if you have it customized specifically for you. The problem is that customization is labor on the buyer's side, and many people lack the taste to perform such labor.
Chaitin's Number of Wisdom: knowledge looks like noise from the outside.
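For reference, and assuming the standard prefix-free construction: Chaitin's Ω for a universal machine U is the halting probability

\Omega_U = \sum_{p \,:\, U(p)\ \text{halts}} 2^{-|p|}

Its first n bits settle the halting problem for every program of length up to n, yet the bit sequence itself is algorithmically random, so from the outside that maximally concentrated knowledge passes every computable test for being noise.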
I express this by saying "sufficiently advanced probabilistic reasoning is indistinguishable from prophetic intuition".
Another thing is a narrow self-concept.
In the original thread, people often write about things they have that their clone would also want, like family. They fail to think about things they don't have because they have families, like cocaine orgies, or volunteering for a war for a just cause, or monastic life in search of enlightenment, so that they could flip a coin and go pursue the alternative life in 50% of cases. I suspect it's because thinking about desirable things you won't have on the best available course of your life is very sour-grapes-flavored.
When I say "human values" without further qualification, I mean "the type of things that a human-like mind can want, and their extrapolations". For example, a person blind from birth can want their vision restored, even if they have a sufficiently accommodating environment and other ways to orient, like echolocation. An able-bodied human can notice this and extrapolate it into possible new modalities of perception. You can be a non-vengeful person, but the concept of revenge makes sense to almost any human, unlike the concept of paperclip-maximization.
It's a nice ideal to strive for, but sometimes you need to make a judgement call based on things you can't explain.
Okay, but yumminess is not values. If we pick an ML analogy, yumminess is a reward signal or some other training hyperparameter.
My personal operationalization of values is "the thing that helps you navigate trade-offs". You can have yummy feelings about saving the life of your son or about saving the lives of ten strangers, but we can't say what you value until you consider a situation where you need to choose between the two. And, conversely, if you have good feelings about both parties and reading books, your values direct which one you choose.
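To make the ML-analogy version concrete, here's a minimal sketch (purely illustrative, with made-up outcomes and numbers): a reward-like signal attached to each option in isolation doesn't determine the choice; whatever function ends up ranking the options against each other under a forced trade-off is the part that plays the role of values.

```python
# Illustrative sketch only: the outcomes and numbers are invented for the example.

def yumminess(outcome: str) -> float:
    """Reward-like signal: each outcome feels good on its own."""
    return {"save_your_son": 1.0, "save_ten_strangers": 1.0}[outcome]

def learned_values(outcome: str) -> float:
    """Whatever ranking function training/reflection actually produced.
    The numbers are arbitrary placeholders."""
    return {"save_your_son": 0.9, "save_ten_strangers": 0.8}[outcome]

options = ["save_your_son", "save_ten_strangers"]

# Both options are maximally yummy, so yumminess alone can't break the tie:
print({o: yumminess(o) for o in options})

# Only the forced trade-off reveals the values:
print(max(options, key=learned_values))
```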
Choice in real, value-laden trade-offs is usually shaped by a significant amount of reflection about values, and the memetic ambience supplies ready-made summaries of such reflection from the past.
This reason only makes sense if you expect the first party to develop AGI to create a singleton that takes over the world and locks in its pre-installed values, which, again, I find not very compatible with a low p(doom). What prevents the scenario "AGI developers look around for a year after creating AGI and decide that they can do better", if not misaligned takeover and not suboptimal value lock-in?
It looks like the opposite? If 91% of the progress happened due to two changes in scaling regimes, who knows what is going to happen after the third.