My default answer to that is "all people alive at the time that the singularity occurs", although you pointed out a possible drawback to that (it incentivizes people to create more people with values similar to their own) in our previous discussion.
And also incentivizes people to kill people with values dissimilar to their own!
I don't think it would be terribly problematic. "People in the future should get exactly what we currently would want them to get if we were perfectly wise and knew their values and circumstances" seems like a pretty good rule. It is, after all, what we want.
Fair enough. Hmm.
Only a VNM-rational agent can have preferences in a coherent way, so if we're talking about aggregating people's preferences, I don't see any way to do it other than modeling people as having underlying VNM-rational preferences that fail to perfectly determine their decisions.
Non-VNM agents satisfying only axiom 1 have coherent preferences... they just don't mix well with probabilities.
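For readers without the axioms in front of them, here is the standard statement (numbering per the usual presentation of the theorem, so "axiom 1" above is completeness). For a preference relation $\preceq$ over lotteries $L, M, N$ and $p \in (0, 1]$:

```latex
\begin{align*}
\text{(A1) Completeness:}& \quad L \preceq M \ \text{or} \ M \preceq L. \\
\text{(A2) Transitivity:}& \quad L \preceq M \ \text{and} \ M \preceq N \implies L \preceq N. \\
\text{(A3) Continuity:}& \quad L \preceq M \preceq N \implies \exists\, p \in [0,1]:\ \\
& \quad\quad pL + (1-p)N \sim M. \\
\text{(A4) Independence:}& \quad L \preceq M \implies \\
& \quad\quad pL + (1-p)N \;\preceq\; pM + (1-p)N.
\end{align*}
```

An agent satisfying only (A1) can still rank any two options, which is coherence of a kind; it is (A3) and (A4), the axioms governing probabilistic mixtures of lotteries, that such an agent gives up, which is presumably what "don't mix well with probabilities" is pointing at.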