Personal vs Global CEV could also be mentioned here.
Upon reading the ideal advisor theories paper, an idea came to mind about how to protect CEV from Sobel's fourth objection, in which the ideal advisor recommends actions that would lead to death because it knows its original self would want to commit suicide after seeing how inferior and hopeless their life is compared to a perfect self's. If we limit the "better version of ourselves" to having only superior knowledge and skills, nothing we couldn't obtain ourselves given enough time and resources, then it wouldn't view us as disabled or hopeless, only misinformed. Hence there would be a way out, and the perfectly informed self would also know all the ways to improve the situation. So it wouldn't recommend a mercy death unless the original self already had suicidal tendencies. What a nice topic to discuss =P
Even so, while the outputs are still abstract and not-yet-computed, Alice doesn't have much ground to stand on when she appeals to Carol, Dennis, and Evelyn by saying, "But as a matter of morality and justice, you should have the AI implement my extrapolated volition, not Bob's!"
They may not have a moral argument, but they can surely have an argument.
Alice claims that they should democratically assign equal weight to each currently living person.
Bob claims that they should assign equal weight to all creatures which can plausibly be extrapolated.
Carol (who is rich) claims that they should assign weight based on current influence in the world.
Dennis (who is old-fashioned) claims that they should assign equal weight to all humans who have ever lived.
Evelyn (who has many children) claims that they should assign weight to the people who will exist in future generations.
And so on; this is a tiny fraction of the plausible alternatives. I don't think any of them is a strong Schelling point, and certainly none is so strong that you can't argue for one of the others.
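To make the arbitrariness concrete, here is a minimal sketch in Python (everything in it, the `Person` fields, the weight functions, the toy numbers, is hypothetical, invented for illustration rather than taken from any actual CEV proposal): each of the positions above is just a different weight function over the same pool of volitions, and the aggregation step itself gives no reason to prefer one.

```python
from dataclasses import dataclass, field

@dataclass
class Person:
    name: str
    alive: bool       # Alice weights only the currently living
    influence: float  # Carol's proxy: current influence in the world
    volition: dict = field(default_factory=dict)  # option -> degree of support

def alice_weights(pool):
    """Equal weight to each currently living person."""
    return {p.name: 1.0 if p.alive else 0.0 for p in pool}

def carol_weights(pool):
    """Weight proportional to current influence in the world."""
    return {p.name: p.influence if p.alive else 0.0 for p in pool}

def aggregate(pool, weight_fn):
    """Weighted sum of each option's support across the pool."""
    weights = weight_fn(pool)
    totals = {}
    for p in pool:
        for option, support in p.volition.items():
            totals[option] = totals.get(option, 0.0) + weights[p.name] * support
    return totals

pool = [
    Person("Alice", alive=True, influence=1.0, volition={"A": 1.0}),
    Person("Carol", alive=True, influence=9.0, volition={"B": 1.0}),
]
print(aggregate(pool, alice_weights))  # {'A': 1.0, 'B': 1.0} -- a tie
print(aggregate(pool, carol_weights))  # {'A': 1.0, 'B': 9.0} -- Carol prevails
```

Swapping the weight function flips the outcome while the rest of the machinery stays fixed, which is the sense in which none of these choices is forced by the formalism.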
You say that the purpose of not being a jerk is so that people can cooperate, rather than turning the development of AI into a conflict. If that's your goal, wouldn't the default approach be to give each individual enough influence to ensure that they have no incentive to defect? If you try to assign weight democratically, you are massively reducing the influence of many particular individuals, including almost every researcher, investor, and regulator. That does not seem like the most natural recipe for eliminating conflict!
As another way of putting it, suppose that I were to be made dictator of the world tomorrow. What should I do, if I wanted to not be a jerk? One proposal is to redistribute all resources equally amongst living humans. Another is to do nothing. People will justifiably object to both; I don't think there is a simple story about which is right (setting aside pragmatic concerns about feasibility).
You can try to get out of this by claiming that the pie is going to grow so much that this kind of conflict is a non-issue. I think that's true to the extent that people just want to live happy, normal lives. But many people have preferences over what happens in the world, not only about their own lives. From an aggregative altruistic perspective these are the preferences that are really important, and they are almost necessarily in tension, since realizing any of them demands some resources.
I doubt it will satisfy you, but see the added "Selfish bastards" and "Why include everyone" sections.