Assuming all of the following are possible: what would happen if every person had a superintelligent AI whose utility function was that person's idealized, extrapolated utility function?
One crazy nihilist with a destructive utility function would ruin the whole thing, by building a nuke or something. Offense wins decisively over defense.
Is it likely there would be a cooperative equilibrium among unmerged AIs?
Only if they were filtered to add restrictions or remove certain types of utility functions. And probably not even then, since AIs with evil utility functions could crop up randomly in that environment, from botched self-modifications or damage.
How would that compare to a scenario with a single AI embodying a successful calculation of CEV?
A single AI would be much better, since it could resolve all Prisoner's Dilemmas, coordination games, and ultimatum games in a way that's optimal, rather than merely Pareto-efficient.
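To make the Prisoner's Dilemma point concrete, here is a minimal sketch (using the standard textbook payoff values, which are illustrative and not from the original discussion): each player's best response is to defect no matter what the other does, so mutual defection is the equilibrium among separate agents, even though mutual cooperation Pareto-dominates it. A single coordinating AI could simply pick the better outcome.

```python
# Standard Prisoner's Dilemma payoffs (illustrative values):
# each entry is (row player's payoff, column player's payoff).
C, D = "cooperate", "defect"
payoffs = {
    (C, C): (3, 3),
    (C, D): (0, 5),
    (D, C): (5, 0),
    (D, D): (1, 1),
}

def best_response(opponent_action):
    """The action that maximizes one's own payoff, given the opponent's action."""
    return max([C, D], key=lambda a: payoffs[(a, opponent_action)][0])

# Defecting is the best response to either action...
assert best_response(C) == D and best_response(D) == D
# ...so (defect, defect) is the equilibrium for separate agents,
# yet (cooperate, cooperate) gives both players strictly more:
assert payoffs[(C, C)] > payoffs[(D, D)]
```

The point is that unmerged agents, each optimizing separately, get stuck at the (1, 1) outcome, while a single agent optimizing over both players' values can select (3, 3) directly.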
Is it possible to create multiple AIs such that one AI does not prevent others from being created, such as by releasing equally powerful AIs simultaneously?
Releasing equally powerful AIs simultaneously is very risky, because it gives them an incentive to rush their self-improvements through, rather than take their time to check them for errors. Also, one of the AIs would probably succeed in destroying the others; cybersecurity so far has been a decisive win for offense.
What would be different if a person or some few people did not have a superintelligence valuing what they would value, and only many people had their own AI?
Most people's utility functions include some empathy, which would cover for many people being excluded from counting directly. However, if a person doesn't have a superintelligence valuing what they would value, then some of their values will be excluded if no one else approves of them. This is mostly a good thing, since the values excluded this way would probably be destructive ones. However, people who were not included directly would lose out in any contention over scarce resources, which could become a serious problem for them if resources become scarce.
"One crazy nihilist..."
A more convenient possible world was alluded to when I asked about excluding some individuals.
"...equilibrium among unmerged AIs?"
"Only if..."
No merging?
"A single AI would be much better..."
Maybe, but I had also asked about the relative difficulty of calculating CEV and DEV. If DEV is easier (perhaps possible rather than impossible), that's an advantage for it.
"...one of the AIs would probably succeed in destroying the others; cybersecurity so far has been a decisive win for offense."
War is a risk; it includes the possibility of mutual destruction, particularly ...
Questions for discussion, with my tentative answers. Even assuming I am wrong about some things, there is something interesting to consider here. This is inspired by the recent SL4-type and CEV-centric topics in the discussion section.