Questions for discussion, with my tentative answers. Assuming I am wrong about at least some of this, there should be something interesting to consider. This is inspired by the recent SL4-type and CEV-centric topics in the discussion section.
Questions:
I
- Is it easier to calculate the extrapolated volition of an individual or a group?
- If it is easier for an individual, is that because the individual case is strictly simpler, in the sense that calculating humanity's CEV involves making at least every calculation that would be made for the extrapolated volition of one individual?
- How definitively can these questions be answered without knowing exactly how to calculate CEV?
II
- Is it possible to create multiple AIs such that no one AI prevents the others from being created, for example by releasing equally powerful AIs simultaneously?
- Is it possible to box AIs so that they reliably cannot escape before a certain (possibly short) period of time has passed, for example by giving them a low-cost way out with calculable minimum and maximum times to exploit that route?
- Is it likely there would be a cooperative equilibrium among unmerged AIs?
III
- Assuming all of the following is possible: what would happen if every person had a superintelligent AI whose utility function was that person's idealized extrapolated utility function?
- How would that compare to a scenario with a single AI embodying a successful calculation of CEV?
- What would be different if one person or a few people did not have a superintelligence valuing what they would value, while most people still had their own AI?
My Answers:
I
- It depends on the error level tolerated. If only very low error is tolerated, it is easier to do it for a group.
- N/A
- Not sure.
II
- Probably not.
- Maybe, but probably not, and it is impossible to know with high confidence.
- Probably not. Throughout history, offense has often been a step ahead of defense, with defense then catching up. I think this is not particular to evolutionary biology or to the technologies that happen to have been developed: it seems easier to break complicated things with many moving parts than to build and defend them. Also, specific technologies that people plausibly speculate may exist look more powerful offensively than defensively. I would expect the AIs to merge, probably peacefully; see the toy sketch below.
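
To make the "no cooperative equilibrium" intuition concrete, here is a minimal sketch, with illustrative payoff numbers I chose myself (nothing here comes from the CEV document). The assumption "offense is a step ahead of defense" is encoded by making a successful attack on a cooperator pay more than mutual cooperation, which turns the one-shot game between two unmerged AIs into a prisoner's dilemma:

```python
from itertools import product

# Toy model: two unmerged AIs each choose to cooperate or attack once.
ACTIONS = ("cooperate", "attack")

# PAYOFFS[(row_action, col_action)] = (row_payoff, col_payoff).
# "Offense ahead of defense" is encoded by attacking a cooperator (5)
# paying more than mutual cooperation (3).
PAYOFFS = {
    ("cooperate", "cooperate"): (3, 3),  # stable truce
    ("cooperate", "attack"):    (0, 5),  # defender loses
    ("attack",    "cooperate"): (5, 0),  # attacker wins
    ("attack",    "attack"):    (1, 1),  # mutual damage
}

def is_nash(row, col):
    """True if neither AI gains by unilaterally changing its action."""
    r, c = PAYOFFS[(row, col)]
    best_row = max(PAYOFFS[(a, col)][0] for a in ACTIONS)
    best_col = max(PAYOFFS[(row, a)][1] for a in ACTIONS)
    return r == best_row and c == best_col

for row, col in product(ACTIONS, repeat=2):
    if is_nash(row, col):
        print("Nash equilibrium:", row, col)  # only (attack, attack)
```

With these numbers, attack strictly dominates, so the one-shot game has no cooperative equilibrium, while merging replaces the game entirely, which is part of why I expect merger. Repeated interaction or enforceable commitments could restore cooperation, so this only formalizes the pessimistic case.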
III
- Hard to say, as that would be trying to predict the actions of more intelligent beings in a dynamic environment.
- It might be better, or it might be worse. The chance of the outcomes being similar seems notably high.
- Not sure.
From http://singinst.org/upload/CEV.html (I added some emphasis to explain why I understand it the way I do):
So coherence is something done after un-muddling.