JGWeissman comments on Guardian Angels: Discrete Extrapolated Volitions - Less Wrong Discussion
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Comments (9)
A singleton AI with individual CEVs for each human can do at least as well by simulating a negotiation among uniformly powerful individual AIs, one per CEV. This is more stable because the singleton's simulation enforces uniform levels of power, whereas actual independent AIs could diverge in power.
I don't think "individual CEV" is a proper term. It's like calling an ATM an "ATM organism", which would be even worse than the common error of calling it an "ATM machine". The "C" already means that individual extrapolated volitions are combined coherently.
I agree it would, in theory, be better to have a singleton. But that requires knowing how to cohere extrapolated volitions. My idea is that it might be possible to push that task off to superintelligences without destroying the world in the process.
While it would be useful to be able to split the "combine the wishes of different agents" part from the "act as if the agents were smarter and wiser" part, as CEV is currently described the "C" is still necessary even for an individual, because most organisms (most importantly, humans) do not have coherent value systems as they stand. So as it stands we need to say things like CEV&lt;wedrifid&gt; and CEV&lt;humanity&gt; for the label to make sense. The core of the problem here is that there are three important elements of the process that we are trying to represent with just two letters of the acronym.
Those three don't neatly separate into 'C' and 'E'.
From http://singinst.org/upload/CEV.html (I have added some emphasis to explain why I understand it the way I do):
So coherence is something done after un-muddling.