Giles comments on Objections to Coherent Extrapolated Volition - Less Wrong
It would have to turn itself off to fix the problem I am worried about. The problem is the existence of an oracle. The problem is that the first ultraintelligent machine is the last invention that man need ever make.
To fix that problem we would have to turn ourselves into superintelligences rather than creating a singleton. As long as there is a singleton that does everything that we (humanity) want, and as long as we are inferior to it, all remaining problems are artificially created problems that we have chosen to solve the slow way.
I agree that if this were to happen, it seems like a bad thing (I'll call this the "keeping it real" preference). But it seems like the point when this happens is the point where humanity has the opportunity to create a value-optimizing singleton, not the point where it actually creates one.
In other words, if we could have built an FAI to solve all of our problems for us but didn't, then any remaining problems are in a sense "artificial".
But they seem less artificial in that case. And if there is a continuum of possibilities between "no FAI" and "FAI that immediately solves all our problems for us", then the FAI may be able to strike a happy balance between "solving problems" and "keeping it real".
That said, I'm not sure how well CEV addresses this. I guess it would treat "keeping it real" as a human preference and try to satisfy it along with everything else. But it may be that by the time it even gets to that stage, the ability to "keep it real" has been permanently lost.