TheOtherDave comments on Objections to Coherent Extrapolated Volition - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Comments (56)
It would have to turn itself off to fix the problem I am worried about. The problem is the existence of an oracle. The problem is that the first ultraintelligent machine is the last invention that man need ever make.
To fix that problem we would have to turn ourselves into superintelligences rather than creating a singleton. As long as a singleton exists that does everything we (humanity) want, and as long as we remain inferior to it, every remaining problem is an artificially created one that we have chosen to solve the slow way.
It doesn't have to turn itself off, it just has to stop taking requests.
Come to think of it, it doesn't even have to do that. If I were such an oracle and the world were as you describe it, I might well establish the policy that before I solve problem A on humanity's behalf, I require that humanity solve problem B on their own behalf.
Sure, B is an artificially created problem, but it's artificially created by me, and humanity has no choice in the matter.
Or it could even focus on the most pressing problems, and leave stuff around the margins for us to work on. Just because it has vast resources doesn't mean it has infinite resources.