hairyfigment comments on Objections to Coherent Extrapolated Volition - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
It would have to turn itself off to fix the problem I am worried about. The problem is the existence of an oracle. The problem is that the first ultraintelligent machine is the last invention that man need ever make.
To fix that problem we would have to turn ourselves into superintelligences rather than create a singleton. As long as there is a singleton that does everything that we (humanity) want, and as long as we remain inferior to it, all remaining problems are artificial ones that we have chosen to solve the slow way.
I've told you before this seems like a false dichotomy. Did you give a counterargument somewhere that I missed?
Seems to me the situation has an obvious parallel in the world today. And since children would presumably still exist when we start the process, they can serve as more than a metaphor. Now children sometimes want to avoid growing up, but I don't know of any such case we can't explain as simple fear of death. That certainly suffices for my own past behavior. And you assume we've fixed death through CEV.
It therefore seems like you're assuming that we'd desire to stifle our children's curiosity and their desire to grow, rather than letting them become as smart as the FAI and perhaps dragging us along with them. Either that or you have some unstated objection to super-intelligence as a concrete ideal for our future selves to aspire to.
They can be afraid of having to deal with adult responsibilities, or the physical symptoms of aging after they've reached their prime.