DSimon comments on Objections to Coherent Extrapolated Volition - Less Wrong
It would have to turn itself off to fix the problem I am worried about. The problem is the existence of an oracle. The problem is that the first ultraintelligent machine is the last invention that man need ever make.
To fix that problem we would have to turn ourselves into superintelligences rather than create a singleton. As long as there is a singleton that does everything that we (humanity) want, and as long as we remain inferior to it, all possible problems are artificially created problems that we have chosen to solve the slow way.
I am basically all right with that, considering that "artificial problems" would still include social challenge. Much of art, sport, games, competition, and the other enjoyable aspects of multi-ape systems would probably still go on in some form; certainly, as I understand the choices available to me, I would prefer that they do.
Yes, absolutely I can.
Right now, we write and read lots and lots of fiction about periods from the past that we would not like to live in, or about variations on those periods with some extra fun stuff (e.g. magic spells, fire-breathing dragons, benevolent non-figurehead monarchies) that are nonetheless not safe or comfortable places to live. It can be very entertaining and engaging to read about worlds that we ourselves would not want to be stuck in, such as a historical-fantasy novel about the year 2000, when people could still die of natural causes despite having flying broomsticks.