I am here to propose to you today that we should not balance the risks and opportunities of advanced artificial intelligence. We should welcome the risks and remain blind to the opportunities. We should needlessly confront entirely unnecessary dangers. To achieve these goals, we must plan stupidly and irrationally. We should act out of fear and panic, and give in to technophobia; or else we should act out of blind enthusiasm. We should respect the interests of only some of the parties with a stake in the Singularity. We must try to ensure that the benefits of advanced...
I don't know; it seems as though Wired magazine understands my hopes for the future pretty well. Where is the scary part?