TheOtherDave comments on Reply to Holden on The Singularity Institute - Less Wrong
Lately I've been wondering whether it would make more sense to simply try to prevent the development of AGI rather than work to make it "friendly," at least for the foreseeable future. My thought is that AGI carries substantial existential risks; developing other innovations first might reduce those risks, and anything we can do to bring about such reductions is worth even enormous costs. In other words, if it takes ten thousand years to develop social or other innovations that would reduce the risk of terminal catastrophe by even 1% when AGI is finally developed, then that delay is well worth it.
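The quantitative claim here can be made concrete with a back-of-the-envelope expected-value comparison. The sketch below is only illustrative, and every parameter value is a hypothetical placeholder rather than an estimate; it just shows the structure of the argument: if the value of the long-term future is large enough, even a 1% reduction in catastrophe risk dwarfs ten thousand years of foregone value.

```python
# Back-of-the-envelope expected-value comparison for delaying AGI.
# All parameter values are hypothetical placeholders, not estimates.

V = 1e15              # assumed total value of the long-term future (arbitrary units)
annual_value = 1e7    # assumed value realized per year of civilization during the delay
delay_years = 10_000  # the delay contemplated in the argument
p_doom_now = 0.10     # assumed risk of terminal catastrophe if AGI is built now
risk_reduction = 0.01 # the 1% reduction from developing other innovations first

# Expected value if we build AGI now, versus after the delay.
ev_now = (1 - p_doom_now) * V
cost_of_delay = annual_value * delay_years
ev_delayed = (1 - (p_doom_now - risk_reduction)) * V - cost_of_delay

print(f"EV build now:      {ev_now:.3e}")
print(f"EV build after delay: {ev_delayed:.3e}")
print(f"Delay is worth it: {ev_delayed > ev_now}")
```

Note that this sketch ignores any risks that accrue during the delay itself, which is one place the argument can be attacked.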
Bostrom has mentioned surveillance, information restriction, and global coordination as ways of reducing risk (and I will add space exploration to make SIRCS), so why not focus on those right now instead of AGI? The same logic goes for advanced nanotechnology and biotechnology: why develop any of these risky bio- and nanotechnologies before SIRCS? Do we think that effort spent trying to inhibit the development of AGI/bio/nano would be wasted because they are inevitable, or at least so difficult to derail that "friendly" AI is our best shot? If so, where has a detailed argument for this been made? Can someone point me to it? Or maybe we think SIRCS (especially surveillance) cannot be adequately developed without AGI/bio/nano? But surely global coordination and information restriction do not depend much on technology, so even without surveillance and with limited space exploration, it would still make sense to further the others as much as possible before finally proceeding with AGI/bio/nano.
Do you see any reason to believe this argument wasn't equally sound (albeit with different scary technologies) thirty years ago, or a hundred?
Thirty years ago it may still have been valid, although difficult to make, since nobody knew about the risks of AGI or self-replicating assemblers. A hundred years ago it would not have been valid in this form, since we lacked surveillance and space exploration technologies.
Keep in mind that we have a certain survivorship bias on this question: we happen to have survived up to this point in history, but there is no guarantee of that in the future.