KnaveOfAllTrades comments on 2013 Survey Results - Less Wrong

74 Post author: Yvain 19 January 2014 02:51AM




Comment author: XiXiDu 19 January 2014 01:33:48PM 5 points

Unfriendly AI: 233, 14.2%

Nanotech/grey goo: 57, 3.5%

Could someone who voted for unfriendly AI explain how nanotech or biotech isn't much more of a risk than unfriendly AI (I'll assume MIRI's definition here)?

I ask this question because it seems to me that even given a technological singularity, there should be enough time for "unfriendly humans" to use precursors to fully fledged artificial general intelligence (e.g. advanced tool AI) to solve nanotechnology or advanced biotech. These technologies would themselves enable unfriendly humans to cause a number of catastrophic risks (e.g. pandemics, nanotech wars, or perfect global surveillance amounting to an eternal tyranny).

Unfriendly AI, as imagined by MIRI, seems to be the end product of a developmental process that provides humans ample opportunity to wreak havoc.

I just don't see any good reason to believe that the tools and precursors to artificial general intelligence are not themselves disruptive technologies.

And in case you believe advanced nanotechnology to be infeasible, but unfriendly AI to be an existential risk, what concrete scenarios do you imagine in which such an AI causes human extinction without nanotech?

Comment author: KnaveOfAllTrades 19 January 2014 02:15:31PM 4 points

I think a large part of that may simply be LWers being more familiar with UFAI and therefore knowing more details that make it seem like a credible threat (the availability heuristic). So for example I would expect Eliezer's estimate of the gap between the two to be smaller than the LW average. (Edit: Actually, I don't mean that his estimate of the gap would be lower, but rather that it would seem like less of a non-question to him and he would take nanotech a lot more seriously, even if he still came down firmly on the side of UFAI being the bigger concern.)