JoshuaZ comments on Q&A with new Executive Director of Singularity Institute - Less Wrong

Post author: lukeprog 07 November 2011 04:58AM




Comment author: JoshuaZ 07 November 2011 06:15:28AM, 8 points

Since a powerful AI would likely spread its influence throughout its future lightcone, rogue AIs are unlikely to be a major part of the Great Filter (although Doomsday-Argument-style anthropic reasoning and observer considerations do potentially imply problems in the future, which could include AI). One major suggested existential risk and filtration issue is nanotech. Moreover, easy nanotech is a major part of many scenarios in which AIs go foom. Given this, should the SIAI be evaluating the practical limitations and risks of nanotech, or are there enough groups already doing so?

Comment author: timtyler 07 November 2011 01:25:10PM, 1 point

The first point looks like this one. The case for the Doomsday Argument implying problems looks weak to me. It just says that there probably won't be many humans around in the future. However, IMO, that is pretty obvious: humans are unlikely to persist far into an engineered future.