JoshuaZ comments on The hard limits of hard nanotech - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
These scenarios assume an AGI directing them. And an unfriendly AGI is an existential risk with or without nano.
It might be an existential risk in general, but without nanotech the space of things an unfriendly AGI can do shrinks considerably. Lack of practical nanotech also reduces the chance of a FOOM (rapid recursive self-improvement).