Aleksei_Riikonen comments on Be a Visiting Fellow at the Singularity Institute - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Comments (156)
New here :(
But how do they plan to stop an AI apocalypse, or is that one of those things they haven't figured out yet? I think the best bet would be to create AI first, and then use it to make safe AI and to devise plans for stopping an AI apocalypse.
I recommend you read the "Brief Introduction" mentioned in the post you're commenting on:
http://singinst.org/riskintro/index.html