wedrifid comments on Link: Interview with Vladimir Vapnik - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Comments (6)
Scary thought: what if the rules for AI are so complex that it's impossible to build one, or to prove that an AI will be stable and/or friendly? If that turns out to be the case, then the singularity will never happen, and we have an explanation for the Fermi paradox.
Nah, it's all good. We'll just 'shut up and' 'use the try harder'.