Gunnar_Zarncke comments on How to Study Unsafe AGI's safely (and why we might have no choice) - Less Wrong
If the outputs look like those of a pro-human, friendly AI, then you have what you want and can just leave it in the sandbox. It does everything you want, doesn't it?