Punoxysm comments on How to Study Unsafe AGI's safely (and why we might have no choice) - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
It's just a little color and context.
You don't have to believe anything except that either (or both):
- We gain the capacity to develop AI before we can guarantee friendliness, and some organization attempts to develop maybe-unsafe AI.
- Redundant safety measures are good practice.