Squark comments on How to Study Unsafe AGI's safely (and why we might have no choice) - Less Wrong
If you place too many restrictions, you will probably never reach human-like intelligence.
If you do it frequently, you won't reach human-like intelligence in a reasonable span of time. If you do it infrequently, you will miss the transition to superhuman intelligence, and by then it will be too late.
A coherent, large, well-funded effort can still make a fatal mistake. The Challenger program was such an effort. The Chernobyl power plant was such an effort. Trouble is, this time the stakes are much higher.