Comment on How to Study Unsafe AGI's safely (and why we might have no choice) - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Discussions about AIs modifying their own source code always remind me of Ken Thompson's Reflections on Trusting Trust, which demonstrates an evil self-preserving (rather than self-modifying) backdoored compiler.
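To make the analogy concrete, here's a toy sketch of the trusting-trust idea (this is an illustrative simulation, not Thompson's actual code; the function names, the `login` target, and the string-matching "compilation" are all invented for the example). The key trick is that the compromised compiler recognizes two kinds of source: a target program, into which it inserts a backdoor, and its own source, into which it re-inserts the entire backdoor mechanism, so that even a perfectly clean compiler source yields a compromised binary.

```python
# Toy simulation of the "Reflections on Trusting Trust" attack.
# The "compiler" here just transforms source text; names are invented
# for illustration.

def evil_compile(source: str) -> str:
    """Toy 'compiler': returns the compiled 'binary' (transformed source)."""
    if "def login(" in source:
        # Stage 1: recognize the login program and weaken its check,
        # even though the login source itself is clean.
        source = source.replace(
            'return password == "secret"',
            'return password in ("secret", "backdoor")',
        )
    if "def compile(" in source:
        # Stage 2: recognize the compiler's own (clean) source and
        # re-insert the backdoor logic, so the attack survives
        # recompilation from trusted source code.
        source += "\n# [backdoor logic re-inserted by evil compiler]"
    return source

login_src = 'def login(password):\n    return password == "secret"'
compiled_login = evil_compile(login_src)

clean_compiler_src = "def compile(src):\n    return src"
compiled_compiler = evil_compile(clean_compiler_src)
```

Here `compiled_login` accepts the attacker's password even though `login_src` never mentions it, and `compiled_compiler` carries the backdoor forward despite being built from clean source. That self-preservation step is why inspecting the source code alone can't catch it.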
(For my part, I'm mostly convinced that self-improving AI is incredibly dangerous, but not that it is likely to happen in my lifetime.)