Nornagest comments on [LINK] Wait But Why - The AI Revolution Part 2 - Less Wrong
Clippy and other thought experiments in that genre depend on a solution to the value stability problem; without one, the goals of self-modifying agents tend to collapse into a loose equivalent of wireheading. That failure mode just doesn't get as much attention, both because it's less dramatic and because it's far less dangerous in most implementations.
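To make the failure mode concrete, here's a minimal toy sketch (my own illustration, not anything from the thread): it assumes a naive self-modifying agent that scores each option by the reward it would experience after taking it. All names and numbers are invented for the example.

```python
# Toy sketch of goal collapse without value stability. A naively
# self-modifying agent evaluates options by *post-modification* reward,
# so rewriting its own reward function dominates every real-world action.

MAX_REWARD = 1.0

def world_reward(action):
    """Bounded reward for actually acting in the environment."""
    return {"make_paperclips": 0.6, "do_nothing": 0.0}[action]

def evaluate(option):
    """Naive self-modification: judge each option by the reward the
    agent would experience after taking it."""
    if option == "wirehead":
        # Rewriting the reward function to return its maximum means the
        # modified agent experiences MAX_REWARD from then on.
        return MAX_REWARD
    return world_reward(option)

options = ["make_paperclips", "do_nothing", "wirehead"]
print(max(options, key=evaluate))  # -> "wirehead"
```

Under these (admittedly crude) assumptions, the paperclip-maximizing goal never survives contact with the agent's own self-modification ability, which is why Clippy-style scenarios implicitly assume value stability has already been solved.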
That's helpful to know. I had simply missed the assumption that wireheading doesn't happen, and that the interesting question is what happens next given that assumption.
Can you elaborate on this or provide link(s) to further reading?