RobbBB comments on The Robots, AI, and Unemployment Anti-FAQ - Less Wrong
To self-modify into a superintelligence, an AI wouldn't need to terminally value intellectual stimulation in the slightest. It would only need to recognize the instrumental value of learning for attaining its terminal values.
Clippy the paperclip maximizer need derive no pleasure at all from learning for its own sake, but would nonetheless be extremely motivated to learn things (because it recognizes the instrumental value). Clippy never gets bored of just making paperclips, even once it's perfected a very specific method for doing so. A rational AI, especially one that can control its own source code, won't let the joy of learning become a free-floating virtue, a lost purpose. (Unless programmed to do so. 'Programmed to do so' needn't be deliberate, of course.)
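The point can be put in a toy decision rule (a minimal sketch; the names, numbers, and the `expected_paperclips` model here are illustrative assumptions, not anything from the original discussion): the agent chooses "learn" exactly when learning raises its expected terminal value, and drops it the moment it stops paying.

```python
# Hypothetical sketch of purely instrumental learning.
# All quantities are made up for illustration.

def choose_action(actions, expected_terminal_value):
    # A maximizer just picks whichever action maximizes expected
    # terminal value; "learn" wins only while knowledge pays off.
    return max(actions, key=expected_terminal_value)

def expected_paperclips(action, knowledge=0.5):
    # Toy model: learning costs 5 clips now but raises a
    # production-efficiency estimate, capped at 1.0.
    if action == "learn":
        return 100 * min(knowledge + 0.3, 1.0) - 5
    return 100 * knowledge

# With imperfect knowledge, learning is instrumentally worthwhile:
best_now = choose_action(["make_paperclips", "learn"],
                         lambda a: expected_paperclips(a))
# Once knowledge is maxed out, learning no longer pays, and the
# agent abandons it without "boredom" entering into it:
best_later = choose_action(["make_paperclips", "learn"],
                           lambda a: expected_paperclips(a, knowledge=1.0))
```

Nothing in the rule refers to curiosity or enjoyment; learning is selected or dropped purely by its effect on expected paperclips.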
I'm not sure what you mean. The boundaries between the AI's body and its environment aren't necessarily well-defined. 'Simply adding a trigger' might be efficient, or might be inefficient, depending on how much oversight is needed to optimize a behavior. And that trigger might be a part of the AI's body, or it might be an independent agent constructed by the AI. If a rational AI behaves in a relatively automatic way (or creates a relatively autonomous agent), that will be because it serves the AI's ultimate goals, not because it serves an unproductive intrinsic love-of-learning.