OrphanWilde comments on Evaluating the feasibility of SI's plan - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Hence my second paragraph: goals are inherently dangerous things to give AIs, especially open-ended goals that would require ever-greater intelligence to resolve.
AIs that can't be described by attributing goals to them don't really seem too powerful (after all, intelligence is about steering the world in some direction; this is the only property that distinguishes an AGI from a rock).
Evolution and capitalism are both non-goal-oriented, extremely powerful optimization processes. Goals are only one form of motivator.