OrphanWilde comments on Evaluating the feasibility of SI's plan - Less Wrong

Post author: JoshuaFox 10 January 2013 08:17AM



Comment author: OrphanWilde 10 January 2013 05:46:02PM 1 point

Hence my second paragraph: goals are inherently dangerous things to give AIs — especially open-ended goals that would require an ever-better intelligence to resolve.

Comment author: latanius 11 January 2013 02:53:13AM 0 points

AIs that can't be described by attributing goals to them don't really seem too powerful (after all, intelligence is about making the world go in some direction; this is the only property that distinguishes an AGI from a rock).

Comment author: OrphanWilde 11 January 2013 08:59:01PM 1 point

Evolution and capitalism are both non-goal-oriented, extremely powerful optimization processes. Goals are only one form of motivator.