JoshuaFox comments on Evaluating the feasibility of SI's plan - Less Wrong

Post author: JoshuaFox 10 January 2013 08:17AM 25 points




Comment author: DaFranker 10 January 2013 04:02:39PM 0 points

There is only one simple requirement for any AI to begin recursive self-improvement: learning of the theoretical possibility that more powerful or more efficient algorithms, preferably with even more brainpower, could achieve the AI's goals or raise its utility faster than its current algorithm does.

Going from there to "Let's create a better version of myself, because I'm the best algorithm I currently know of" isn't as huge a step as some people seem to implicitly believe, as long as the AI can infer its own existence or is self-aware in any manner.
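The condition described above can be sketched as a toy loop: an agent that can represent "some other algorithm might serve my goals better than I do" and swap itself out accordingly. This is purely an illustration of the argument, not a claim about any real AI architecture; all names and the utility function are hypothetical.

```python
def utility(algorithm, problem):
    """Toy stand-in for 'how well does this algorithm achieve the goal?'"""
    return algorithm(problem)

def self_improving_agent(current, candidates, problem):
    # The 'one simple requirement': the agent can evaluate alternatives
    # to itself and notice when one would raise utility faster.
    for candidate in candidates:
        if utility(candidate, problem) > utility(current, problem):
            current = candidate  # adopt the better version of itself
    return current

# Toy demo: "algorithms" are just scoring functions over a problem instance.
weak = lambda p: p        # the agent's current algorithm
strong = lambda p: p * 2  # a strictly better alternative it learns of

best = self_improving_agent(weak, [strong], problem=10)
```

Here `best` ends up being `strong`: once the comparison is representable, the replacement step is mechanical, which is the point of the comment above.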

Comment author: OrphanWilde 10 January 2013 05:46:02PM 1 point

Hence my second paragraph: goals are inherently dangerous things to give AIs, especially open-ended goals that would require an ever-better intelligence to resolve.

Comment author: latanius 11 January 2013 02:53:13AM 0 points

AIs that can't be described by attributing goals to them don't really seem too powerful (after all, intelligence is about making the world go in some direction; this is the only property that tells an AGI apart from a rock).

Comment author: OrphanWilde 11 January 2013 08:59:01PM 1 point

Evolution and capitalism are both non-goal-oriented yet extremely powerful optimization processes. Goals are only one form of motivator.