amcknight comments on Is an Intelligence Explosion a Disjunctive or Conjunctive Event? - Less Wrong Discussion

12 Post author: XiXiDu 14 November 2011 11:35AM

Comment author: amcknight 15 November 2011 02:43:12AM, 2 points

Nice, well-written post. You show that AI risk may be unlikely, because recursive self-improvement could be a conjunctive scenario. But without a better sketch of which conjunctions are required for recursive self-improvement (or AGI), you've only succeeded in keeping that possibility open, not in actually arguing for a lack of risk. I think you've created a great starting point, a Hypothetical Apostasy, for those here who believe strongly in SIAI. Ultimately, though, a healthy discussion of the actual conjunctions involved is what it now takes to decide whether there are risks from AI.

My (10 minutes attempted) challenge to whether there exists a conjunction:

  • Self-improvement is a useful instrumental goal for most imaginable systems with goals.
  • Recursive improvement is implied by the huge room for improvement of... pretty much anything, but specifically, systems with goals. (EDIT: XiXiDu's next post addresses and disagrees with this)
  • AI programmers are creating systems with goals.
  • Such a system might someday be powerful/intelligent enough to realize many of its instrumental goals.

That seems to be all it takes. Are there other relevant factors I'm forgetting? I'd say the first three premises each have a probability of .98+. The fourth is what SIAI is trying to deal with.
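The arithmetic behind the ".98+" estimate can be sketched out. As an illustration (the independence assumption and the flat 0.98 figure are assumptions, not from the original comment): if each of the first three premises independently held with probability 0.98, their conjunction would still be quite probable, which is why the argument hinges on the fourth premise.

```python
# Sketch: under an assumed independence of premises, the probability of a
# conjunction is the product of the individual premises' probabilities.
p_premises = [0.98, 0.98, 0.98]  # assumed point estimate for premises 1-3

p_conjunction = 1.0
for p in p_premises:
    p_conjunction *= p

# 0.98 ** 3 = 0.941192 -- the conjunction stays above 94%
print(round(p_conjunction, 4))
```

Note that this is the conjunctive structure the post is about: each added premise can only lower the joint probability, so a scenario requiring many premises is less likely than any single one of them.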