
alex_zag_al comments on Permitted Possibilities, & Locality - Less Wrong

Post author: Eliezer_Yudkowsky, 03 December 2008 09:20PM


Comment author: alex_zag_al, 14 November 2015 09:14:48AM

"Here are some things that shouldn't happen, on my analysis: An ad-hoc self-modifying AI as in (1) undergoes a cycle of self-improvement, starting from stupidity, that carries it up to the level of a very smart human - and then stops, unable to progress any further."

I'm sure this has been discussed elsewhere, but to me it seems possible that progress may stop when the mind becomes too complex to make working changes to.

I used to think that a self-improving AI would foom because as it gets smarter, it gets easier for it to improve itself. But it may get harder for it to improve itself, because as it self-improves it may turn itself into more and more of an unmaintainable mess.

What if creating unmaintainable messes is the only way that intelligences up to very-smart-human level know how to create intelligences up to very-smart-human level? That would make that level a hard upper limit on a self-improving AI.
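
To make the worry concrete, here is a toy simulation of the dynamic I'm describing (my own sketch, with made-up numbers, not anything from the post): each self-modification attempt succeeds with a probability that shrinks as the system's accumulated complexity grows, and each success adds both capability and complexity.

```python
import random

random.seed(0)

capability = 1.0   # abstract "intelligence" level
complexity = 1.0   # abstract measure of how tangled the system has become

for step in range(1, 201):
    # The chance that an attempted self-modification actually works
    # shrinks as the accumulated mess grows.
    p_success = 1.0 / complexity
    if random.random() < p_success:
        capability += 0.1   # the improvement itself...
        complexity += 0.5   # ...bought at the price of more mess
    if step in (10, 50, 200):
        print(f"step {step:3d}: capability={capability:.2f}, "
              f"complexity={complexity:.2f}, p_success={p_success:.2f}")
```

Under these (arbitrary) assumptions the gains per attempt fall toward zero instead of compounding: capability climbs quickly at first and then effectively stalls, which is the hard-upper-limit picture rather than the foom picture.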