billswift comments on Rationality Quotes: October 2009 - Less Wrong

7 Post author: Eliezer_Yudkowsky 22 October 2009 04:06PM


Comment author: billswift 24 October 2009 04:18:10PM 0 points

The problem is the size of the system relative to human cognition. Specialization and management can increase the size of the system we can manage, but not without limit. That is why a self-improving AI is a potential threat: it can increase the size of the system it can manage well beyond what we can understand. It is also why I don't think provably Friendly AI is possible (though I hope I am wrong about that), and why I expect GAI to be developed incrementally, either from specialized AIs or from general but less-than-intelligent systems. It is also what gives me some hope that intelligence amplification can keep up with GAIs, at least for a while: we don't need to start from scratch, just keep increasing the size of the systems we can manage.

Comment author: Vladimir_Nesov 24 October 2009 04:42:11PM * 2 points

Control and knowledge don't care about scale. One can learn stuff about whole galaxies by observing them. When you want to "manage" an AI, the complexity of your concern is restricted to the complexity of your wish.

Comment author: billswift 24 October 2009 08:45:33PM 0 points

Size, in describing a system, isn't about physical scale; it's the number of interacting components and the complexity of their interactions. And I don't understand what you mean in your second sentence; it doesn't make sense to me.

Comment author: Vladimir_Nesov 24 October 2009 09:27:51PM * 0 points

A galaxy also isn't "just" about scale: it does contain more stuff, more components (but how do you know that, and what does it mean?). As for the second sentence: I meant using a telescope to make precise observations.