
amcknight comments on Why an Intelligence Explosion might be a Low-Priority Global Risk - Less Wrong Discussion

3 Post author: XiXiDu 14 November 2011 11:40AM


Comments (94)

Comment author: amcknight 15 November 2011 07:25:59PM, 1 point

I don't really see why solving these kinds of difficult problems is relevant. A system could still recursively self-improve to solve a vast number of easier problems. That being said, I'd probably still be interested in anything relating complexity classes to intelligence.

Comment author: JoshuaZ 16 November 2011 04:14:05PM *, 0 points

A system could still recursively self-improve to solve a vast number of easier problems.

Well, the point here is that the problems involved in recursive self-improvement themselves appear to fall into these difficult classes. For example, designing circuit boards involves a version of the traveling salesman problem, which is NP-complete. Similarly, memory management and hardware design involve graph coloring, which is also NP-complete.
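(A minimal sketch, not from either commenter, of the graph-coloring example: finding an optimal coloring is NP-complete, yet a greedy heuristic colors any graph quickly, just not always with the fewest colors. This is roughly the trade-off at issue: a system need not solve the hard problem exactly to get a usable answer. The function name and example graph are illustrative choices.)

```python
def greedy_coloring(adjacency):
    """Assign each vertex the smallest color unused by its neighbors.

    adjacency: dict mapping each vertex to the set of its neighbors.
    Returns a dict mapping vertex -> color (non-negative int).
    """
    colors = {}
    # Visit vertices in order of decreasing degree (Welsh-Powell heuristic).
    for v in sorted(adjacency, key=lambda v: -len(adjacency[v])):
        used = {colors[u] for u in adjacency[v] if u in colors}
        color = 0
        while color in used:
            color += 1
        colors[v] = color
    return colors

# A 4-cycle is optimally 2-colorable; the heuristic happens to find
# an optimal coloring here, though on other graphs it may use more
# colors than the (NP-hard to compute) minimum.
cycle4 = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
coloring = greedy_coloring(cycle4)
print(max(coloring.values()) + 1)  # → 2 colors used
```

The heuristic runs in roughly O(V + E) time after sorting, which is why compilers use greedy coloring for register allocation in practice despite the exact problem being NP-complete.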