
olalonde comments on Irrationality Game II - Less Wrong Discussion

13 [deleted] 03 July 2012 06:50PM




Comment author: TimS 04 July 2012 11:53:38PM 2 points

To follow up on what olalonde said, there are problems that appear to get extraordinarily difficult as the number of inputs increases. Wikipedia suggests that the best known exact algorithms for the traveling salesman problem run in time on the order of O(2^n), where n is the number of cities. Saying that adding computational ability resolves these issues for an actual AGI implies one of the following:

1) AGI trying to FOOM won't need to solve problems as complicated as traveling salesman type problems, or

2) AGI trying to FOOM will be able to add processing power at a rate reasonably near O(2^n), or

3) In the process of FOOM, an AGI will be able to prove P=NP or some similarly revolutionary result.

None of those seems particularly plausible to me. So for reasonably sized n, an AGI will not be able to solve such problems appreciably better than humans can.
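To make the scaling concrete, here is a small sketch (an editorial illustration, not part of the original comment). The O(2^n) figure above refers to exact dynamic-programming approaches such as Held-Karp, which run in O(n^2 * 2^n); the naive brute force shown below is even worse, checking all (n-1)! tours, which is why simply "adding processing power" stops helping so quickly.

```python
from itertools import permutations

def brute_force_tsp(dist):
    """Return the length of the shortest tour visiting every city once.

    dist[i][j] is the distance from city i to city j; the tour starts
    and ends at city 0. This checks all (n-1)! permutations, so it is
    workable for n around 10 and hopeless for n around 25.
    """
    n = len(dist)
    best = float("inf")
    for perm in permutations(range(1, n)):
        tour = (0,) + perm + (0,)
        length = sum(dist[a][b] for a, b in zip(tour, tour[1:]))
        best = min(best, length)
    return best

# Example: 4 cities at the corners of a unit square (hypothetical data).
# The optimal tour is the perimeter, with length 4.
r2 = 2 ** 0.5
square = [
    [0, 1, r2, 1],
    [1, 0, 1, r2],
    [r2, 1, 0, 1],
    [1, r2, 1, 0],
]
print(brute_force_tsp(square))  # -> 4
```

Each extra city multiplies the number of tours by roughly n, so going from 10 cities to 20 multiplies the work by a factor of about 10^10; this is the sense in which hardware gains near O(2^n) would be required to keep pace.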

Comment author: olalonde 05 July 2012 11:38:49AM *  1 point

I think 1 is the most likely scenario (although I don't think FOOM itself is a very likely scenario). Some more mind-blowingly hard problems are available here for those who are still skeptical: http://en.wikipedia.org/wiki/Transcomputational_problem
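As a back-of-the-envelope illustration of the linked page (my addition, under the assumption that it uses the standard definition): a problem is called transcomputational if solving it requires processing more than about 10^93 bits, a figure derived from Bremermann's limit. Exhaustively examining the 2^n states of an n-bit system crosses that line at a strikingly small n:

```python
import math

# Bremermann-style bound used in the transcomputational-problem definition
# (assumption: the standard 10**93-bit figure).
LIMIT = 10 ** 93

# Smallest n such that enumerating the 2**n states of an n-bit system
# exceeds the limit.
n = math.ceil(math.log2(LIMIT))
print(n)  # -> 309
```

So brute-forcing the state space of a system with only a few hundred binary components is already transcomputational, which is the sense in which these problems are "mind-blowingly hard" regardless of hardware.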