V_V comments on Why AI may not foom - Less Wrong

Post author: John_Maxwell_IV, 24 March 2013 08:11AM




Comment author: V_V 25 March 2013 05:22:08PM 1 point

The first AGI is likely to be using a lot of relatively new and suboptimal algorithms almost by definition of "first".

Why?

Comment author: Manfred 25 March 2013 06:41:40PM 2 points

Because optimality isn't actually required, and humans are bad at perfection.

Comment author: V_V 25 March 2013 07:44:29PM 2 points

Yes, but imperfect doesn't imply that there is much room for improvement.

Comment author: roystgnr 25 March 2013 08:10:20PM 1 point

I was handwaving a bit there, huh? "Some relatively new algorithm(s)" would have been true by definition; everything else needs a bit more justification:

"A lot of relatively new": whatever makes the difference between problem-specific AI and general unknown-problem-handling AGI is going to be new. The harder these subproblems are (and I'd say they're likely to be hard), the more difficult new algorithms are going to be required.

"suboptimal": just by induction, what percentage of the time do we immediately hit on an optimal algorithm to solve a complicated problem, and do we expect the problems in AGI to be harder or easier than most of this reference class? Even superficially simple problems with exact solutions like sorting have a hundred algorithms whose optimality varies depending on the exact application and hardware. Hard problems with approximate solutions like uncertainty quantification are even worse. The people I know doing state-of-the-art work with Bayesian inverse problems are still mostly using (accelerated variants of) Monte Carlo, despite general agreement with that old quote about how Monte Carlo is the way you solve problems when you don't yet know the right way to solve them.
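That last point can be illustrated with a minimal sketch: plain Monte Carlo (here, self-normalized importance sampling from the prior) applied to a toy Bayesian parameter-inference problem. Every name and number below is hypothetical, chosen only to show the "crude but general" character of Monte Carlo, not any state-of-the-art method.

```python
import math
import random

# Toy Bayesian inference via plain Monte Carlo: infer an unknown
# scalar parameter theta from noisy observations. All values here
# are illustrative assumptions, not from any real application.

random.seed(0)

# "True" parameter and synthetic noisy data (noise sigma assumed known).
true_theta, sigma = 1.5, 0.5
data = [true_theta + random.gauss(0, sigma) for _ in range(20)]

def log_likelihood(theta):
    # Gaussian log-likelihood, dropping constants that cancel in the weights.
    return sum(-0.5 * ((x - theta) / sigma) ** 2 for x in data)

# Prior: theta ~ N(0, 2). Draw samples from the prior and weight them
# by likelihood -- about the crudest Monte Carlo scheme that works.
samples = [random.gauss(0, 2) for _ in range(50_000)]
logw = [log_likelihood(t) for t in samples]
m = max(logw)
w = [math.exp(lw - m) for lw in logw]  # subtract max before exp to avoid underflow
posterior_mean = sum(wi * ti for wi, ti in zip(w, samples)) / sum(w)
print(posterior_mean)
```

Nothing here is clever: no problem structure is exploited, and the scheme wastes most of its samples. That generality is exactly why Monte Carlo is the method of choice when the "right" way to solve a problem isn't yet known.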