
Squark comments on Computation complexity of AGI design - Less Wrong Discussion

Post author: Squark 02 February 2015 08:05PM




Comment author: Squark 05 February 2015 07:34:49AM

I'm not sure what kinds of pattern matching you have in mind. Possibly you mean the sort of "pattern matching" typical of narrow AI, e.g. a neural network firing in response to a certain "pattern". Usually this is just clustering: dividing spaces with hypersurfaces constructed from some (very restricted) set of mathematical primitives. This sort of "pattern matching" is probably very important to the low-level working of the brain and the operation of unconscious systems such as visual processing; however, it is not what I mean by "pattern matching" in the current context. I am referring to patterns expressible in language or logic. For example, if you see that all numbers in a sequence are prime, that is a pattern. If you see that all shapes in a collection are convex, that is a pattern. Patterns of this sort are of fundamental importance in conscious thought and IMO can only be modeled mathematically by constructions akin to Solomonoff induction and Kolmogorov complexity.
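[To make the distinction concrete, here is a minimal sketch (not from the original comment; the function names are invented for illustration) of checking the logical pattern "all numbers in the sequence are prime". Note that Kolmogorov complexity itself is uncomputable, so nothing here computes it; a sequence fitting a short description like "the first six primes" is simply more compressible, in that view, than an arbitrary one.]

```python
def is_prime(n):
    """Trial-division primality test; enough for a toy example."""
    if n < 2:
        return False
    return all(n % d for d in range(2, int(n ** 0.5) + 1))

def matches_all_prime_pattern(seq):
    """True iff the logical pattern 'all elements are prime' holds."""
    return all(is_prime(x) for x in seq)

patterned = [2, 3, 5, 7, 11, 13]   # fits a short description: "first 6 primes"
random_ish = [4, 9, 10, 15, 22, 27]

print(matches_all_prime_pattern(patterned))   # True  -> pattern found
print(matches_all_prime_pattern(random_ish))  # False -> no such pattern
```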

There is a large multiplicative factor, but it's not what you're describing.

On each step, AIXItl loops over all programs up to a given length and selects the one with the best performance in the current scenario. This is exponential in the length of the programs.
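[A schematic sketch of the counting argument, not Hutter's actual AIXItl pseudocode: the agent considers every binary program of length at most l, so the selection loop runs 2^(l+1) - 2 times before any single candidate is even executed. The `score` function here is a made-up stand-in for evaluating a program's performance in the current scenario.]

```python
from itertools import product

def enumerate_programs(l):
    """Yield every bitstring of length 1..l (each a candidate program)."""
    for length in range(1, l + 1):
        for bits in product("01", repeat=length):
            yield "".join(bits)

def best_program(l, score):
    """Select the candidate that scores highest in the current scenario."""
    return max(enumerate_programs(l), key=score)

l = 10
print(sum(1 for _ in enumerate_programs(l)))  # 2046 == 2**(l + 1) - 2

# Toy scoring function: prefer programs with more 1-bits.
print(best_program(5, score=lambda p: p.count("1")))  # "11111"
```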

Comment author: passive_fist 05 February 2015 07:51:58PM

Regarding the first point, on pattern matching: I suggest you look at statistical inference techniques like Gaussian mixture models (GMM), Dirichlet process GMMs (DPGMM), latent Dirichlet allocation (LDA), latent semantic analysis (LSA), or Boltzmann machines and restricted Boltzmann machines (BM/RBM). None of these have anything to do with Solomonoff induction, and they are more than capable of learning 'patterns expressible by language or logic', yet they seem to correspond more closely to what the brain does than Solomonoff induction does.
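[Taking the GMM as one representative of the techniques listed, here is a minimal sketch using scikit-learn's GaussianMixture; the synthetic data and parameters are made up for illustration. The model recovers the two latent components, the kind of statistical structure-learning passive_fist is pointing at, with no Solomonoff-style program enumeration involved.]

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Two synthetic clusters standing in for two latent "patterns" in the data.
cluster_a = rng.normal(loc=0.0, scale=1.0, size=(100, 2))
cluster_b = rng.normal(loc=5.0, scale=1.0, size=(100, 2))
X = np.vstack([cluster_a, cluster_b])

gmm = GaussianMixture(n_components=2, random_state=0).fit(X)
labels = gmm.predict(X)           # which latent component generated each point
print(np.bincount(labels))        # roughly [100 100]
print(gmm.means_.round(1))        # recovered centres, near (0, 0) and (5, 5)
```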

> On each step, AIXItl loops over all programs up to a given length and selects the one with the best performance in the current scenario

Not precisely, but in any case that doesn't lead to a factor of 2^{length of a program emulating the human brain} for human-level intelligence.