Squark comments on The Brain as a Universal Learning Machine - Less Wrong

82 Post author: jacob_cannell 24 June 2015 09:45PM


Comment author: jacob_cannell 01 July 2015 08:48:30PM *  2 points [-]

But it doesn't imply the software architectures have to be similar. For example I see no reason to assume any ULM should be anything like a neural net.

Sure - any general model can simulate any other. Neural networks have strong practical advantages, though. Their basic operators are fmads (fused multiply-adds), which are a good match for modern hardware. They also allow explicit search over program space in terms of the execution graph, which is extremely powerful because it lets you a priori exclude all programs that don't halt - you can constrain the search to programs with exactly known computational requirements.
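As a concrete illustration of that last point, here is a minimal Python sketch (the layer sizes and weights are made-up assumptions, not anything from the comment): a fixed feed-forward execution graph always halts, and its fmad count is known before it ever runs on any input.

```python
import numpy as np

# Hypothetical fixed feed-forward net. Its execution graph is static, so it
# always halts and its cost is known a priori, before seeing any input.
layer_sizes = [64, 32, 10]          # input -> hidden -> output (made-up sizes)

# A dense layer mapping m units to n units costs exactly m*n fmads.
fmads = sum(m * n for m, n in zip(layer_sizes, layer_sizes[1:]))
print(fmads)                        # 64*32 + 32*10 = 2368, regardless of input

rng = np.random.default_rng(0)
weights = [rng.standard_normal((m, n)) for m, n in zip(layer_sizes, layer_sizes[1:])]

def forward(x):
    # A fixed sequence of matrix products and ReLUs -- no loop whose bound
    # depends on the data, hence no halting problem to worry about.
    for w in weights:
        x = np.maximum(x @ w, 0.0)
    return x

out = forward(rng.standard_normal(64))
print(out.shape)                    # (10,)
```

Contrast this with searching over arbitrary Turing machine programs, where you cannot in general even decide whether a candidate halts.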

Neural nets make deep factoring easy, and deep factoring is the single most important gain in any general optimization/learning system: it allows for exponential (albeit limited) speedup.

And another thing: teaching an AI values by placing it in a human environment and counting on reinforcement learning can fail spectacularly if the AI's intelligence grows much faster than a human child's.

Yes. There are pitfalls, and in general much more research to do on value learning before we get to useful AGI, let alone safe AGI.

A human brain is never going to learn to rearrange its low level circuitry to efficiently perform operations like numerical calculation.

This is arguably a misconception. The brain has a clock rate of at most 100 Hz; for general operations that involve memory, it's more like 10 Hz. Most people can do basic arithmetic in less than a second, which roughly maps to a dozen clock cycles or so, maybe less. That is actually comparable to many computers - for example, on the current Maxwell GPU architecture (NVIDIA's latest and greatest), even the simpler instructions have a latency of about 6 cycles.

Now, obviously the arithmetic ops that most humans can do in less than a second are very limited - it's like a minimal 3-bit machine. But some atypical humans can do larger-scale arithmetic at the same speed.

The point is, you need to compare everything adjusted for the roughly six-order-of-magnitude speed difference.
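For concreteness, the back-of-envelope numbers above can be written out (all figures are the rough assumptions stated in this comment, not measurements):

```python
# Cycle-count comparison sketched from the figures in the comment above.
brain_hz = 10                 # effective rate for memory-involving operations
mental_arith_s = 1.0          # roughly how long simple mental arithmetic takes
brain_cycles = brain_hz * mental_arith_s

gpu_latency_cycles = 6        # simple-instruction latency cited for Maxwell

# Adjusted for clock rate, the per-operation cycle counts are comparable:
# ~10 brain "cycles" per mental arithmetic op vs ~6 GPU cycles per instruction.
print(brain_cycles, gpu_latency_cycles)   # 10.0 6
```

The raw wall-clock gap comes almost entirely from the clock-rate difference, not from the brain needing vastly more cycles per operation.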

Comment author: Squark 02 July 2015 10:13:29AM 2 points [-]

...They allow explicit search of program space in terms of the execution graph, which is extremely powerful because it allows one to a priori exclude all programs which don't halt - you can constrain the search to focus on programs with exact known computational requirements.

Right. So Boolean circuits are a better analogy than Turing machines.

Neural nets make deep factoring easy, and deep factoring is the single most important huge gain in any general optimization/learning system: it allows for exponential (albeit limited) speedup.

I'm sorry, what is deep factoring? A reference perhaps?

There are pitfalls, and in general much more research to do on value learning before we get to useful AGI, let alone safe AGI.

I completely agree.

...This is arguably a misconception. The brain has a clock rate of at most 100 Hz; for general operations that involve memory, it's more like 10 Hz...

Good point! Nevertheless, it seems to me very dubious that the human brain can learn to do anything within the limits of its computing power. For example, why can't I learn to look at a page full of arithmetic exercises and solve all of them in parallel?

Comment author: jacob_cannell 02 July 2015 06:39:51PM 1 point [-]

Right. So Boolean circuits are a better analogy than Turing machines.

They are of course equivalent in theory, but in practice directly searching through a Boolean circuit space is much wiser than searching through a program space. Searching through analog/algebraic circuit space is even better, because you can take advantage of fmads instead of having to spend enormous circuit complexity emulating them. Neural nets are better still, because they enforce a mostly continuous/differentiable energy landscape, which helps inference/optimization.

I'm sorry, what is deep factoring? A reference perhaps?

It's the general idea that you can reuse subcomputations among models and layers. Solomonoff induction is impractical for a number of reasons, but one is this: it treats every function/model as entirely distinct. So if one high-level model has developed a good cat detector, that detector isn't shared with the other models. Deep nets (of various forms) automatically share submodel components AND subcomputations/subexpressions among those submodels. That massively speeds up the search. That is deep factoring.
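A toy Python sketch of the idea (the "cat" and "dog" heads and all sizes are hypothetical, chosen only to echo the cat-detector example above): two task heads reuse one shared trunk, so the trunk's features are computed once rather than once per model.

```python
import numpy as np

# Deep factoring, minimally: a shared trunk whose output feeds two heads.
rng = np.random.default_rng(0)
W_trunk = rng.standard_normal((64, 32))   # shared feature extractor
W_cat   = rng.standard_normal((32, 1))    # "cat detector" head (hypothetical)
W_dog   = rng.standard_normal((32, 1))    # second head reusing the same features

def shared_forward(x):
    h = np.maximum(x @ W_trunk, 0.0)      # computed ONCE...
    return h @ W_cat, h @ W_dog           # ...reused by both heads

cat_score, dog_score = shared_forward(rng.standard_normal(64))

# Cost with sharing vs two fully separate models:
shared_fmads   = 64*32 + 32 + 32          # trunk once, then two cheap heads
separate_fmads = 2 * (64*32 + 32)         # each model recomputes the trunk
print(shared_fmads, separate_fmads)       # 2112 4160
```

With more heads and deeper trunks the gap widens rapidly, which is the source of the speedup described above.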

All the successful multi-layer models use deep factoring to some degree. This paper: Sum-Product Networks explains the general idea pretty well.

Good point! Nevertheless, it seems to me very dubious that the human brain can learn to do anything within the limits of its computing power. For example, why can't I learn to look at a page full of arithmetic exercises and solve all of them in parallel?

There are a lot of reasons. First, due to nonlinear foveation your visual system can only read/parse a couple of words/symbols during each saccade - only those right in the narrow center of the visual cone, the fovea. So it takes a number of clock cycles or steps to scan the entire page, and your brain has only limited working memory to put things in.

Second, and this is the bigger problem: even if you already know how to solve a math problem, just parsing many math problems requires a number of steps, and then there is actually solving them - even if you know the ideal algorithm that requires the minimal number of steps, that minimal number of steps can still be quite large.

Many interesting problems still require a number of serial steps to solve, even with an infinite parallel machine. Sorting is one simple example.
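A small sketch of that serial-depth point, using a tree reduction (summing a list) as the stand-in example: even with unlimited parallel hardware, level k+1 of the tree cannot start until level k has finished, giving an unavoidable chain of ceil(log2 n) dependent steps.

```python
import math

def parallel_reduction_depth(n):
    # Minimum number of dependent steps for a pairwise tree reduction over
    # n items, even with infinitely many parallel processors.
    return math.ceil(math.log2(n)) if n > 1 else 0

def tree_sum(xs):
    steps = 0
    while len(xs) > 1:
        # One pass = one parallel step: all pairs can be summed at once,
        # but the next pass depends on this pass's results.
        xs = [sum(xs[i:i + 2]) for i in range(0, len(xs), 2)]
        steps += 1
    return xs[0], steps

total, steps = tree_sum(list(range(1024)))
print(total, steps, parallel_reduction_depth(1024))   # 523776 10 10
```

Sorting networks show the same phenomenon: the comparisons within a layer run in parallel, but the layers themselves form a serial chain whose depth grows with n.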

Comment author: Squark 08 July 2015 03:15:33PM *  0 points [-]

...Neural nets are even better than that, because they enforce a mostly continuous/differentiable energy landscape which helps inference/optimization.

I wonder whether this is a general property, or whether the success of continuous methods is limited to problems with natural continuous models, like vision.

Deep nets (of various forms) automatically share submodel components AND subcomputations/subexpressions amongst those submodels.

Yes, this is probably important.

First, due to nonlinear foveation your visual system can only read/parse a couple of words/symbols during each saccade - only those right in the narrow center of the visual cone, the fovea. So it takes a number of clock cycles or steps to scan the entire page, and your brain only has limited working memory to put stuff in.

Scanning the page is clearly not the bottleneck: I can read the page much faster than I can solve the exercises. "Limited working memory" sounds like a claim that higher cognition has much less computing power than low-level tasks. Clearly visual processing requires much more "working memory" than solving a couple dozen arithmetic exercises. But if we accept this constraint, does the brain still qualify as a ULM? It seems to me that if there is a deficiency in the brain's architecture that prevents higher cognition from enjoying the brain's full power, fixing that deficiency definitely counts as an "architectural innovation".