V_V comments on The Brain as a Universal Learning Machine - Less Wrong

82 Post author: jacob_cannell 24 June 2015 09:45PM




Comment author: jacob_cannell 01 July 2015 08:48:30PM *  2 points [-]

But it doesn't imply the software architectures have to be similar. For example I see no reason to assume any ULM should be anything like a neural net.

Sure - any general model can simulate any other. But neural networks have strong practical advantages. Their operator basis is built on fmads (fused multiply-adds), which map well onto modern hardware. They also allow explicit search over program space in terms of the execution graph, which is extremely powerful because it lets you exclude, a priori, all programs that don't halt - you can constrain the search to programs with exactly known computational requirements.
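To make the point concrete, here's a minimal sketch (illustrative layer sizes, not anything from the post): a feed-forward net is a fixed execution graph, so its exact cost is computable from the shapes alone, before it ever runs - and it trivially always halts.

```python
import numpy as np

def mlp_flops(layer_sizes):
    """FLOPs for one forward pass: each weight contributes one
    fused multiply-add, counted here as 2 FLOPs."""
    return sum(2 * m * n for m, n in zip(layer_sizes, layer_sizes[1:]))

def forward(x, weights):
    # fmad-heavy matmuls plus ReLU; depth is fixed, so this cannot loop forever
    for W in weights:
        x = np.maximum(W @ x, 0.0)
    return x

sizes = [4, 8, 8, 2]
weights = [np.random.randn(n, m) for m, n in zip(sizes, sizes[1:])]
print(mlp_flops(sizes))   # cost known a priori from the graph: 224 FLOPs
print(forward(np.ones(4), weights).shape)   # (2,)
```

Contrast this with searching over arbitrary Turing machines, where you can't in general even decide whether a candidate program halts, let alone bound its cost in advance.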

Neural nets make deep factoring easy, and deep factoring is the single most important gain in any general optimization/learning system: it allows for an exponential (albeit limited) speedup.
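A toy illustration of where that exponential factor comes from (a classic example, not anything specific to the post): sharing intermediate results turns an exponential computation *tree* into a linear *DAG*. Fibonacci is the standard case, since every subproblem recurs in many branches.

```python
def fib_tree(n, counter):
    """Naive tree recursion: every shared subproblem is recomputed."""
    counter[0] += 1
    if n < 2:
        return n
    return fib_tree(n - 1, counter) + fib_tree(n - 2, counter)

def fib_dag(n):
    """Factored version: each subproblem computed exactly once."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a   # n + 1 node evaluations instead of exponentially many

counter = [0]
assert fib_tree(20, counter) == fib_dag(20) == 6765
print(counter[0])   # 21891 calls for the tree, vs 21 shared nodes for the DAG
```

A deep net's layered features play the same role: lower-level features are computed once and reused by everything above them, instead of being rederived per output.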

And another thing: teaching an AI values by placing it in a human environment and counting on reinforcement learning can fail spectacularly if the AI's intelligence grows much faster than a human child's.

Yes. There are pitfalls, and in general much more research to do on value learning before we get to useful AGI, let alone safe AGI.

A human brain is never going to learn to rearrange its low level circuitry to efficiently perform operations like numerical calculation.

This is arguably a misconception. The brain has a clock rate of at most ~100 Hz; for general operations that involve memory, it's more like 10 Hz. Most people can do basic arithmetic in less than a second, which maps to roughly a dozen clock cycles, maybe fewer. That is actually comparable to many computers - for example, on the current Maxwell GPU architecture (Nvidia's latest and greatest), even the simpler instructions have a latency of about 6 cycles.

Now, obviously the arithmetic ops that most humans can do in under a second are very limited - it's like a minimal 3-bit machine. But some atypical humans can do larger-scale arithmetic at the same speed.

The point is, you need to adjust every comparison for the roughly six orders of magnitude difference in clock speed.
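Here's the back-of-envelope arithmetic, using the comment's own rough figures (~100 Hz brain clock, ~a dozen cycles per basic arithmetic op, ~6-cycle GPU instruction latency, ~1.1 GHz as a ballpark Maxwell-era clock) - these are assumptions for illustration, not measurements:

```python
brain_hz, brain_cycles = 100, 12      # ~100 Hz "clock", ~a dozen cycles per op
gpu_hz, gpu_cycles = 1.1e9, 6         # ~1.1 GHz clock, ~6-cycle latency

# Measured in cycles, the two are in the same ballpark:
print(brain_cycles / gpu_cycles)      # 2.0

# Measured in wall-clock time, the gap is almost entirely raw clock speed:
speedup = (brain_cycles / brain_hz) / (gpu_cycles / gpu_hz)
print(f"{speedup:.1e}")
```

The cycle-count comparison is the one that says something about the quality of the learned circuit; the wall-clock ratio just restates the hardware speed gap.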

Comment author: V_V 02 July 2015 01:08:13PM *  0 points [-]

This is arguably a misconception. The brain has a clock rate of at most ~100 Hz; for general operations that involve memory, it's more like 10 Hz.

Mechanical calculators were slower than that, yet they were far better at numeric computation than most humans, which made them incredibly useful.

Now, obviously the arithmetic ops that most humans can do in under a second are very limited - it's like a minimal 3-bit machine. But some atypical humans can do larger-scale arithmetic at the same speed.

Indeed, such people are very rare. The vast majority of people, even after decades of working in accounting, can't learn to do numeric computation as fast and accurately as a mechanical calculator.

Comment author: jacob_cannell 02 July 2015 06:24:26PM 0 points [-]

The vast majority of people, even after decades of working in accounting, can't learn to do numeric computation as fast and accurately as a mechanical calculator.

The problems aren't even remotely comparable. A human is solving a much more complex problem: the inputs arrive as visual or auditory signals which must first be recognized and parsed into symbolic numbers. The actual computation step is trivial, probably taking only a handful of cycles - or even a single one.

I admit that I somewhat let you walk into this trap by not mentioning it earlier... this example shows that the brain can learn near-optimal (in terms of circuit depth, or cycles) solutions to these simple arithmetic problems. The main limitation is that the brain's hardware is strongly suited to approximate inference problems rather than exact solutions, so any exact operators require memoization. This is actually a good thing, and any practical AGI will need a similar prior.
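A loose analogy for the memoization point (illustrative only, not a model of neural hardware): a system built for slow, approximate derivation can still serve exact arithmetic quickly by caching each exact answer once derived - much like memorized times tables.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def times(a, b):
    # stands in for a slow first-time derivation of an exact fact
    return a * b

print(times(7, 8))               # 56 - derived once, then cached
times(7, 8)                      # answered from the memo table
print(times.cache_info().hits)   # 1
```

The first call pays the full derivation cost; every repeat is a constant-time table lookup, which is essentially what drilling arithmetic facts buys a human.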