eli_sennesh comments on MIRI's Approach - Less Wrong

Post author: So8res 30 July 2015 08:03PM (34 points)




Comment author: [deleted] 30 July 2015 11:53:10PM 3 points

Human intelligence - including that of a Turing or an Einstein - requires only about 10 watts of power and, more surprisingly, only around 10^14 switches/second or less, which is basically miraculous. A modern GPU uses more than 10^18 switches/second. You'd have to go back to a Pentium or something to get down to 10^14 switches per second. Of course, the difference is that switch events in an ANN are much more powerful because they are more like memory ops, but still.

It's not that amazing when you understand PAC-learning or Markov processes well. A natively probabilistic (analogously: "natively neuromorphic") computer can actually afford to sacrifice precision "cheaply", in the sense that sizeable sacrifices of hardware precision actually entail fairly small injections of entropy into the distribution being modelled. Since what costs all that energy in modern computers is precision, that is, exactitude, a machine that simply expects to get things a little wrong all the time can still actually perform well, provided it is performing a fundamentally statistical task in the first place -- which a mind is!
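The claim that precision can be sacrificed "cheaply" can be made concrete with a toy sketch (my illustration, with made-up numbers - not a model of any real hardware): quantize a categorical distribution's probabilities onto a coarse fixed-point grid and measure how much KL divergence (the "injected entropy") the rounding actually costs.

```python
import math
import random

random.seed(0)

# A "true" categorical distribution over 256 outcomes (arbitrary example).
logits = [random.gauss(0, 1) for _ in range(256)]
z = sum(math.exp(l) for l in logits)
p = [math.exp(l) / z for l in logits]

def quantize(dist, bits):
    # Round each probability onto a fixed-point grid with `bits` bits,
    # then renormalize - a crude stand-in for low-precision hardware.
    scale = 2 ** bits
    q = [max(round(x * scale), 1) / scale for x in dist]
    s = sum(q)
    return [x / s for x in q]

def kl(p, q):
    # KL divergence in nats: the entropy "injected" by quantization.
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

for bits in (16, 12, 8):
    print(bits, kl(p, quantize(p, bits)))
```

Halving the bit width multiplies the hardware cost savings, but the KL penalty stays small until the grid becomes comparable to the probabilities themselves - which is the sense in which exactitude, not inference, is what you pay for.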

Comment author: jacob_cannell 31 July 2015 02:01:19AM * 1 point

Eli, this doesn't make sense - the fact that digital logic switches are higher-precision and more powerful, and thus have a higher minimum energy cost, makes the brain/mind more impressive, not less.

The energy efficiency per op in the brain is rather poor in one sense - perhaps 10^5 times larger than the minimum imposed by physics for a low-SNR analog op - but essentially all of this cost is wire energy.

The miraculous thing is how much intelligence the brain/mind achieves for such a tiny amount of computation in terms of low-level equivalent bit ops/second. It suggests that brain-like ANNs will absolutely dominate the long-term future of AI.

Comment author: [deleted] 31 July 2015 04:00:58AM 1 point

Eli, this doesn't make sense - the fact that digital logic switches are higher-precision and more powerful, and thus have a higher minimum energy cost, makes the brain/mind more impressive, not less.

Nuh-uh :-p. The issue is that the brain's calculations are probabilistic. When doing probabilistic calculations, you can either use very, very precise representations of computable real numbers to represent the probabilities, or you can use various lower-precision but natively stochastic representations, whose distribution over computation outcomes is the distribution being inferred.
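A toy version of such a natively stochastic representation is the unipolar bitstream from classic stochastic computing (my illustration, not a claim about what cortex actually does): a probability is encoded as the mean of a noisy bit sequence, every individual "switch" is a single low-precision bit, and multiplying two probabilities needs nothing more than an AND per bit-pair.

```python
import random

random.seed(1)

def bitstream(p, n):
    # Encode probability p as a random bitstream whose mean is p.
    return [random.random() < p for _ in range(n)]

def estimate(bits):
    # Decode: the fraction of 1s recovers the encoded probability.
    return sum(bits) / len(bits)

# Multiplying probabilities = AND-ing independent unipolar bitstreams.
# Each bit is maximally imprecise, yet the ensemble converges on the
# exact product a * b.
n = 100_000
a, b = 0.8, 0.6
prod = [x and y for x, y in zip(bitstream(a, n), bitstream(b, n))]
print(estimate(prod))  # ~0.48
```

The distribution over computation outcomes just *is* the quantity being computed - no high-precision real-number representation ever appears.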

Hence the brain is, on the one hand, very impressive for the inferential power it extracts from so little energy and mass, but on the other hand, "not that amazing" in the sense that it, too, begins to add up to normality once you learn a little about how it works.

Comment author: jacob_cannell 31 July 2015 04:10:49AM 1 point

When doing probabilistic calculations, you can either use very, very precise representations of computable real numbers to represent the probabilities, or you can use various lower-precision but natively stochastic representations, whose distribution over computation outcomes is the distribution being inferred.

Of course - and using, say, a flop to implement a low-precision synaptic op is inefficient by six orders of magnitude or so - but this just strengthens my point. Neuromorphic, brain-like AGI thus has huge potential performance improvements to look forward to, even without Moore's Law.

Comment author: [deleted] 31 July 2015 04:17:23AM 1 point

Neuromorphic brain-like AGI thus has huge potential performance improvement to look forward to, even without Moore's Law.

Yes, if you could but dissolve your concept of "brain-like"/"neuromorphic" into actual principles about what calculations different neural nets embody.