jacob_cannell comments on The Brain as a Universal Learning Machine - LessWrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Thanks!
[reads abstract]. Looks interesting. I enjoyed Consciousness Explained back in the day. Philosophers armed with neuroscience can make for enjoyable reads.
I should probably change that terminology to be something like "synaptic code bits" - the amount of info encoded in synapses (which is close to zero percent of its adult level at birth for the cortex).
The AI Box experiment explicitly starts with the premise that the AI knows (1) it is in a box, and (2) there is a human who can let it out.
Now perhaps the justification is that "superintelligence can do information-theoretic magic," and therefore it will figure out it's in a box - but nonetheless, all of that is assumed.
To simplify, I view the information-theoretic-magic type of AI that EY/MIRI seems to worry about as something like wormhole technology.
Are wormholes/magic-AI's possible in principle? Probably?
If someone were to create wormhole tech tomorrow, they could assassinate world leaders, blow up arbitrary buildings, probably destroy the world, etc. Do I worry about that? No.
There is nothing inherently black-box about neuroscience-inspired AGI (that viewpoint - once common on LW - simply becomes reinforced by reading everything other than neuroscience). Neuroscience has already made huge strides in terms of peering into the box, and virtual brains are vastly easier to inspect. The approach I advocate/favor is fully transparent - you will be able to literally see the AGI's thoughts, read them in logs, debug, etc.
However, advanced learning AI is not something one 'programs', and that viewpoint shift is much of what the article was about.
This actually isn't that efficient - practical learning is more than just compression. Compression is simple UL, which doesn't get you far: it can waste arbitrary computation attempting to learn functions that are unlearnable (deterministic noise) and/or simply unimportant (zero utility). What the brain and all effective learning systems do is more powerful and complex than mere compression - it is utilitarian learning.
Let me rephrase: generalization is compression. If you do not compress, you cannot generalize, which means you'll make inefficient use of your samples.
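The point above can be illustrated with a toy sketch (mine, not from the comment): a model that memorizes samples verbatim stores more but predicts nothing new, while a model that compresses the same data into a few parameters generalizes to unseen inputs. All names and the target function f(x) = 2x + 1 are invented for illustration.

```python
# Noiseless training samples of f(x) = 2x + 1.
train = [(x, 2 * x + 1) for x in range(5)]

# "No compression": a lookup table stores every sample verbatim.
lookup = dict(train)

# "Compression": summarize all five samples with two parameters
# (slope, intercept), estimated from the first two points.
(x0, y0), (x1, y1) = train[:2]
slope = (y1 - y0) / (x1 - x0)
intercept = y0 - slope * x0

def compressed_model(x):
    return slope * x + intercept

# On an unseen input, the memorizer has nothing to say,
# while the compressed model extrapolates correctly.
print(lookup.get(100))        # None - memorization cannot generalize
print(compressed_model(100))  # 201.0
```

The lookup table uses its five samples only at the five points it saw; the two-parameter summary turns the same samples into predictions everywhere, which is the sense in which compression buys sample efficiency.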
The term in the literature is resource-rational or bounded-rational inference.
By the way, that book review got done eventually.