jacob_cannell comments on The Brain as a Universal Learning Machine - LessWrong

82 Post author: jacob_cannell 24 June 2015 09:45PM




Comment author: jacob_cannell 26 June 2015 12:27:26AM * 1 point

Overall, I congratulate you on the article!

Thanks!

I'll probably follow it up with my own book/literature review on Plato's Camera very soon

[reads abstract]. Looks interesting. I enjoyed Consciousness Explained back in the day. Philosophers armed with neuroscience can make for enjoyable reads.

What do you mean here by "algorithmic information"? Kolmogorov complexity?

I should probably change that terminology to something like "synaptic code bits" - the amount of information encoded in synapses (which, for the cortex, is close to zero percent of its adult level at birth).

A key principle of a secure code sandbox is that the code you are testing should not be aware that it is in a sandbox. If you violate this principle then you have already failed. Yudkowsky's AI box thought experiment assumes the violation of the sandbox security principle a priori.

No, the AI Box Experiment just assumes that your agent can grow to be more complex and finely-optimized in its outputs/actions/choices than most adult humans despite having very little data, if any, to learn from. It more-or-less assumes that certain forms of "superintelligence" can do information-theoretic magic.

The AI Box Experiment explicitly starts with the premise that the AI knows (1) it is in a box, and (2) there is a human who can let it out.

Now perhaps the justification is that "superintelligence can do information-theoretic magic", and therefore it will figure out that it's in a box - but nonetheless, all of that is assumed.

To simplify, I view the information-theoretic-magic type of AI that EY/MIRI seem to worry about as something like wormhole technology.

Are wormholes/magic AIs possible in principle? Probably.

If someone were to create wormhole tech tomorrow, they could assassinate world leaders, blow up arbitrary buildings, probably destroy the world, etc. Do I worry about that? No.

This is wildly stupid. Sorry, I don't want to be nasty about this, but I simply don't trust a "benevolent AGI" design whose value-training is a black-box model. I want to damn well see what I am programming,

There is nothing inherently black-box about neuroscience-inspired AGI (and that viewpoint - once common on LW - simply becomes reinforced by reading everything other than neuroscience). Neuroscience has already made huge strides in peering into the box, and virtual brains are vastly easier to inspect. The approach I advocate/favor is fully transparent - you will be able to literally see the AGI's thoughts, read them in logs, debug them, etc.

However, advanced learning AI is not something one 'programs', and that viewpoint shift is much of what the article was about.

Learning is compression; compression is learning. While we can observe in the literature that the human brain uses some fairly powerful compression algorithms, we do not have strong evidence that it uses optimal compression methods. So, if someone finds a domain-general compression method that gets closer to outputting the Kolmogorov structural information ...

This actually isn't that efficient - practical learning is more than just compression. Compression is simple UL (unsupervised learning), which doesn't get you far. It can waste arbitrary computation attempting to learn functions that are unlearnable (deterministic noise), and/or functions that are flat-out unimportant (zero utility). What the brain and all effective learning systems do is more powerful and complex than just compression - it is utilitarian learning.
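[As an illustrative aside, not part of the original comment: the "deterministic noise is unlearnable" point can be seen directly with a toy sketch, using `zlib` as a crude stand-in for a pure unsupervised learner. A compressor gains nothing on patternless data, so any compute a pure compression-learner spends on it is wasted:]

```python
import random
import zlib

random.seed(0)

# Structured data: a repeating pattern, i.e. a learnable regularity.
structured = bytes(i % 16 for i in range(4096))

# "Deterministic noise": fixed but patternless bytes - there is
# nothing here for a compressor (or a compression-only learner) to find.
noise = bytes(random.randrange(256) for _ in range(4096))

def ratio(data: bytes) -> float:
    """Compressed size / original size; below 1.0 means structure was found."""
    return len(zlib.compress(data)) / len(data)

print(f"structured: {ratio(structured):.3f}")  # far below 1.0
print(f"noise:      {ratio(noise):.3f}")       # near (or slightly above) 1.0
```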

Comment author: [deleted] 26 June 2015 01:03:03AM 2 points

This actually isn't that efficient - practical learning is more than just compression.

Let me rephrase: generalization is compression. If you do not compress, you cannot generalize, which means you'll make inefficient use of your samples.
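[A toy sketch of that claim, my own illustration rather than anything from the thread, again using `zlib` as a crude model: the extra bytes needed to encode a new sample given a corpus approximate a conditional description length, so a "model" that compresses its training data well also encodes similar unseen samples cheaply:]

```python
import zlib

def C(x: str) -> int:
    """Compressed length in bytes - a rough proxy for description length."""
    return len(zlib.compress(x.encode()))

english = "the cat sat on the mat and looked at the dog " * 40
digits  = "3141592653589793238462643383279502884197169399 " * 40
query   = "the cat sat on the mat and looked at the dog"

# Extra bytes needed to encode the query given each corpus:
# a crude stand-in for conditional description length C(query | corpus).
cost_given_english = C(english + query) - C(english)
cost_given_digits  = C(digits + query) - C(digits)

# The corpus the query "generalizes from" encodes it far more cheaply.
print(cost_given_english, cost_given_digits)
```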

What the brain and all effective learning systems do is more powerful and complex than just compression - it is utilitarian learning.

The term in the literature is resource-rational or bounded-rational inference.

Comment author: [deleted] 28 July 2015 02:15:26AM 0 points

By the way, that book review got done eventually.