jacob_cannell comments on Tools versus agents - Less Wrong

24 Post author: Stuart_Armstrong 16 May 2012 01:00PM


Comment author: jacob_cannell 17 May 2012 09:20:33AM 2 points

The utility of such systems is crucially constrained by the relevant outside-world-related knowledge you feed into them.

If you feed in only some simple general math axioms, then the system is limited to only discovering results in the domain of abstract mathematics. While useful, this isn't going to change the world.

It only starts getting interesting when you seed it with some physics knowledge. AGIs in sandboxes that have real physics but zero specific Earth knowledge are still tremendously useful for solving all kinds of engineering and physics problems.

Eventually, though, if you give it enough specific knowledge about the Earth and humans in particular, it could become very dangerous.

The type of provably safe boxed AI you are thinking of has been discussed before; specifically, I proposed it here in one of my first main posts, which was also one of my lowest-scoring posts.

I still think that virtual sandboxes are the most promising route to safety and that they haven't been given enough serious consideration here on LW. At some point I'd like to revisit that discussion, now that I understand LW etiquette a little better.

Comment author: gRR 17 May 2012 12:34:31PM 1 point

Solving problems in abstract mathematics can be immensely useful even by itself, I think. Note: physics knowledge at low levels is indistinguishable from mathematics. But the main use of the system would be safely studying the behavior of a (super-)intelligence, in preparation for a true FAI.

Comment author: jacob_cannell 17 May 2012 12:44:41PM * 0 points

Solving problems in abstract mathematics can be immensely useful even by itself, I think.

Agreed. But the package of ideas entailed by AGI centers on systems that use human-level reasoning, understand natural language, and solve the set of AI-complete problems. The AI-complete problem set can be reduced to finding a compact generative model for natural-language knowledge, which really means finding a compact generative model for the universe we observe.

Note: physics knowledge at low levels is indistinguishable from mathematics.

Not quite. Abstract mathematics is too general. Useful physics knowledge is the narrow subset of mathematics that compactly describes the particular universe we observe. This specificity is both crucial and potentially dangerous.

But the main use of the system would be - safely studying the behavior of a (super-)intelligence, in preparation for a true FAI.

A super-intelligence (super-intelligent relative to us) will necessarily be AI-complete, and thus it must know of our universe. Any system that hopes to understand such a super-intelligence must likewise know of our universe, simply because "super-intelligent" really means "having super-optimization power over this universe".

Comment author: gRR 17 May 2012 12:53:53PM 1 point

By (super-)intelligence I mean EY's definition: a powerful general-purpose optimization process. It does not need to actually know about natural language or our universe to be AI-complete; the potential to learn them is sufficient. Abstract mathematics is arbitrarily complex, so a sufficiently powerful optimization process in that domain would have to be general enough for everything.

Comment author: jacob_cannell 17 May 2012 01:52:26PM 0 points

In theory, we could all be living inside an infinite Turing simulation right now. In practice, any super-intelligence in our universe will need to know of our universe to be super-relevant to our universe.