jacob_cannell comments on Anthropomorphic AI and Sandboxed Virtual Universes - Less Wrong

-3 Post author: jacob_cannell 03 September 2010 07:02PM




Comment author: jacob_cannell 04 September 2010 10:47:05PM, 1 point

I had a longer reply, but unfortunately my computer was suddenly attacked by some weird virus (yes, really) and I had to reboot.

Your line of thought probes some assumptions of mine that would require lengthier expositions to support, so I'll just summarize here (and may link to something else relevant when I dig it up).

You have only compared us to some of our first attempts to create new beings, within an infinite series of possibilities.

The set of programs for a particular problem is infinite, but this is irrelevant. There are an infinite number of programs for sorting a list of numbers. Almost all of them are bad for various reasons, and we are left with just a couple of provably optimal algorithms (serial and parallel).
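To make the sorting example concrete: comparison-based sorting has a proven Omega(n log n) lower bound, and only a handful of algorithms (merge sort, heapsort) attain it. A minimal merge sort sketch, written here purely as an illustration:

```python
def merge_sort(xs):
    """One of the comparison sorts that attains the provable
    Theta(n log n) lower bound for comparison-based sorting."""
    if len(xs) <= 1:
        return xs
    mid = len(xs) // 2
    left, right = merge_sort(xs[:mid]), merge_sort(xs[mid:])
    # Merge the two sorted halves.
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]

print(merge_sort([5, 2, 9, 1, 5, 6]))  # [1, 2, 5, 5, 6, 9]
```

The infinitely many other sorting programs are either asymptotically worse or needlessly complicated variants of a few optimal cores, which is the sense in which the infinite space collapses to a small set.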

There appears to be a single program underlying our universe - physics. We have reasonable approximations to it at different levels of scale. Our simulation techniques are moving towards a set of best approximations to our physics.

Intelligence itself is a form of simulation of this same physics. Our brain appears to use (in the cortex) a universal data-driven approximation of this universal physics.

So the space of intelligent algorithms is infinite, but there is just a small set of universal intelligent algorithms derived from our physics which are important.

And for that matter... Unless we understand exactly how a human brain works, how can we improve its efficiency? Reverse engineering a system is often harder than making one from scratch.

Not really.

Imagine taking a current CPU back in time ten years. Engineers then couldn't have built it immediately, but it would have accelerated their progress significantly.

The brain in some sense is like an AGI computer from the future. We can't build it yet, but we can use it to accelerate our technological evolution towards AGI.

Also .. brain != mind

Comment author: timtyler 04 September 2010 11:38:02PM, 0 points

Unless we understand exactly how a human brain works, how can we improve its efficiency? Reverse engineering a system is often harder than making one from scratch.

Not really.

Yet aeroplanes are not much like birds, hydraulics are not much like muscles, loudspeakers are not much like the human throat, microphones are not much like the human ear - and so on.

Convergent evolution wins sometimes - for example, eyes - but we can see that this probably won't happen with the brain - since its "design" is so obviously xxxxxd up.

Comment author: jacob_cannell 05 September 2010 01:38:40AM, -1 points

Yet aeroplanes are not much like birds,

Airplanes exploit one single simple principle (out of the vast set of principles birds use): aerodynamic lift.
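That single principle really is compact; the standard lift equation captures it in one line:

```latex
L = \tfrac{1}{2}\,\rho\, v^{2}\, S\, C_L
```

where $\rho$ is air density, $v$ airspeed, $S$ wing area, and $C_L$ the lift coefficient. Everything else about flight engineering is detail layered on top of this one relation, which is the contrast being drawn with intelligence below.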

If you want a comparison like that, then we already have it. Computers exploit one single simple principle from the brain - abstract computation (humans were the original computers, and are Turing-complete) - and magnify it greatly.

But there is much more to intelligence than just that one simple principle.

So building an AGI is much closer to building an entire robotic bird.

And that really is the right level of analogy. Look at the complexity of building a complete android: analyze just the robotic side of things, and there is no single magic principle you can exploit in some simple dumb system and amplify to the Nth degree. Building a human- or animal-level robotic body is immensely complex.

There is not one simple principle, but millions.

And the brain is the most complex part of building a robot.

Comment author: timtyler 05 September 2010 01:45:09AM, 1 point

But there is much more to intelligence than just that one simple principle.

Reference? For counter-reference, see:

http://www.hutter1.net/ai/uaibook.htm#oneline

That looks a lot like the intellectual equivalent of "lift" to me.

An implementation may not be that simple - but then aeroplanes are not simple either.
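For reference, the "one line" on the linked page is Hutter's AIXI action-selection formula, which (roughly, as usually stated) reads:

```latex
a_k := \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m}
\left[ r_k + \cdots + r_m \right]
\sum_{q \,:\, U(q, a_1 \ldots a_m) = o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}
```

where $U$ is a universal Turing machine, $q$ ranges over environment programs, $\ell(q)$ is program length, and $a_i$, $o_i$, $r_i$ are actions, observations, and rewards. One compact expression defines the whole agent, though evaluating it is incomputable.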

The point was not that engineered artefacts are simple, but that they are only rarely the result of reverse engineering biological entities.

Comment author: jacob_cannell 05 September 2010 10:14:06PM, 0 points

I'll take your point: I should have said "there is much more to practical intelligence" than just one simple principle - because yes, at the limit I agree that universal intelligence has a compact description.

AIXI is akin to finding a universal TOE - a simple theory of physics - but that doesn't mean it is computationally tractable. Creating a practical, efficient approximation involves a large series of principles.