
timtyler comments on Stupid Questions Open Thread - Less Wrong Discussion

42 Post author: Costanza 29 December 2011 11:23PM


Comments (265)


Comment author: timtyler 30 December 2011 01:26:07PM 8 points

Intelligence is optimization power divided by the resources used.

I checked with: A Collection of Definitions of Intelligence.

Out of 71 definitions, only two mentioned resources:

“Intelligence is the ability to use optimally limited resources – including time – to achieve goals.” R. Kurzweil

“Intelligence is the ability for an information processing system to adapt to its environment with insufficient knowledge and resources.” P. Wang

The paper suggests that the nearest thing to a consensus is that intelligence is about problem-solving ability in a wide range of environments.

Yes, Yudkowsky apparently says otherwise - but so what?

Comment author: endoself 30 December 2011 07:58:37PM 2 points

I don't think he really said this. The exact quote is

If you want to measure the intelligence of a system, I would suggest measuring its optimization power as before, but then dividing by the resources used. Or you might measure the degree of prior cognitive optimization required to achieve the same result using equal or fewer resources. Intelligence, in other words, is efficient optimization.

This seems like just a list of different measurements trying to convey the idea of efficiency.

When we want something to be efficient, we really just mean that we have other things to use our resources for. The right way to measure this is in terms of the marginal utility of the other uses of resources. Efficiency is therefore important, but trying to calculate it by simple division is an oversimplification.

Comment author: orthonormal 30 December 2011 06:26:48PM 1 point

What about a giant look-up table, then?

Comment author: Solvent 31 December 2011 02:28:19PM 2 points

That requires lots of computing resources. (I think that's the answer.)

Comment author: timtyler 30 December 2011 08:12:51PM 0 points

What about a giant look-up table, then?

That would surely be very bad at solving problems in a wide range of environments.

Comment author: orthonormal 30 December 2011 09:18:51PM 0 points

For any agent, I can create a GLUT that solves problems just as well (provided the vast computing resources necessary to store it), by just duplicating that agent's actions in all of its possible states.
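The construction described above can be sketched in a few lines. This is a hedged toy illustration - the agent, its states, and all names here are hypothetical, chosen only to show how tabulating a policy reproduces its behavior:

```python
# Toy sketch: build a giant look-up table (GLUT) from an agent by
# recording its action for every possible state. All names here are
# hypothetical illustrations, not anyone's actual proposal.

def agent_policy(state):
    # A trivial "agent": step toward the origin on a 1-D line.
    return -1 if state > 0 else (1 if state < 0 else 0)

# Enumerate every state the agent could be in and record its action.
states = range(-5, 6)
glut = {s: agent_policy(s) for s in states}

# The table reproduces the agent's behavior exactly...
assert all(glut[s] == agent_policy(s) for s in states)

# ...but the table's size grows with the number of possible states,
# while the policy itself stays a few lines of code.
print(len(glut))  # 11 entries for 11 states
```

The point of the construction is visible even at this scale: the table matches the agent move for move, but pays for it in storage that scales with the state space rather than with the complexity of the policy.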

Comment author: timtyler 30 December 2011 10:20:06PM 0 points

Surely its performance would be appalling on most problems - vastly inferior to a genuinely intelligent agent implemented with the same hardware technology - and so it would fail to solve many of the problems with time constraints. The idea of a GLUT seems highly impractical. However, if you really think that it would be a good way to construct an intelligent machine, go right ahead.

Comment author: orthonormal 30 December 2011 11:27:23PM 4 points

vastly inferior to a genuinely intelligent agent implemented with the same hardware technology

I agree. That's the point of the original comment: that "efficient use of resources" is as much a factor in our concept of intelligence as is "cross-domain problem-solving ability". A GLUT could have the latter, but not the former, attribute.

Comment author: timtyler 31 December 2011 01:53:42PM 1 point

"Cross-domain problem-solving ability" implicitly includes the idea that some types of problem may involve resource constraints. The issue is whether that point needs further explicit emphasis - in an informal definition of intelligence.

Comment author: mathemajician 30 December 2011 07:34:58PM 0 points

Sure, if you had an infinitely big and fast computer. Of course, even then you still wouldn't know what to put in the table. But if we're in infinite theory land, then why not just run AIXI on your infinite computer?

Back in reality, the lookup table approach isn't going to get anywhere. For example, if you use a video camera as the input stream, then after just one frame of data your table would already need something like 256^1000000 entries. The observable universe only has about 10^80 particles.
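The scale of those figures can be checked directly. A minimal sketch, using the roughly 1,000,000-pixel, 256-values-per-pixel frame assumed in the comment above:

```python
import math

# Figures from the comment above: one camera frame with roughly
# 1,000,000 pixels, each pixel taking one of 256 values.
pixels = 1_000_000
values_per_pixel = 256

# Number of distinct possible frames = 256 ** 1_000_000. That integer
# has millions of digits, so compare orders of magnitude instead of
# computing it outright.
digits = pixels * math.log10(values_per_pixel)  # ~2.4 million digits
print(f"Distinct one-frame inputs: ~10^{digits:,.0f}")
print("Particles in the observable universe: ~10^80")
```

So even indexing the table after a single frame requires a number with about 2.4 million digits' worth of entries, against roughly 80 digits for the particle count of the observable universe.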

Comment author: orthonormal 30 December 2011 09:13:04PM 2 points

You misunderstand me. I'm pointing out that a GLUT is an example of something with (potentially) immense optimization power, but whose use of computational resources is ridiculously prodigal, and which we might hesitate to call truly intelligent. This is evidence that our concept of intelligence does in fact include some notion of efficiency, even if people don't think of this aspect without prompting.

Comment author: mathemajician 30 December 2011 09:31:06PM 0 points

Right, but the problem with this counterexample is that it isn't actually possible. A counterexample that could occur would be much more convincing.

Personally, if a GLUT could cure cancer, cure aging, prove mind-blowing mathematical results, write an award-winning romance novel, take over the world, and expand out to take over the universe... I'd be happy considering it to be extremely intelligent.

Comment author: orthonormal 30 December 2011 09:39:48PM 3 points

It's infeasible within our physics, but it's possible for (say) our world to be a simulation within a universe of vaster computing power, and to have a GLUT from that world interact with our simulation. I'd say that such a GLUT was extremely powerful, but (once I found out what it really was) I wouldn't call it intelligent, though I'd expect whatever process produced it (e.g. coded in all of the theorem-proof and problem-solution pairs) to be a different and more intelligent sort of process.

That is, a GLUT is the optimizer equivalent of a tortoise with the world on its back: it needs to be supported on something, and it would be highly unlikely to be tortoises all the way down.