
shminux comments on FAI FAQ draft: general intelligence and greater-than-human intelligence

Post author: lukeprog 23 November 2011 07:52PM


Comment author: shminux 23 November 2011 10:36:24PM (1 point)

I am not sold on your definition of intelligence:

Intelligence measures an agent’s ability to achieve goals in a wide range of environments.

That will be our ‘working definition’ for intelligence in this FAQ.

Does this mean that viruses and cockroaches are more intelligent than humans? They can certainly achieve their goals (feeding and multiplying) in a "wide range of environments", a much wider range than humans can. Well, maybe not in space.

I suspect that there should be a better definition. Wikipedia mentions abstract thought and other intangibles, but concedes that there is little agreement: "Indeed, when two dozen prominent theorists were recently asked to define intelligence, they gave two dozen, somewhat different, definitions."

The standard cop-out "I know intelligence when I see it" is not very helpful, either.

I understand the need to have a discussion of AGI in the FAI FAQ, but I am skeptical that a critically minded person would settle for the definition you have given. Something general, measurable and not confused with a bacterial infection would be a good target.
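(For context: the FAQ's wording appears to track Legg and Hutter's definition of universal intelligence, which they formalize, roughly, as a complexity-weighted sum of the agent's value across all computable environments:

    \Upsilon(\pi) = \sum_{\mu \in E} 2^{-K(\mu)} \, V_\mu^\pi

where E is the set of computable environments, K(\mu) is the Kolmogorov complexity of environment \mu, and V_\mu^\pi is the expected total reward policy \pi earns in \mu. On that reading, viruses and cockroaches score poorly: they collect reward in only a narrow band of environments, however robustly they survive.)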

Comment author: amcknight 23 November 2011 10:45:10PM (0 points)

Here's an easy fix:

Intelligence measures an agent's ability to achieve a wide range of goals in a wide range of environments.

Comment author: Vladimir_Nesov 23 November 2011 10:57:32PM (0 points)

Intelligence measures an agent's ability to achieve a wide range of goals in a wide range of environments.

One flaw in this phrasing is that an agent exists in a single world and pursues a single goal, so the measure is really about being able to solve unexpected subproblems.

Comment author: shokwave 24 November 2011 11:05:55AM (1 point)

You could consider other possible worlds and other possible goals and see if the agent could also achieve those.
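A toy sketch of that counterfactual test, with hypothetical names throughout; it assumes each environment object exposes a run(agent, goal) method returning a score in [0, 1]:

    import random

    def estimate_intelligence(agent, environments, goals, episodes=1000):
        """Monte Carlo estimate of 'ability to achieve a wide range of
        goals in a wide range of environments': average the agent's
        score over randomly sampled (environment, goal) pairs."""
        total = 0.0
        for _ in range(episodes):
            env = random.choice(environments)   # an "other possible world"
            goal = random.choice(goals)          # an "other possible goal"
            total += env.run(agent, goal)        # assumed to return a score in [0, 1]
        return total / episodes

Sampling (environment, goal) pairs, rather than fixing one of each, is exactly what distinguishes this from scoring the agent in the single world it actually inhabits.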

Comment author: amcknight 06 December 2011 07:13:05AM (0 points)

If you count a subgoal as a type of goal, then my fix still works well.

Comment author: RomeoStevens 24 November 2011 01:57:39AM (0 points)

Perhaps: given a poorly defined domain, construct a decision theory that is as close to optimal (given the goal of some future sensory inputs) as your sensory information about the domain allows.

This doesn't give us a rigorous way to quantify intelligence, but it does let us qualify it on an ordinal scale, by making statements about how close or far various decisions are from optimal. Otherwise I can't see how to fold decisions about how much time to spend defining the domain more rigorously into the general definition.
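One way to make the ordinal scale concrete, assuming a hypothetical value() estimator standing in for the decision theory's evaluation of each candidate decision:

    def rank_decisions(decisions, value):
        """Order candidate decisions by regret: how far each falls short
        of the best available decision. The result is an ordinal scale
        ('closer to or further from optimal'), not a cardinal measure."""
        best = max(value(d) for d in decisions)
        return sorted(decisions, key=lambda d: best - value(d))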