
Vladimir_Nesov comments on FAI FAQ draft: general intelligence and greater-than-human intelligence - Less Wrong Discussion

Post author: lukeprog 23 November 2011 07:52PM


Comments (11)


Comment author: Vladimir_Nesov 23 November 2011 10:57:32PM 0 points

Intelligence measures an agent's ability to achieve a wide range of goals in a wide range of environments.

One flaw in this phrasing is that an agent exists in a single world, and pursues a single goal, so it's more about being able to solve unexpected subproblems.

Comment author: shokwave 24 November 2011 11:05:55AM 1 point

You could consider other possible worlds and other possible goals, and see whether the agent could achieve those as well.
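This idea of scoring an agent over many possible worlds and goals, rather than the single world it actually inhabits, can be sketched as a toy simulation. All names below (the guessing game, the agents, the scoring function) are my own illustration, not from the thread: an "environment" is a hidden target number, a "goal" is a tolerance on the final guess, and intelligence is approximated as average success across (environment, goal) pairs.

```python
import random

# Toy illustration (all names invented for this sketch): an
# "environment" is a hidden target number, a "goal" is a tolerance
# for the agent's final guess, and "intelligence" is approximated
# as average success over many (environment, goal) pairs.

def make_environments(n, seed=0):
    rng = random.Random(seed)
    return [rng.randint(0, 100) for _ in range(n)]

def bisection_agent(lo, hi, too_low, steps=10):
    """Locate the hidden number by bisection; too_low(g) reports
    whether guess g is below the target."""
    for _ in range(steps):
        mid = (lo + hi) // 2
        if too_low(mid):
            lo = mid + 1
        else:
            hi = mid
    return lo

def lazy_agent(lo, hi, too_low, steps=10):
    """Ignores all feedback and always guesses the lower bound."""
    return lo

def score(agent, environments, tolerances):
    """Fraction of (world, goal) pairs on which the agent succeeds."""
    trials = [(t, tol) for t in environments for tol in tolerances]
    hits = sum(abs(agent(0, 100, lambda g, t=t: g < t) - t) <= tol
               for t, tol in trials)
    return hits / len(trials)

envs = make_environments(20)
print(score(bisection_agent, envs, [0, 1, 5]))  # 1.0: succeeds in every world
print(score(lazy_agent, envs, [0, 1, 5]))       # succeeds only when target ~ 0
```

The contrast is the point: both agents live in one world at a time, but averaging over many hypothetical worlds and goals separates the adaptable strategy from the rigid one.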

Comment author: amcknight 06 December 2011 07:13:05AM 0 points

If you count a subgoal as a type of goal, then my fix still works well.

Comment author: RomeoStevens 24 November 2011 01:57:39AM 0 points

Perhaps: given a poorly defined domain, construct a decision theory that is as close to optimal (given the goal of some future sensory inputs) as your sensory information about the domain allows.

This doesn't give us a rigorous way to quantify intelligence, but it does let us qualify it (on an ordinal scale) by making statements about how close to or far from optimal various decisions are. Otherwise I can't seem to fold decisions about how much time to spend trying to define the domain more rigorously into the general definition.
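The ordinal comparison described above can be sketched as ranking decisions by their regret, i.e. the gap between a decision's estimated payoff and the best available payoff. The decision names and payoff estimates below are invented for illustration; the point is only that regret yields an ordering ("closer to or farther from optimal") without requiring an absolute scale of intelligence.

```python
# Illustrative sketch (decision names and payoffs are hypothetical):
# rank candidate decisions by "regret" -- the gap between a decision's
# estimated payoff and the best available payoff. This gives an
# ordinal comparison without an absolute intelligence measure.

def regret_ranking(payoffs):
    """payoffs: dict mapping decision name -> estimated payoff.
    Returns decisions ordered from closest-to-optimal to farthest."""
    best = max(payoffs.values())
    return sorted(payoffs, key=lambda d: best - payoffs[d])

estimates = {
    "act now": 9.5,
    "gather more data": 8.0,
    "spend more time defining the domain": 3.0,
}
print(regret_ranking(estimates))
# -> ['act now', 'gather more data', 'spend more time defining the domain']
```

Note that "spend more time defining the domain" appears here as just another candidate decision, which is one way to fold domain-definition effort into the same comparison.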