Vladimir_Nesov comments on FAI FAQ draft: general intelligence and greater-than-human intelligence - Less Wrong Discussion
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Comments (11)
One flaw in this phrasing is that an agent exists in a single world and pursues a single goal, so the ability in question is more about solving unexpected subproblems.
You could consider other possible worlds and other possible goals and see if the agent could also achieve those.
If you count a subgoal as a type of goal then my fix still works well.
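The suggestion above can be sketched as an evaluation harness. This is a minimal toy (all names and the agent itself are hypothetical, not anything from the discussion): score an agent by the fraction of (possible world, possible goal) pairs it can achieve.

```python
# Hypothetical sketch: test an agent across other possible worlds and
# other possible goals, as suggested above.

def evaluate(agent, worlds, goals):
    """Return the fraction of (world, goal) pairs the agent achieves."""
    successes = sum(
        1
        for world in worlds
        for goal in goals
        if agent(world, goal)  # agent returns True if it achieves goal
    )
    return successes / (len(worlds) * len(goals))

# Toy agent: "achieves" a goal whenever the goal is present in the world.
finder = lambda world, goal: goal in world

score = evaluate(finder, worlds=[{1, 2}, {2, 3}], goals=[1, 2, 3])
# score is 4/6: the agent succeeds on 4 of the 6 (world, goal) pairs
```

An agent that scores well only for one fixed (world, goal) pair would look narrow under this measure, while one that scores well across the grid looks more general.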
Perhaps: given a poorly defined domain, construct a decision theory that is as close to optimal (relative to the goal of attaining some future sensory inputs) as your sensory information about the domain allows.
This doesn't give us a rigorous way to quantify intelligence, but it does let us qualify it on an ordinal scale, by making statements about how close to or far from optimal various decisions are. Otherwise I can't see how to fold decisions about how much time to spend trying to define the domain more rigorously into the general definition.
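The ordinal-scale idea can be illustrated with a small sketch (the function names and toy domain are my own, purely hypothetical): rank decisions by their regret, i.e. how far their achieved value falls short of the optimal achievable value. Regret gives an ordering without claiming a meaningful absolute unit of intelligence.

```python
# Hypothetical sketch: qualify decisions on an ordinal scale by their
# distance from optimal, rather than quantifying intelligence directly.

def rank_by_regret(decisions, value, optimal_value):
    """Sort decisions from closest-to-optimal to farthest away."""
    return sorted(decisions, key=lambda d: optimal_value - value(d))

# Toy domain: decisions are bids; payoff is concave with a peak at 5,
# so the optimal achievable value is 0 (attained by bidding 5).
value = lambda bid: -(bid - 5) ** 2

ranking = rank_by_regret([0, 3, 5, 8], value, optimal_value=0)
# ranking is [5, 3, 8, 0]: zero regret first, largest regret last
```

The ordering is all the scale provides; it says bid 3 is closer to optimal than bid 8, but not by "how much intelligence."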