mathemajician comments on Stupid Questions Open Thread - Less Wrong

42 Post author: Costanza 29 December 2011 11:23PM




Comment author: lukeprog 30 December 2011 03:39:55AM *  8 points

Optimization power is a process's capacity for reshaping the world according to its preferences.

Intelligence is optimization power divided by the resources used.

"Intelligence" is also sometimes used to talk about whatever is being measured by popular tests of "intelligence," like IQ tests.

Rationality refers to both epistemic and instrumental rationality: the craft of obtaining true beliefs and of achieving one's goals. Also known as systematized winning.

Comment author: mathemajician 30 December 2011 12:22:22PM 8 points

If I had a moderately powerful AI and found that I could double its optimisation power by tripling its resources, would my improved AI actually be less intelligent? What if I repeated this process a number of times? I could end up with an AI that had enough optimisation power to take over the world, and yet its intelligence would be extremely low.
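This arithmetic can be sketched in a few lines. The numbers are made up for illustration: start from an arbitrary baseline and assume each upgrade triples resources while only doubling optimisation power, then apply lukeprog's definition of intelligence as optimisation power divided by resources.

```python
# Toy illustration (made-up units): each upgrade triples resources
# but only doubles optimisation power.
power, resources = 100.0, 10.0
history = []
for step in range(10):
    history.append((power, resources, power / resources))
    power *= 2.0
    resources *= 3.0

final_power, final_resources, final_intelligence = history[-1]
# Raw optimisation power has grown by 2^9, so the final AI is far more
# capable, yet intelligence (power / resources) has shrunk at every step.
print(final_power, final_resources, final_intelligence)
```

Under this definition the upgraded AI really is "less intelligent" after every step, even though its absolute capability keeps growing.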

Comment author: benelliott 30 December 2011 12:32:13PM 0 points

We don't actually have units of 'resources' or optimization power, but I think the idea would be that any non-stupid agent should at least triple its optimization power when you triple its resources, and possibly more. As a general rule, if I have three times as much stuff as I used to have, I can at the very least do what I was already doing but three times simultaneously, and hopefully pool my resources and do something even better.
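The lower-bound argument here can be made concrete with a toy model (the power function and its value are hypothetical, not from any real measurement): an agent with k units of resources can always fall back to running k independent copies of its 1-unit strategy, so optimisation power should scale at least linearly in resources.

```python
# Hypothetical baseline: the optimisation power of an agent
# running on a single unit of resources (made-up value).
def single_unit_power():
    return 5.0

def copies_lower_bound(k):
    # Worst case for a non-stupid agent: run k independent copies
    # of the 1-unit strategy, achieving k times the baseline power.
    return k * single_unit_power()

for k in (1, 3, 9):
    intelligence = copies_lower_bound(k) / k
    # Under the copies strategy, intelligence (power / resources)
    # stays constant, so adding resources never reduces it.
    assert intelligence == single_unit_power()
```

Pooling resources instead of running independent copies could do strictly better than this lower bound, which is the "hopefully" in the comment above.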

Comment author: timtyler 30 December 2011 01:47:24PM *  3 points

We don't actually have units of 'resources' or optimization power [...]

For "optimization power", we do now have some fairly reasonable tests:

Comment author: mathemajician 30 December 2011 03:24:46PM 2 points

Machine learning and AI algorithms typically display the opposite of this, i.e. sub-linear scaling. In many cases there are hard mathematical results that show that this cannot be improved to linear, let alone super-linear.

This suggests that if a singularity were to occur, we might be faced with an intelligence implosion rather than an explosion.
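A toy model shows what sub-linear scaling does to intelligence under the power-divided-by-resources definition. The square-root form below is an assumption chosen only to illustrate sub-linearity, not a result about any particular algorithm:

```python
import math

# Assumed sub-linear scaling law (illustrative only):
# optimisation power grows like the square root of resources.
def power(resources):
    return math.sqrt(resources)

# Intelligence = power / resources = 1 / sqrt(resources),
# which shrinks even as raw optimisation power keeps growing.
ratios = [power(r) / r for r in (1, 100, 10_000)]
print(ratios)
```

Under any scaling law that grows slower than linearly, the ratio tends to zero as resources grow, which is the "implosion" in the comment above: capability rises while measured intelligence falls.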

Comment author: faul_sname 31 December 2011 12:01:23AM 0 points

If intelligence = optimization power / resources used, this might well be the case. Nonetheless, this "intelligence implosion" would still involve entities with increasing resources and thus increasing optimization power. A stupid agent with a lot of optimization power (Clippy) is still dangerous.

Comment author: mathemajician 31 December 2011 01:06:48AM 3 points

I agree that it would be dangerous.

What I'm arguing is that dividing by resource consumption is an odd way to define intelligence. For example, under this definition is a mouse more intelligent than an ant? Clearly a mouse has much more optimisation power, but it also has a vastly larger brain. So once you divide out the resource difference, maybe ants are more intelligent than mice? It's not at all clear. That this could even be a possibility runs strongly counter to the everyday meaning of intelligence, as well as definitions given by psychologists (as Tim Tyler pointed out above).