OrphanWilde comments on General intelligence test: no domains of stupidity - Less Wrong

Post author: Stuart_Armstrong 21 May 2013 04:04PM


Comment author: OrphanWilde 21 May 2013 05:29:17PM 8 points

Put a human being in an environment which is novel to them. Say, empiricism doesn't hold - the laws of this environment are such that "That which has happened before is less likely to happen again" (a reference to an old Overcoming Bias post I can't locate).

Is that human being going to behave "stupidly" in this environment? Do -we- fail the intelligence test? You acknowledge that we could - but if you're defining intelligence in such a way that nothing actually satisfies that definition, what the heck are you achieving, here?

I'm not sure your criterion is all that useful. (And I'm not even sure it's that well defined, actually.)

Comment author: magfrump 22 May 2013 05:12:23AM 4 points

People fail at novel environments as mundane as needing to find a typo in an HTML file or paying attention to fact-checks during political debates. You don't have to come up with extreme philosophical counterexamples to find domains in which it's interesting to distinguish between the behavior of different non-experts (and such that these differences feel like "intelligence").

Comment author: Stuart_Armstrong 22 May 2013 03:07:31PM 2 points

but if you're defining intelligence in such a way that nothing actually satisfies that definition, what the heck are you achieving, here?

The no-free-lunch theorems imply there's no such thing as a universal definition of general intelligence. So I think general intelligence should be a matter of degree, rather than kind.
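The no-free-lunch claim can be checked in miniature: average the cost of every fixed query order over every Boolean function on a three-point domain, and all orders come out identical. A minimal sketch (the tiny domain, the cost measure, and the restriction to non-adaptive query orders are simplifying assumptions, not the theorem's full statement):

```python
from itertools import product, permutations

DOMAIN = range(3)

def cost(order, f):
    """Number of queries until a 1 is first observed (full budget if f is all zeros)."""
    for i, x in enumerate(order, start=1):
        if f[x] == 1:
            return i
    return len(f)

# Average each fixed query order's cost over ALL 2^3 Boolean functions.
functions = list(product([0, 1], repeat=3))
averages = {order: sum(cost(order, f) for f in functions) / len(functions)
            for order in permutations(DOMAIN)}

# Every order achieves the same average cost: no free lunch.
print(sorted(set(averages.values())))  # → [1.75]
```

No query order is better than any other once you average over all possible environments, which is why "better at search" only makes sense relative to a restricted class of environments.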

I'm not sure your criterion is all that useful. (And I'm not even sure it's that well defined, actually.)

It's not well defined, yet. But I think there's a germ of a good idea there, that I'm teasing out with the help of commenters here.

Comment author: MugaSofer 29 May 2013 10:21:34AM 1 point

Say, empiricism doesn't hold - the laws of this environment are such that "That which has happened before is less likely to happen again" (a reference to an old Overcoming Bias post I can't locate).

Then we would observe this, and update on it - after all, this mysterious law is presumably immune to itself, or it would have stopped by now, right?

Comment author: OrphanWilde 29 May 2013 01:18:28PM 1 point

I'm curious to know how you expect Bayesian updates to work in a universe in which empiricism doesn't hold. (I'm not denying it's possible, I just can't figure out what information you could actually maintain about the universe.)

Comment author: Creutzer 29 May 2013 08:25:51PM *  1 point

What exactly do you mean by "empiricism does not hold"? Do you mean that there are no laws governing reality? Is that even a thinkable notion? I'm not sure. Or perhaps you mean that everything is probabilistically independent of everything else. Then no update would ever change the probability distribution of any variable except the one on whose value we update, but that is something we could notice. We just couldn't make any effective predictions on that basis - and we would know that.

Comment author: MugaSofer 30 May 2013 10:19:04AM *  0 points

If things have always been less likely to happen again after they happened in the past, then, conditioning on that, something happening is Bayesian evidence that it won't happen again.
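This kind of update can be sketched with two hypotheses about a binary process: an "inductive" world where each outcome repeats with probability 0.8, and an "anti-inductive" one where it repeats with probability 0.2 (both numbers are illustrative assumptions, not anything from the thread). Simulating the anti-inductive world, the posterior concentrates on the right hypothesis:

```python
import random

random.seed(0)

# Assumed repeat probabilities for each hypothesis (illustrative numbers).
P_SAME = {"inductive": 0.8, "anti-inductive": 0.2}
posterior = {"inductive": 0.5, "anti-inductive": 0.5}

# Simulate the anti-inductive world: each outcome repeats with probability 0.2.
last = 0
for _ in range(100):
    outcome = last if random.random() < P_SAME["anti-inductive"] else 1 - last
    repeated = (outcome == last)
    # Bayes: weight each hypothesis by how well it predicted the observation.
    for h in posterior:
        posterior[h] *= P_SAME[h] if repeated else 1 - P_SAME[h]
    total = sum(posterior.values())
    posterior = {h: p / total for h, p in posterior.items()}
    last = outcome

print(posterior["anti-inductive"])  # converges toward 1
```

The observer ends up confident the world is anti-inductive - and, having conditioned on that, treats each occurrence as evidence against recurrence, exactly as the comment suggests.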

Comment author: jmmcd 21 May 2013 06:47:40PM 1 point

"That which has happened before is less likely to happen again" (a reference to an old Overcoming Bias post I can't locate).

Good point. In fact, that is the type of environment required for the No Free Lunch theorems mentioned in the post to even be relevant. A typical interpretation in the evolutionary computing (EC) field would be that it's the type of environment where an anti-GA (a genetic algorithm which selects individuals with worse fitness) does better than a GA. There are good reasons to say that such environments can't occur for important classes of problems typically tackled by EC. In the context of this post, I wonder whether such an environment is even physically realisable.
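One way to see the GA/anti-GA point is a toy deceptive landscape (a hypothetical trap function sketched here for illustration, not the standard construction from the EC literature): fitness climbs as bits are zeroed, yet the isolated global optimum is the all-ones string, so selecting the worse individual walks toward the optimum while ordinary selection walks away from it.

```python
import random

N = 12  # bitstring length

def trap(bits):
    """Deceptive trap fitness: rises as bits are zeroed,
    but the isolated global optimum is the all-ones string."""
    ones = sum(bits)
    return N + 1 if ones == N else N - ones

def evolve(select_worse, pop_size=20, gens=500, seed=1):
    """Tournament-of-two evolution with one-bit mutation; returns True
    if the global optimum (all ones) ever appears in the population."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(N)] for _ in range(pop_size)]
    for _ in range(gens):
        new_pop = []
        for _ in range(pop_size):
            a, b = rng.sample(pop, 2)
            # The anti-GA keeps the LOWER-fitness parent; the GA the higher.
            if (trap(a) < trap(b)) == select_worse:
                parent = a
            else:
                parent = b
            child = parent[:]
            child[rng.randrange(N)] ^= 1  # flip one random bit
            new_pop.append(child)
        pop = new_pop
        if any(sum(ind) == N for ind in pop):
            return True
    return False

print(evolve(select_worse=True), evolve(select_worse=False))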

(I think a lot of people misinterpret NFL theorems.)