I'm writing this because, for a while now, I have noticed that I am confused, particularly about what people mean when they say someone is intelligent. I'm more interested in a discussion here than in actually making a formal case, so please excuse the lack of citations. I'm also trying to articulate my own confusion to myself as well as to everyone else, so this will not be as focused as it could be.

If I had to point to a starting point for this state, I'd say it was in psych class, where we talked about research presented by Eysenck and Gladwell. Eysenck is careful to define intelligence as the ability to solve abstract problems, but not necessarily the motivation to do so. In many ways, this matches Yudkowsky's definition, where he talks about intelligence as a property we can ascribe to an entity, which lets us predict that the entity will be able to complete a task, without ourselves necessarily understanding the steps toward completion.

The central theme I'm confused about is the generality of the concept: are we really saying that there is a general algorithm or class of algorithms that will solve most or all problems to within a given distance from optimum?

Let me give an example. Depending on which test you use, an autistic person can score in the clinically impaired range, yet show 'islands' of remarkable ability, even up to genius levels. The classic example is “Rain Man,” who is depicted as easily solving numerical problems most people don't even understand, but having trouble tying his shoes. This is usually an exaggeration (by no means are all autistic people savants), and these island skills are hardly limited to math. The interesting point, though, is that even someone with many such islands can have an abysmally low overall IQ.

Some tests correct for this – Raven's Progressive Matrices, for instance, presents increasingly complex patterns that you have to complete – and this tends to level out those islands, giving an overall score that seems commensurate with the sheer genius that can be found in some areas.

What I find confusing is why we're correcting for this at all. Certainly, we know that some people, given a task, can complete that task, and of course, depending on the person, this task can be unfathomably complex. But do we really have the evidence to say that, in general, success at the task does not depend on the particular person as well? Or, more specifically, on the algorithms they're running? Is it reasonable to say that a person runs an algorithm that will solve all problems within an efficiency x (with respect to processing time and optimality of the solution)? Or should we be looking closer for islands in neurological baselines as well?

Certainly, we could change the question and ask how efficient all of the algorithms a person is running are, and from that, we could give an average efficiency, which might serve as a decent rough estimate of the efficiency with which the person will solve a problem. And for some uses, this is exactly the information we're looking for, and that's fine. But, as a general property of the people we're studying, the measure seems insufficient.
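To make the worry concrete, here is a toy sketch in Python (all domain names and numbers are hypothetical, purely for illustration): a savant-like profile of per-domain efficiencies, compared with the single averaged scalar.

```python
# A toy model with made-up numbers: a person as a profile of
# per-domain problem-solving efficiencies in [0, 1], 1.0 = optimal.
profile = {"arithmetic": 0.99, "calendar_dates": 0.95,
           "language": 0.20, "motor_planning": 0.15, "social": 0.10}

average = sum(profile.values()) / len(profile)   # the single scalar
print(f"average efficiency: {average:.2f}")      # ~0.48

# The scalar predicts mediocre performance everywhere; the profile
# predicts near-optimal arithmetic and very poor motor planning --
# exactly the 'islands' that averaging washes out.
for domain, efficiency in profile.items():
    print(f"{domain:>15}: profile={efficiency:.2f}  vs  scalar={average:.2f}")
```

Both summaries are "correct"; they just answer different prediction questions.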

If we're trying to predict specific behavior, it seems like it would be useful to be aware of whatever 'islands' exist – for instance, the common separation between algebraic and geometric approaches to math. In my experience, giving geometric explanations to someone with an algebraic approach may not be at all successful, but this is not predictive of what we might think of as the person's a priori probability of solving the problem: occasionally they seem to solve it with no more than a few algebraic hints. Of course, this is hardly hard evidence, but I think it points to what I'm getting at.

Looking at the specific algorithm that's being used (or perhaps, the class of algorithm?) can be considerably more predictive of the outcome. Actually, I can't really say that, either: looking at what could be a distinct algorithm can be considerably more predictive of the outcome. There are numerous explanations for these observations, one of which is of course that these are all the same algorithm, just trained on different inputs, and perhaps even constrained or aided by changes in the local neural architecture (as some studies on neurological correlates of autism might suggest). But computational power alone seems insufficient if we're going to explain phenomena like the autistic 'islands'. A savant doesn't want for computational power – but in some areas, they can want for intelligence.

Here's where I start getting confused: the research I've seen assumes intelligence is a single trait which could be genetically, epigenetically, or culturally transmitted. When correlates of intelligence are looked for, from what I've seen, the correlates are for the 'average' intelligence score, and largely disregard the 'islands' of ability. As I've said, this can be useful, but it seems like answering some of these questions would be useful for a more general understanding of intelligence, especially going into the neurological side of things, whether that's in wetware or hardware.

Then again, there's a good chance I'm missing something: in which case, I'd appreciate some help updating my priors.

[-]badger

This sounds like a map/territory confusion. "Intelligence" is a concept in the map, used to summarize the common correlations in success across domains. There is no assumption that fully general cross-domain optimizers exist; it's an empirical observation that most of the variance in performance across cognitive tasks happens along a single dimension. Contrast this with personality, where most of the variance is along five dimensions. We could talk about how each person reacts in each possible situation or "island", but most of this information can be compressed into five numbers.
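For readers who want to see what that empirical claim looks like mechanically, here is a minimal simulation (assuming a one-factor generative model; the loadings and noise level are made up): generate test scores that share one latent factor, then check how much variance the first principal component captures.

```python
import numpy as np

rng = np.random.default_rng(0)
n_people, n_tests = 5000, 10

g = rng.normal(size=(n_people, 1))                    # latent general factor
loadings = rng.uniform(0.6, 0.9, size=(1, n_tests))   # each test loads on g
scores = g @ loadings + 0.7 * rng.normal(size=(n_people, n_tests))

cov = np.cov(scores, rowvar=False)                    # 10 x 10 covariance
eigvals = np.linalg.eigvalsh(cov)[::-1]               # descending eigenvalues
print(f"variance explained by the first component: {eigvals[0] / eigvals.sum():.0%}")
```

The point of the sketch is only that "one dimension dominates" is a measurable property of the data, not an assumption baked into the analysis.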

We could always drill down and talk about more factors, i.e. fluid vs. crystallized intelligence or math vs. verbal. More factors give us more predictive power, though when factors are chosen well, each additional one is less useful than the last.

Though a single-factor model works well for humans, this isn't necessarily the case for more general minds. I suspect the broad concept of intelligence carves reality at its joints fairly well, but assuming so would be a mistake.

[-]satt

> Contrast this with personality, where most of the variance is along five dimensions.

Incidentally, some psychologists have recently suggested that there's a general personality factor too!

Huh, intriguing.

For everyone else: the general factor accounts for 45% of the variance, right about the amount g does on IQ tests. The factor seems to be roughly whether you have a positive or negative personality, tracking whether you are emotionally stable, extroverted, agreeable, conscientious, and open (in order of importance) or not.

Two-factor models have also been suggested, tracking plasticity (extroversion and openness) and stability (emotional stability, agreeableness, and conscientiousness), which together account for ~80% of the variance.

Thanks for this! I've really found it helpful.

I suppose part of my confusion came from reading in Eysenck about the alarmingly large number of people who scored as prodigies but, over a longitudinal study, ended up living unhappy lives in janitor-level jobs. Eysenck deals with this by discussing correlations between intelligence and some more negative personality traits, but I would have expected great enough intelligence to invent routines to compensate for that. In any case, I think this points to my further being confused about how 'success' was being defined.

I'm also puzzled at the apparent disconnect between solving problems in one's own life and solving problems on paper.

> Is it reasonable to say that a person runs an algorithm that will solve all problems within an efficiency x (with respect to processing time and optimality of the solution)?

No.

As mentioned elsewhere, it turns out that for the human population, any reasonable test of intelligence you can come up with will correlate with other reasonable tests of intelligence. The correlation isn't perfect, of course.

A basic sketch of what that looks like at the hardware level is that different regions of the brain do different things – if you want to recognize faces without your face-recognition module, you're going to have a bad time. But the underlying pieces that the modules are made out of – neurons and glial cells and so on – are the same sorts of cells. So if I have a mutation that makes my glial cells more effective than normal, then every module is going to run faster and better than a normal person's. But another mutation (or environmental factor, and so on) might only improve one section of my brain, or might favor one region at the expense of another.

So general intelligence appears to be a thing in humans, just like CPU speed is a thing in computer hardware. But CPU speed isn't the only story in how quickly your program runs – and looking for synthetic intelligence in better glial cells won't do you any good if you don't have the rest of the brain built!
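A toy sketch of the picture above (module names and multipliers are invented for illustration): treat each module's performance as a shared substrate quality times a module-specific factor. A global improvement lifts every module, producing the across-the-board correlations g summarizes, while a local change produces an island.

```python
# All numbers hypothetical. performance(module) =
#   shared substrate quality (think glial efficiency) * local factor
MODULES = ["faces", "language", "math", "spatial", "motor"]

def person(global_quality, local_tweaks=None):
    local = dict.fromkeys(MODULES, 1.0)
    local.update(local_tweaks or {})
    return {m: round(global_quality * local[m], 2) for m in MODULES}

print("typical:  ", person(1.0))
print("good glia:", person(1.3))                  # every module improves
print("island:   ", person(0.7, {"math": 2.5}))   # one favored region
```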

> As mentioned elsewhere, it turns out that for the human population, any reasonable test of intelligence you can come up with will correlate with other reasonable tests of intelligence.

Well... If it didn't, it wouldn't be a reasonable test of intelligence, after all.

Now, suppose you had a mutation which caused your brain cells to act in a way that was different – more effective in some cases, but less effective in others.

For example, I suppose that a change that made neurons release more ions faster when triggered could produce more responsive brains with faster reflexes, but the longer time needed to clear the chemistry and return a neuron to its initial state might impede other forms of thought.

Now suppose that the number of some organelle in the stem cell that becomes the brain determines the neuron response rate, and that there are many other similar factors which effect differences in the physical development of the brain but are substantially more random than genetic factors. Further grant that such variation is pro-survival.

Now, reasonable intelligence tests test only for things which have already been experienced: it is conceivable that a group of people who perform poorly on every task ever attempted by a human brain might excel at a task which has not yet been attempted. "Reasonable intelligence tests" correlate well because they test the same things as each other, because they cannot test the unknown.

According to http://lesswrong.com/lw/d27/neuroscience_basics_for_lesswrongians/, you're at least partially correct. People don't seem to actually have generalized intelligence hardware that controls everything; rather, they have specialized regions of the brain that excel at certain tasks. We might have generalized hardware in addition to our specialized regions, but our specialized regions still control a lot.

But this is not the same as saying that intelligence MUST exist only in specific islands, or that general intelligence couldn't exist, or doesn't exist in the abstract. I think that pattern recognition in general is a skill that's useful in lots of domains. Additionally, there's no way that humans have evolved separate modules for each of these individual tasks: inventing firearms, doing mathematics, tying knots, solving jigsaw puzzles, writing poetry. So it's probably safe to believe that general intelligence is a thing that exists, in some form. Or, at least, that some of our specialized regions have a lot of versatility.

The primary problem with looking at multiple measures of intelligence is that multiple measures require either a larger effect size for each measure or larger datasets. The effect sizes are already small in most cases, and gathering data is expensive. As far as I can tell, the common assumption that intelligence is a single scalar is primarily an assumption we make because it's convenient and we have no reason to expect it to be wrong in any particular direction.
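As a rough illustration of that cost (a generic power calculation, not anything from the studies discussed here; it assumes statsmodels is available and an arbitrary effect size of 0.3): testing k separate factors with a Bonferroni-corrected alpha of 0.05/k drives up the sample needed per group.

```python
from statsmodels.stats.power import TTestIndPower

solver = TTestIndPower()
for k in (1, 2, 5, 10):
    # solve for the per-group sample size n at 80% power, correcting
    # the significance threshold for k comparisons
    n = solver.solve_power(effect_size=0.3, alpha=0.05 / k, power=0.8)
    print(f"{k:>2} measure(s) -> ~{n:.0f} subjects per group")
```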

The single scalar approach isn't a perfect model, but it's good enough for most purposes and it's expensive to fix.

(By the way, you failed to close your first parentheses. (relevant xkcd)

[-]Cyan

Ironically, you also failed to close your first parentheses. (Was that deliberate? (Mine is.)

It was. Let's end this now.)))