Or to ask the question another way, is there such a thing as a theory of bounded rationality, and if so, is it the same thing as a theory of general intelligence?

The LW Wiki defines general intelligence as "ability to efficiently achieve goals in a wide range of domains", while instrumental rationality is defined as "the art of choosing and implementing actions that steer the future toward outcomes ranked higher in one's preferences". These definitions seem to suggest that rationality and intelligence are fundamentally the same concept.

However, rationality and AI have separate research communities. This seems to be mainly for historical reasons: people studying rationality started with theories of unbounded rationality (i.e., with logical omniscience or access to unlimited computing resources), whereas AI researchers started off trying to achieve modest goals in narrow domains with very limited computing resources. But rationality researchers are now trying to find theories of bounded rationality, while people working on AI are trying to achieve more general goals with access to greater amounts of computing power, so the distinction may disappear if the two sides end up meeting in the middle.

We also distinguish between rationality and intelligence when talking about humans. I understand the former as the ability of someone to overcome various biases, which seems to consist of a set of skills that can be learned, while the latter is a kind of mental firepower measured by IQ tests. This seems to suggest another possibility. Maybe (as Robin Hanson recently argued on his blog) there is no such thing as a simple theory of how to optimally achieve arbitrary goals using limited computing power. In this view, general intelligence requires cooperation between many specialized modules containing domain specific knowledge, so "rationality" would just be one module amongst many, which tries to find and correct systematic deviations from ideal (unbounded) rationality caused by the other modules.

I was more confused when I started writing this post, but now I seem to have largely answered my own question (modulo the uncertainty about the nature of intelligence mentioned above). However I'm still interested to know how others would answer it. Do we have the same understanding of what "rationality" and "intelligence" mean, and know what distinction someone is trying to draw when they use one of these words instead of the other?

ETA: To clarify, I'm asking about the difference between general intelligence and rationality as theoretical concepts that apply to all agents. Human rationality vs intelligence may give us a clue to that answer, but isn't the main thing that I'm interested in here.


From what I've seen in terms of word use, the word "rationality" is mentioned when picking one of several explicitly listed choices, whereas behaviours commonly described as "intelligent" usually involve actions from a very large number of potential choices. Hardly anyone says it is irrational to not invent something (but it certainly is unintelligent to be unable to invent something simple).

A lot of IQ tests are multiple choice (i.e., choose from a small number of possible answers for each question), but people complain that they don't measure rationality.

Picture a mind that has much cloudier memory of what happened beyond 5..10 minutes ago, starting at the age of 20. This wouldn't impact the IQ score much, but it would impact intelligence on tasks that you have to think about for more than 5..10 minutes. Plenty of things are not measured by IQ tests, things that are necessary for all sorts of problem solving. From what I gather, Stanovich is quite open to there being a wide array of intelligence traits not measured by IQ tests; it's the people who sell a rationality enhancement that doesn't affect IQ test score who need the rationality to reside entirely within the non-measured traits (rather than only partially).

Consider those 'pick a correct picture to continue the sequence' tests. They're essentially testing your ability to assess complexity (and assign priors based on complexity), a traditionally 'rational' thing. As well, they're impacted by your tendency to think hard even when encountering an informal problem. Of course, the correlation is not going to be perfect, but there's going to be a correlation.

Actually, thinking more about the word use... it seems to me that among the multiple choice situations, there's a certain tendency in certain circles to use the word 'intelligence' when the answer is demonstrably correct, and the word 'rationality' when you couldn't properly demonstrate anything.

A collision of overconfidence with reality can yield either a reduction in overconfidence, or reduction in overconfidence's scope to answers that can't be checked...

[anonymous]

Speaking as someone who is far more intelligent than rational, I think it boils down to system 1 vs system 2. Intelligence then is how fast your system 2 thinking is and how many constraints you can keep in your head while thinking things through. Rationality on the other hand is how good your system 1 is in knowing when to defer to system 2 (noticing your confusion) and how easily the results of your system 2 thinking get integrated into system 1. Both are things in which I regularly fail, even though when I deliberately focus on something I can usually think more complex things through than most other people.

I would rather argue that Rationality is the extent to which the software your system 2 is running is epistemologically sound.

[anonymous]

That's certainly part of it. I still remember using my intelligence to defend my religious beliefs. However, the best epistemology is useless unless you act on it.

I believe Wei is specifically talking about epistemic rationality, though.

[anonymous]

You're right. I was missing the point.

System 2 can be much crazier than System 1, so it's not just about integration between them, content also matters a great deal.

I like your description. If you take a hypothetical AI which does not have the System 1/System 2 partition humans do, there is no difference between intelligence and rationality, only the overall optimization power.

Using a car analogy, I would say that intelligence is how strong your engine is. Whereas rationality is driving in a way where you get to your destination efficiently and alive. Someone can have a car with a really powerful engine, but they might drive recklessly or only have the huge engine for signalling purposes while not actually using their car to get to a particular destination.

I don't know if this analogy has been used before but how about: "Intelligence is firepower, rationality is aim." (And the information you have to draw from is ammunition maybe?)

You can draw parallels in terms of precision and consistency, systematically over/undershooting, and it works well with the expression "blowing your foot off"

I like the mechanical analogy, here's a slightly different version. IQ is like the horsepower/torque of an engine. You might have a really fast engine but it has to be hooked up to something or it will just sit there spinning really fast making lots of noise. Rationality is learning about all the things an engine can be used to do. There are all sorts of useful modules that you didn't know existed. An engine can run anything from a car, to a textile factory, you just have to have the right modules hooked up.

Now bring it back from the analogy. Literally every single thing in human civilization is run off the same engine, the human brain. They just have different modules hooked up to them. Some modules are complex and take years to learn. Some are so complex no one is really sure how they work. Rationality training is acknowledging that exploring the space of possible modules and figuring out how to hook them up in general is probably powerful, if there is sufficient overlap between domains.

or only have the huge engine for signalling purposes

In other words, it's not how big it is, it's how you use it that matters?

X-D

I think I agree with this and would frame it as: intelligence is for solving problems, and rationality is for making decisions.

I'd say the relationship between intelligence and rationality is akin to the one between muscles and gymnastics.

[anonymous]

That sounds wise, but I'm having trouble understanding what it is you are actually saying. How exactly are you defining intelligence and rationality here? Wei Dai gave definitions with demonstrable overlap; you claim they are different. How?

Intelligence/muscles = a natural faculty every human is born with, in greater or lesser degree.

Rationality/gymnastics = a teachable set of techniques that help you refine what you can do with said natural faculty.

Probably this explains why the distinction between intelligence and rationality makes sense for humans (where some skills are innate, and some skills are learned), but doesn't necessarily make sense for self-improving AIs.

Intelligence is about our biological limits, which determine how much optimizing power we can produce in the short term (on the scale of seconds), which is more or less fixed. Rationality is about using this optimizing power over the long term. Intelligence is how much "mental energy" you can generate per second. Rationality is how you use this energy, if you are able to accumulate it, etc.

Seems like in humans, most of this generated energy is wasted, so there can be a great difference between how much "mental energy" you can generate per second, and whether you can accumulate enough "mental energy" to reach your life goals. (Known as: "if you are so smart, why aren't you rich?") A hypothetical perfect Bayesian machine could use all the "mental energy" efficiently, so there would be some equation connecting its intelligence and rationality.

However, rationality and AI have separate research communities.

There is a big difference between increasing the rationality/intelligence in a human and building a rational/intelligent agent from the ground up.

Perhaps (and I'm just thinking off the cuff here) rationality is just the subset of general intelligence that you might call meta-intelligence - ie, the skill of intelligently using your first-order intelligence to best achieve your ends.

Have you looked at any of Stanovich's work? Rationality and the Reflective Mind is a good book, albeit not on Libgen yet.

I'm aware of some of his work, but haven't looked into it deeply. I added a section to the OP (quoted below) to clarify what question I'm most interested in. Do you think Stanovich's work helps much in answering it?

To clarify, I'm asking about the difference between general intelligence and rationality as theoretical concepts that apply to all agents. Human rationality vs intelligence may give us a clue to that answer, but isn't the main thing that I'm interested in here.

I think Stanovich gives a persuasive model which, while complex, seems to explain the various results in a more sophisticated way than just System I vs System II. I'm not sure to what extent you could argue that his breakdown is universal, but I think it's useful to ask why his model (to simplify his scheme: intelligence as modeling, sometimes invoked to solve problems, with specific pieces of knowledge allowing solution of particularly tricky problems) would not be universal. It may be that to get an efficient agent, you do need to have a lot of hardwired frugal processing (System I) which occasionally punts to more detailed simulation/modeling cognition (System II) which is augmented by general heuristics/theories/pieces of knowledge. (I seem to recall reading a paper from the Schmidhuber lab where they found that one of their universal agents ran much better when they encoded some relevant knowledge into its program, but I can't find it again. It might be their Optimal Ordered Problem Solver.)

While acknowledging the wiki definitions, in order to properly distinguish between the two, I deem it useful to analyze the etymology of the terms. Nothing new, after all. Just modern. Let us not forget our roots...

Interestingly, the two concepts originate from opposites.

"INTELLIGENCE" comes from the latin word INTELLIGENTIA - INTELLIGENTIAE, out of the latin verb INTELLIGERE, wich derives out of INTUS (within) or INTRA (inside) and LEGERE (choose).

"RATIONALITY" comes from the latin word RATIO - RATIONIS (calculation, reason, advantage, part), same root as in RATION, RATIO (relationship or quotient), RATIONALE or RATE...

In both cases, the aim of a higher self-awareness obviously "lurks upon the waters"...

But the term intelligence appears to subtend to the idea of SELECTING, whereas the term rationality appears to imply the concept of COMPARING.

In the former case, of course, one must first separate objects that she already has knowledge of. In the latter, one must first put together objects that she has no knowledge of.

XO

[anonymous]

To quote Karl Popper: 'all life is problem solving.' Agency is attempted problem solving. That is a statement not unlike the statement that rationality (attempted problem solving) and intelligence (agency) are the same.

Attempted problem solving comes in many forms. One of them is tradition, one of them is randomness, one of them is violence, etc. They do or don't solve the problem the agent sets for him or herself. The problem solving method that includes itself - that includes 'did my solution lead to a solved problem' as well as 'did my problem get solved' - is science. Some things science does very well. Same for tradition, randomness, violence etc.

There are important differences between "ability to efficiently achieve goals" and attempts to efficiently achieve goals. The former excludes all except success, and success is only success until the next success. I side with the latter and say a failed attempt can come from and add to intelligence. It's the difference between being more right and being less wrong.

In some sense I think General Intelligence may contain Rationality. We're just playing definition games here, but I think my definitions match the general LW/Rationality Community usage.

An agent which perfectly plays a solved game ( http://en.wikipedia.org/wiki/Solved_game ) is perfectly rational. But its intelligence is limited, because it can only accept a limited type of inputs, the states of a tic-tac-toe board, for instance.
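To make that concrete, here is a minimal sketch (not from the original comment) of such an agent: a perfect tic-tac-toe player built from plain minimax. Within its tiny domain it never plays a losing move, but it accepts only one kind of input, a 3x3 board, so calling it generally intelligent would be a stretch. The function names are my own invention for illustration.

```python
from functools import lru_cache

# The eight winning lines of a 3x3 board, indexed 0..8.
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def winner(board):
    """Return 'X' or 'O' if that player has three in a row, else None."""
    for a, b, c in LINES:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

@lru_cache(maxsize=None)
def value(board, player):
    """Minimax value for 'X' of the position with `player` to move."""
    w = winner(board)
    if w:
        return 1 if w == 'X' else -1
    if ' ' not in board:
        return 0  # draw
    nxt = 'O' if player == 'X' else 'X'
    moves = [value(board[:i] + player + board[i+1:], nxt)
             for i, cell in enumerate(board) if cell == ' ']
    return max(moves) if player == 'X' else min(moves)

def best_move(board, player):
    """Index of an optimal move for `player`; ties are broken arbitrarily."""
    nxt = 'O' if player == 'X' else 'X'
    scored = [(value(board[:i] + player + board[i+1:], nxt), i)
              for i, cell in enumerate(board) if cell == ' ']
    return (max(scored) if player == 'X' else min(scored))[1]

# With perfect play tic-tac-toe is a draw, so every opening move has value 0.
print(best_move(' ' * 9, 'X'))
```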

We can certainly point to people who are extremely intelligent but quite irrational in some respects--but if you increased their rationality without making any other changes I think we would also say that they became more intelligent. If you examine their actions, you should expect to see that they are acting rationally in most areas, but have some spheres where rationality fails them.

This is because, in my definition at least:

Intelligence = Rationality + Other Stuff

So rationality is one component of a larger concept of Intelligence.

General Intelligence is the ability of an agent to take inputs from the world, compare it to a preferred state of the world (goals), and take actions that make that state of the world more likely to occur.

Rationality is how accurate and precise that agent is, relative to its goals and resources.

General Intelligence includes this, but also has concerns such as

  • being able to accept a wide variety of inputs
  • having lots of processing power
  • using that processing power efficiently

I don't know if this covers it 100%, but this seems like it matches general usage to me.

Some off the cuff thoughts:

Can you imagine an intelligent agent that is not rational? And vice versa, can you imagine a rational agent that is not intelligent?

AIXI is "rational" (believe that it's vNM-rational in the literature). Is "instrumental rationality" a superset of this definition?

In the case of human rationality and human intelligence, part of it seems a question of scale. E.g. IQ tests seem to measure low level pattern matching, while "rationality" in the sense of Stanovich refers to more of a larger scale self reflective corrective process. (I'd conjecture that there are a lot of low level self reflective corrective processes occurring in an IQ test as well).

Intelligence is INT while rationality is WIS.

WIS is more how good your inbuilt heuristics are, which is not quite the same as the way "rationality" is used around here.

Ask five gamers what WIS means, get five answers.

It is said by some that intelligence is the capability to adapt, to change, while rationality is doing the most logical thing. For instance: a computer that makes investments by itself - a rational computer, using logic to maximize profits. A computer that invents something - a smart, intelligent computer, that has developed some level of intelligence and is using that to change its environment instead of merely working on it.

Intelligence is how efficiently and effectively you can model the real world or a problem.

Rationality is the ability to overcome biases and apply that model, given sufficient calibration and credence, to generate the most expected value.

[anonymous]

I think you are confusing unqualified 'intelligence' with 'general intelligence'. There is no factor X for which you can just multiply the computational power of Deep Blue and end up with a driverless car. The ability to adapt to entirely novel domains requires a think-about-thinking ability which Wei Dai identifies as being perhaps identical to bounded rationality.

The first definition implies flexibility, the second doesn't. Non human animals and machines are often much more efficient than humans at achieving their goals, but are not counted as more intelligent because their goals are fixed. A rock is highly efficient at staying in the same place.

Rationality = having accurate beliefs and useful emotions.

Intelligence = the capacity to learn.

Rationality = having accurate beliefs and useful emotions

Not only. A large part of it is having a useful toolset and knowing which tools to apply when. Accurate beliefs are the result of epistemic rationality, but not the process itself.

Intelligence = the capacity to learn

Not only. I'd say it's mostly about the ability to process complex information rapidly and correctly. Smart people think fast and think clearly. Plus, a significant part of intelligence is how large/complicated a structure you can hold and manipulate in your mind.

Intelligence these days usually means IQ. If you run a bunch of different mental tests and run principal component analysis on the results, the largest of the factors that comes out is supposed to be "g", and IQ measures how you score in g in relation to other people in your society.

Hopefully once we have real rationality tests, we can run a bunch of rationality tests plus a bunch of IQ tests, run principal component analysis on the results, and get a value for rationality that is distinct from IQ.
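For readers unfamiliar with the procedure, here is a hedged sketch of what that extraction looks like in code. The data is synthetic and the test names and factor loadings are invented purely for illustration; the point is only that the first principal component of a correlated test battery plays the role of "g".

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_people = 500

# Simulate a latent general factor plus test-specific noise (loadings invented).
g = rng.normal(size=n_people)
tests = np.column_stack([
    0.7 * g + rng.normal(scale=0.7, size=n_people),  # "vocabulary"
    0.6 * g + rng.normal(scale=0.8, size=n_people),  # "matrix reasoning"
    0.8 * g + rng.normal(scale=0.6, size=n_people),  # "digit span"
    0.5 * g + rng.normal(scale=0.9, size=n_people),  # "processing speed"
])

# Standardize each test, then take the first principal component as the
# "g" estimate; its explained variance ratio shows how dominant it is.
scores = StandardScaler().fit_transform(tests)
pca = PCA(n_components=2).fit(scores)
g_estimate = pca.transform(scores)[:, 0]

print("variance explained by first component:", pca.explained_variance_ratio_[0])
# The sign of a principal component is arbitrary, so compare absolute correlation.
print("correlation with true g (absolute):", abs(np.corrcoef(g_estimate, g)[0, 1]))
```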

once we have real rationality tests

What is a "real rationality test"? A successful startup? Hunger Games? Whoever dies with the most toys wins?

As far as I understand, there are currently people associated with CFAR who are trying to build a rationality test. Maybe the test will successfully measure a new thing that's distinct from IQ. If it does, it can be useful to call that rationality.

We had that debate a while ago, but to repeat it, I might use the word "real" in a sense that you aren't used to. Here I'm a constructivist. If you can construct a new rationality test that measures something new that's distinct from IQ/g when you run PCA, then you are free to call that new variable "real rationality".

If you can construct a new rationality test that measures something new that's distinct from IQ/g when you run PCA, then you are free to call that new variable "real rationality"

There are a lot of tests which output something distinct from g. The real (ahem) question is what the variable measured means, or, more specifically, what directly observable characteristics does it correlate with.

For IQ there are a lot of studies showing how the number which comes out of a test correlates with a variety of interesting things (like wealth, health, educational achievements, etc.).

Let's say you designed a test and it outputs a number. What does this number have to correlate with in order for you to claim it's a measure of rationality?

There are a lot of tests which output something distinct from g.

Emotional intelligence would be one example. It turns out having a notion of emotional intelligence is useful. If you take Gardner's 7 intelligence model, it turns out that those 7 values don't produce distinct factors in PCA.

The name of the game is finding a new value that's robust if you change the test around and that's orthogonal to established psychometric measurements.

That new value might be something that works for various different tests of immunity to mental bias.

Ideally you find something that isn't simply "a EQ + b IQ" but that's really orthogonal. Then you can study whether your new measurement is useful to predict things like wealth or educational achievement, and whether a linear model that has information about IQ and your new measurement of rationality makes better predictions of educational achievement than a linear model that has information about IQ alone. At that time you see what the measurement can really do in reality and see whether it's useful.
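A rough sketch of that comparison, with entirely synthetic data and made-up effect sizes, just to show the shape of the incremental-validity check being described:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n = 1000

iq = rng.normal(100, 15, n)
rationality = rng.normal(0, 1, n)  # assumed orthogonal to IQ, for illustration
# Invented outcome: part IQ, part "rationality", part noise.
achievement = 0.05 * iq + 0.3 * rationality + rng.normal(0, 1, n)

# Model 1: IQ alone.  Model 2: IQ plus the new rationality measure.
X_iq = iq.reshape(-1, 1)
X_both = np.column_stack([iq, rationality])

r2_iq = cross_val_score(LinearRegression(), X_iq, achievement, cv=5).mean()
r2_both = cross_val_score(LinearRegression(), X_both, achievement, cv=5).mean()

print(f"cross-validated R^2, IQ only:          {r2_iq:.3f}")
print(f"cross-validated R^2, IQ + rationality: {r2_both:.3f}")
```

If the second number is reliably higher on real data, the new measurement is doing predictive work that IQ alone does not.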

I don't think that you can say beforehand what success will look like. It's a lot about trying something and seeing whether we get a new number that's useful and that bears some relationship to what we call rationality.

I'm pretty sure analysis found that EQ was fully explained by "a IQ + b Openness".

Could you link to such an analysis? It would surprise me.

Didn't have a particular source in mind, was going off memory.

Looks like there's some debate over whether it has predictive power, but consensus is that EQ is a collection of mostly unrelated traits, and is heavily entangled with the big five, particularly neuroticism and openness, in an overlapping way. (My memory overstated the case somewhat.) This looks like a relatively representative study, and here are the abstract and docx of a study which concluded that EQ had no meaningful predictive power.

As far as the first study goes, I don't see why we should control for income and marital status. If EQ increases income in a way that increases life satisfaction then EQ is a highly useful construct.

That said, there are political problems with treating openness as a variable to be maximized. Openness correlates with voting left in US elections. Teaching people to increase their abilities of emotional management might be politically easier to communicate.

A lot of personality tests are also easy to game if a person wants to score highly. The notion of intelligence supposes that getting high values in the test needs actual skill.

that's orthogonal to established psychometric measurements

That would be the second PCA component :-D

I don't think that you can say beforehand what success will look like.

I am not asking what success will look like. I am asking what metrics will you be using to decide if something is successful or not.

I am not asking what success will look like. I am asking what metrics will you be using to decide if something is successful or not.

I don't think it's useful to decide beforehand on which metric to use when doing exploratory research.

I don't think it's useful to decide beforehand on which metric to use when doing exploratory research.

'Cheshire Puss,' she began, rather timidly, as she did not at all know whether it would like the name: however, it only grinned a little wider. 'Come, it's pleased so far,' thought Alice, and she went on. 'Would you tell me, please, which way I ought to go from here?'

'That depends a good deal on where you want to get to,' said the Cat.

'I don't much care where--' said Alice.

'Then it doesn't matter which way you go,' said the Cat.

'--so long as I get SOMEWHERE,' Alice added as an explanation.

'Oh, you're sure to do that,' said the Cat, 'if you only walk long enough.'

You confuse metrics to decide where to look with metrics to decide whether you found something. Those two aren't the same thing.

You decide where to look based on what you know in the present but you decide whether you found something based on information that you find in the future.

It seems to me that, as you point out yourself, the concepts mean different things depending on whether you apply them to humans.

Abstract intelligence and abstract rationality are pretty much the same thing as far as I understand. The first is "ability to efficiently achieve goals in a wide range of domains", and the second one is a combination of instrumental rationality and epistemic rationality, which amounts to basically "solving problems given your information" and "acquiring information". When put together, the two types of rationality amount to "gather information about domains and achieve your goals within them", or phrased in another way "ability to efficiently achieve goals in a wide range of domains".

When applied to humans these words mean slightly different things and I think the analogies presented by the other commenters are accurate.