To quote Karl Popper: "All life is problem solving." Agency is attempted problem solving, a claim not unlike the claim that rationality (attempted problem solving) and intelligence (agency) are the same thing.
Attempted problem solving comes in many forms: tradition, randomness, violence, and so on. Each may or may not solve the problem the agent sets for itself. The problem-solving method that includes itself (that asks not only "did my problem get solved?" but also "did my solution lead to a solved problem?") is science. Some things science does very well. The same goes for tradition, randomness, violence, and the rest.
There is an important difference between the "ability to efficiently achieve goals" and attempts to efficiently achieve goals. The former excludes everything except success, and success is only success until the next success. I side with the latter and say a failed attempt can both come from and add to intelligence. It's the difference between being more right and being less wrong.
Or to ask the question another way, is there such a thing as a theory of bounded rationality, and if so, is it the same thing as a theory of general intelligence?
The LW Wiki defines general intelligence as "ability to efficiently achieve goals in a wide range of domains", while instrumental rationality is defined as "the art of choosing and implementing actions that steer the future toward outcomes ranked higher in one's preferences". These definitions seem to suggest that rationality and intelligence are fundamentally the same concept.
However, rationality and AI have separate research communities. This seems to be mainly for historical reasons: people studying rationality started with theories of unbounded rationality (i.e., assuming logical omniscience or access to unlimited computing resources), whereas AI researchers started off trying to achieve modest goals in narrow domains with very limited computing resources. But rationality researchers are now trying to find theories of bounded rationality, while people working on AI are trying to achieve more general goals with access to greater amounts of computing power, so the distinction may disappear if the two sides end up meeting in the middle.
We also distinguish between rationality and intelligence when talking about humans. I understand the former as a person's ability to overcome various biases, which seems to be a set of skills that can be learned, while the latter is a kind of raw mental firepower measured by IQ tests. This suggests another possibility. Maybe (as Robin Hanson recently argued on his blog) there is no such thing as a simple theory of how to optimally achieve arbitrary goals using limited computing power. In this view, general intelligence requires cooperation between many specialized modules containing domain-specific knowledge, and "rationality" would just be one module amongst many: the one that tries to find and correct systematic deviations from ideal (unbounded) rationality caused by the other modules.
I was more confused when I started writing this post, but now I seem to have largely answered my own question (modulo the uncertainty about the nature of intelligence mentioned above). However, I'm still interested to know how others would answer it. Do we share the same understanding of what "rationality" and "intelligence" mean, and do we know what distinction someone is trying to draw when they use one of these words instead of the other?
ETA: To clarify, I'm asking about the difference between general intelligence and rationality as theoretical concepts that apply to all agents. Human rationality vs. intelligence may give us a clue to the answer, but it isn't the main thing I'm interested in here.