I think you are approximately right here, but it's important to think about just how high that upper bound is, and what activities can only be accomplished by people above that bound. It might help to think in more concrete terms about what someone who believes in religion cannot achieve that a non-believer can.
With sufficient compartmentalization of religious beliefs, I would venture to say the answer is a pretty small subset of activities. They may be important activities on a global scale, but mostly unimportant in people's day-to-day functioning.
It's very easy to imagine, or better yet, meet, theists who are far more rational in achieving their goals than even many of the people on this board.
Bobby Fischer and a chess-playing computer highlight the difference between rationality and talent. Talent is simply the ability to do a particular task well. I tend to think of rationality as the ability to successfully apply one's talents to achieving one's reasonably complex goals. ("Reasonably complex" so the computer doesn't score very high on rationality for achieving its one goal of winning chess games.)
Someone with limited talent could still be rational if he was making the best use of what strengths he did have. In a very real sense, we are all in that situation. It's easy to imagine possessing particular talents that would make achieving our goals much more likely.
That said, certain talents will be correlated with rationality, and it's an interesting question to what extent chess is one of those talents.
I first learned how to touch type on Dvorak, but switched to QWERTY when I went to college so I wouldn't have issues using other computers. I found that I could not maintain proficiency with both layouts. One skill just clobbered the other.
By interests, I mean concerns related to fulfilling values. For the time being, I consider human minds to be the only entities complex enough to have values. For example, it is very useful to model a cancer cell as having the goal of replicating, but I don't consider it to have replicating as a value.
The cancer example also shows that our own cells don't fulfill or share our values, and yet we still model consumption by cancer cells as consumption by a human being.
If you really want to ignore direct consumption by machines - and pretend that the machines are all working exclusively for humans, doing our bidding precisely - then you have GOT to account for people and companies buying things for the machines that they manage - or your model badly loses touch with reality.
I think I might have the biggest issue with this line. Nobody is pretending that machines are all working exclusively for humans, any more than we pretend our cells are working exclusively for us. The idea is that we account for machine consumption the same way we account for the consumption of our own cells: by attributing it to the human consumers.
Psychosurgery or pharmaceutical intervention to encourage some of the more positive autistic spectrum cognitive traits seems more likely to work than this. We are far from identifying the genetic basis of intelligence or exceptional intelligence, never mind an aspect as specific as rationality.
It's also not clear that it is in someone's self-interest to do this. I know you said retroviral genetic engineering, but for now I'll assume that it would only be possible on embryos. In that case, if someone really wanted grandchildren, it is not clear that making these alterations in her children would be the best way to achieve that goal.
Would this analysis apply to the ecosystem as a whole? Should we think of fungus as consuming low entropy plant waste and spitting out higher entropy waste products? Is a squirrel eating an acorn part of the economy?
Machines, as they currently exist, have no interests of their own. Any "interests" they may appear to have are as real as the "interest" gas molecules have in occupying a larger volume when the temperature increases. Computer viruses are simply a way that machines malfunction. The fact that machines are not exclusively on our side simply means that they do not perfectly fulfill our values. Nothing does.
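(As an aside, the gas behavior in that analogy is just the ideal gas law at constant pressure; the apparent "goal" of expansion falls directly out of the physics:

$$PV = nRT \;\Rightarrow\; V = \frac{nRT}{P}, \quad\text{so } V \propto T \text{ at fixed } n \text{ and } P.$$

Nobody is tempted to say the gas values expansion, and the same deflationary reading applies to machine "interests.")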
So, if, say, a million people owned all of the machines in the world, and they had no use for the human labor of the other billions of people in the world, you would still classify the economy as very effective?
I guess the question is what counts as an economic crash. A million extremely well-off people with machines to tend to their every need, and billions with no useful skills with which to acquire capital, seems like a crash to most of the people involved.
At this point I was mostly wondering if there were any motivating anecdotes such as Phineas Gage or gourmand syndrome, except with a noticeable personality change towards rationality. Someone changing his political orientation, becoming less superstitious, or gambling less as a result of an injury could be useful (and, as a caveat, all could be caused by damage that has nothing to do with rationality).
I realize that "brain module" != "distinct patch of cortex real estate", but have there been any cases of brain damage that have increased a person's rationality in some areas?
I am aware that depression and certain autism spectrum traits have this property, but I'm curious if physical trauma has done anything similar.
One should also know everything, but clearly that's impossible.
There are some areas of knowledge that are so unlikely to yield anything useful that it's not worth spending any time being curious about them. For humanity in general, psi phenomena now fall into this category. There was a time when they didn't, but it's safe to say that time is over. For me as an individual, string theory falls into that category. I'm glad there are some people investigating it, but the effort required for me to have anything but a superficial understanding of the topic is extremely unlikely to help me achieve anything.