A lot of this seems extremely contrary to my intuitions.
Poor performance (for instance on tests) isn't the result of having a high rate of random errors, but of exhibiting repeatable bugs. This means that people with worse performance will be more predictable, not less: to predict the better performer you have to actually look at the universe more, whereas to predict the worse performer you only have to look at the agent's move history.
(For that matter, we can expect this from Bayes: if you're learning poorly from your environment, you're not updating, which means you're generating behavior mostly from your priors.)
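The contrast between the two models can be sketched in a few lines of Python. This is a toy simulation with made-up "students" and error rates, not a claim about real test-takers: one student has a single repeatable bug, the other makes random errors.

```python
import random

def buggy_student(a, b):
    # Repeatable bug: always adds instead of subtracting.
    return a + b

def noisy_student(a, b, rng, p_error=0.5):
    # Random errors: with probability p_error, off by a random nonzero amount.
    answer = a - b
    if rng.random() < p_error:
        answer += rng.randint(1, 9)
    return answer

questions = [(a, a % 7) for a in range(1, 21)]  # twenty subtraction problems

# The buggy student repeats exactly the same wrong answers on a retake...
first = [buggy_student(a, b) for a, b in questions]
second = [buggy_student(a, b) for a, b in questions]
print(first == second)  # → True

# ...while the noisy student's wrong answers differ between sittings.
rng = random.Random(0)
sitting1 = [noisy_student(a, b, rng) for a, b in questions]
sitting2 = [noisy_student(a, b, rng) for a, b in questions]
print(sitting1 == sitting2)
```

Under the bug model, the student's move history really is all you need; under the error model, the history tells you the error rate but not which answers will be wrong next time.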
The fact is that idiots can often screw things up more than selfish people.
This seems to be a political tenet or tribal banner, not a self-evident fact.
(Worse, it borders on the "intelligence leads necessarily to goodness" meme, which is a serious threat to AI safety. A more intelligent agent is better equipped to achieve its goals, but is not necessarily better to have around to achieve your goals if those are not the same.)
By more predictable, I meant greater accuracy in predicting, not that less computing power is required to predict. Someone who performs well on tests is perfectly predictable: they always get the right answer. Someone with poor performance can't be any more predictable than that, and is often less.
Just because the bug model has some value doesn't mean that the error model has none. I would be surprised if a poorly performing student, given a test twice, were to give exactly the same wrong answers both times. I don't understand your claim that people with worse performance are more predictable.
If you believe that science is about describing things mathematically, you can fall into a strange sort of trap where you come up with some numerical quantity, discover interesting facts about it, use it to analyze real-world situations - but never actually get around to measuring it. I call such things "theoretical quantities" or "fake numbers", as opposed to "measurable quantities" or "true numbers".
An example of a "true number" is mass. We can measure the mass of a person or a car, and we use these values in engineering all the time. An example of a "fake number" is utility. I've never seen a concrete utility value used anywhere, though I always hear about nice mathematical laws that it must obey.
The difference is not just about units of measurement. In economics you can see fake numbers happily coexisting with true numbers using the same units. Price is a true number measured in dollars, and you see concrete values and graphs everywhere. "Consumer surplus" is also measured in dollars, but good luck calculating the consumer surplus of a single cheeseburger, never mind drawing a graph of aggregate consumer surplus for the US! If you ask five economists to calculate it, you'll get five different indirect estimates, and it's not obvious that there's a true number to be measured in the first place.
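To make the "five different estimates" point concrete, here is a toy surplus calculation. It assumes, purely for illustration, that each economist fits a linear demand curve through the same observed price and quantity but posits a different choke price (the price at which demand falls to zero); the numbers are invented:

```python
def linear_demand(choke_price, q_observed, p_observed):
    # Linear inverse-demand curve p(q) through the observed point.
    slope = (choke_price - p_observed) / q_observed
    return lambda q: choke_price - slope * q

def consumer_surplus(demand, price, quantity, steps=100000):
    # Numerically integrate (willingness to pay - price) from 0 to quantity.
    dq = quantity / steps
    return sum((demand((i + 0.5) * dq) - price) * dq for i in range(steps))

p_obs, q_obs = 5.0, 100.0  # one observed (price, quantity) point

# Two "economists" fit different demand curves to the same observation...
cs_a = consumer_surplus(linear_demand(10.0, q_obs, p_obs), p_obs, q_obs)
cs_b = consumer_surplus(linear_demand(8.0, q_obs, p_obs), p_obs, q_obs)
print(round(cs_a), round(cs_b))  # → 250 150
```

Both curves fit the observed data perfectly, yet the surplus figures differ by a factor of almost two; the quantity being "measured" lives entirely in the unobserved part of the curve.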
Another example of a fake number is "complexity" or "maintainability" in software engineering. Sure, people have proposed different methods of measuring it. But if they were measuring a true number, I'd expect them to agree to the 3rd decimal place, which they don't :-) Multiple measuring methods that agree with each other are one of the marks of a true number. Another sign is what happens when two of these methods disagree: do people say that they're both equally valid, or do they insist that one must be wrong and try to find the error?
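The disagreement is easy to reproduce with two crude "complexity" metrics invented here for the example: a branch count (cyclomatic-style) and a distinct-identifier count (Halstead-style). They rank the same two snippets in opposite order:

```python
import re

SNIPPET_A = """
def f(xs):
    total = 0
    for x in xs:
        if x > 0:
            total += x
    return total
"""

SNIPPET_B = """
def g(alpha, beta, gamma, delta, eps, zeta, kappa):
    return (alpha * beta + gamma * delta - eps * zeta) / (alpha + beta + gamma + delta + eps + zeta + kappa)
"""

def branch_complexity(src):
    # Metric 1: count branching keywords (a crude cyclomatic-style measure).
    return len(re.findall(r"\b(if|for|while|elif|else)\b", src))

def vocabulary_complexity(src):
    # Metric 2: count distinct identifiers (a crude Halstead-style measure).
    return len(set(re.findall(r"\b[a-zA-Z_]\w*\b", src)))

# Metric 1 says A is more complex; metric 2 says B is.
print(branch_complexity(SNIPPET_A), branch_complexity(SNIPPET_B))        # → 2 0
print(vocabulary_complexity(SNIPPET_A), vocabulary_complexity(SNIPPET_B))  # → 9 10
```

Neither metric is wrong by its own definition, and there's no error to hunt down when they disagree, which is exactly the symptom of a fake number.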
It's certainly possible to improve something without measuring it. You can learn to play the piano pretty well without quantifying your progress. But we should probably try harder to find measurable components of "intelligence", "rationality", "productivity" and other such things, because we'd be better at improving them if we had true numbers in our hands.