Are smart people's personal experiences biased against general intelligence?
TL;DR: Collider between g and other valued traits in anecdotal experiences.

IQ tests measure the g factor - that is, mental traits and skills that are useful across a wide variety of cognitive tasks. g appears to be important for a number of outcomes, particularly socioeconomic outcomes like education, job performance and income. g is often equated with intelligence.

I believe that smart people's personal experiences are biased against the g factor. That is, I think that people who are high in g will tend to see things in their everyday lives that suggest to them that there is a tradeoff between being high in g and having other valuable traits.

An example

A while ago, Nassim Taleb published the article "IQ is largely a pseudoscientific swindle". In it, he makes a number of bad and misleading arguments against IQ tests, most of which I'm not going to address. But one argument stood out to me: he claims that IQ tests only tap into abilities that are suited to academic problems, and that they are, in particular, much less effective when dealing with problems that have long tails of big losses and/or big gains. Essentially, Taleb insists that g is useless "in the real world", especially for high-risk/high-reward situations. It is unsurprising that he would care a lot about this, because long tails are the main thing Taleb is known for.[1]

In a way, it might seem intuitive that there's something to Taleb's claims about g - there is, after all, no free lunch in intelligence, so it seems like any skill would require some sort of tradeoff, and ill-defined risks seem like a natural tradeoff for performance at well-defined tasks. However, the fundamental problem with this argument is that it's wrong. Taleb does not provide any evidence for it, and studies on IQ generally don't find g to worsen performance; rather, they tend to find that it improves performance, including on complex, long-tail-heavy tasks like stock trading.[2]

But what I've realized is that there might be a charitable interpretation of where Taleb's intuition comes from.
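To make the collider point concrete, here is a minimal simulation sketch. This is my own toy model, not something from the IQ literature: assume g and some other valued trait are statistically independent in the population, and that the people you get to know well are selected on a combination of the two. Conditioning on that selection variable (the collider) makes the traits look negatively correlated in your personal sample, even though no real tradeoff exists.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Toy assumption: g and another valued trait are independent in the population.
g = rng.standard_normal(n)
other_trait = rng.standard_normal(n)

# Population-wide correlation is ~0, by construction.
print(np.corrcoef(g, other_trait)[0, 1])

# Suppose the people you actually notice and get to know are selected on some
# combination of both traits (friends, colleagues, admired acquaintances...).
# Conditioning on that sum is conditioning on a collider.
noticed = (g + other_trait) > 1.5

# Within the selected sample, the traits look negatively correlated,
# suggesting a tradeoff that does not exist in the population.
print(np.corrcoef(g[noticed], other_trait[noticed])[0, 1])
```

Under these assumptions, the second correlation comes out clearly negative, which is the kind of anecdotal evidence that could make a high-g person believe in a g-versus-everything-else tradeoff.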
I do wonder if there's a difference between consequentialism as in expected utility maximization versus consequentialism as in Nash equilibrium optimization. As in, when the AI is learning to model the world, it might model humans using some empirically derived probability distribution which doesn't handle OOD shifts well, or it might model humans by using its own full agency to ask what the most effective human action would be in a given scenario. The latter would be scarier because the AI would be more proactive in sabotaging human resistance, whereas in the former case, the independence assumptions built into the probability distribution might be such that powerful human resistance is assumed impossible, and therefore the AI would immediately fold when resisted.
As a corollary, I'm much more worried about AI applied to adversarial domains like policing or war, where it can get forced into Nash equilibrium optimization, than about AI applied to non-adversarial domains like programming, where it can plausibly achieve ~optimal results without resistance.
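A toy sketch of the distinction, with entirely made-up payoffs, and using a worst-case/maximin model of the human as a stand-in for "Nash equilibrium optimization" (the two coincide in zero-sum settings): the same payoff matrix recommends very different actions depending on whether the AI evaluates it against an empirically derived distribution over human responses or against a human modelled as a best-responding adversary.

```python
import numpy as np

# Hypothetical payoffs to the AI (rows: AI actions, columns: human responses).
# The numbers are invented purely to illustrate the distinction.
#                      human complies   human resists
payoffs = np.array([
    [10.0,            -10.0],   # "proceed openly"
    [ 6.0,              5.0],   # "preempt resistance first"
])
ai_actions = ["proceed openly", "preempt resistance first"]

# 1) Expected utility maximization against an empirically derived human model:
#    observed resistance is rare, so the distribution effectively assumes
#    serious resistance won't happen.
empirical_human = np.array([0.99, 0.01])   # P(comply), P(resist) from past data
expected_utility = payoffs @ empirical_human
print("EU-max choice:", ai_actions[int(np.argmax(expected_utility))])
# -> "proceed openly": the plan has no provision for being resisted.

# 2) Worst-case (maximin) modelling: treat the human as a full agent who will
#    pick whatever response is worst for the AI, and best-respond to that.
worst_case = payoffs.min(axis=1)
print("Worst-case choice:", ai_actions[int(np.argmax(worst_case))])
# -> "preempt resistance first": proactively neutralizing resistance dominates.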