TL;DR: Collider between g and valued traits in anecdotal experiences.

IQ tests measure the g factor, that is, mental traits and skills that are useful across a wide variety of cognitive tasks. g appears to be important for a number of outcomes, particularly socioeconomic ones like education, job performance, and income, and it is often equated with intelligence.

I believe that smart people's personal experiences are biased against the g factor. That is, I think that people who are high in g will tend to see things in their everyday lives that suggest a tradeoff between being high in g and having other valuable traits.

An example

A while ago, Nassim Taleb published the article "IQ is largely a pseudoscientific swindle". In it, he makes a number of bad and misleading arguments against IQ tests, most of which I'm not going to address. But one argument stood out to me: he claims that IQ tests only tap into abilities suited to academic problems, and that they are in particular much less effective for problems with long tails of big losses and/or big gains. Essentially, Taleb insists that g is useless "in the real world", especially in high-risk/high-reward situations. It is unsurprising that he cares a lot about this, because long tails are the main thing Taleb is known for.[1]

In a way, it might seem intuitive that there's something to Taleb's claims about g: there is, after all, no free lunch in intelligence, so it seems like any skill should require some sort of tradeoff, and ill-defined risks seem like a logical price for performance at well-defined tasks.

However, the fundamental problem with this argument is that it's wrong. Taleb does not provide any evidence, and studies on IQ generally don't find g to worsen performance; they tend to find that it improves performance, including on complex, long-tail-heavy tasks like stock trading.[2]

But what I've realized is that there might be a charitable interpretation
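The collider effect in the TL;DR can be illustrated with a small simulation. The sketch below is purely illustrative (the trait names, distributions, and selection threshold are all made up): it draws g and a second valued trait independently, then "notices" only people who stand out on the sum of the two. Conditioning on that sum is conditioning on a collider, so the two traits, uncorrelated in the full population, become negatively correlated among the people one notices.

```python
import random

random.seed(0)

# Two independent traits, both assumed standard normal for illustration:
# g and some other valued trait (call it "practicality").
n = 100_000
people = [(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(n)]

def correlation(pairs):
    """Pearson correlation of a list of (x, y) pairs."""
    k = len(pairs)
    mx = sum(x for x, _ in pairs) / k
    my = sum(y for _, y in pairs) / k
    cov = sum((x - mx) * (y - my) for x, y in pairs) / k
    vx = sum((x - mx) ** 2 for x, _ in pairs) / k
    vy = sum((y - my) ** 2 for _, y in pairs) / k
    return cov / (vx * vy) ** 0.5

# In the full population the traits are uncorrelated by construction.
print(f"population r = {correlation(people):+.2f}")

# But suppose we only notice people who are impressive on at least one
# dimension, i.e. we select on (g + other > threshold). The sum is a
# collider, so selecting on it induces a spurious negative correlation.
noticed = [(g, other) for g, other in people if g + other > 2]
print(f"noticed    r = {correlation(noticed):+.2f}")
```

Within the noticed group, high-g people tend to be lower on the other trait (and vice versa), even though no tradeoff exists in the population. This is one mechanism by which everyday experience could suggest a g tradeoff that isn't really there.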