localdeity

Smart people are often too arrogant and proud, and know too much.

I thought that might be the case.  With GPT-3 or 3.5, the higher the quality of your own work, the less helpful (and, potentially, the more destructive and disruptive) it was to substitute in the LLM's work; so, in these early years of LLMs, higher IQ may correlate with dismissing them and having little experience using them.

But this is a temporary effect.  Those who initially dismissed LLMs will eventually come round; and, among younger people, especially as LLMs get better, higher-IQ people who try LLMs for the first time will find them worthwhile and use them just as much as their peers.  And if you have two people who have both spent N hours using the same LLM for the same purposes, higher IQ will help, all else being equal.

Of course, if you're simply reporting a correlation you observe, then all else is likely not equal.  Please think about selection effects, such as those described here.

Using LLMs is an intellectual skill.  I would be astonished if IQ were not pretty helpful for that.

For editing adults, it is a good point that lots of them might find a personality tweak very useful; e.g. if it gave them a big bump in motivation, that would likely be worth more than, say, 5-10 IQ points.  An adult is in a good position to tell what the delta is between their current personality and what might be ideal for their situation.

Deliberately tweaking personality does raise some "dual use" issues.  Is there a set of genes that makes someone very unlikely to leave their abusive cult, or makes them a loyal, obedient citizen of their tyrannical government, or makes them never join the hated outgroup political party?  I would be pretty on board with a norm of not doing research into that.  Basic "Are there genes that cause personality disorders that ~everyone agrees are bad?" is fine; "motivation" as one undifferentiated category seems fine; Big 5 traits ... have some known correlations with political alignment, which brings it into territory I'm not very comfortable with, but if it goes no farther than that, it might be fine.

On a quick skim, an element that seems to be missing is that having emotions which cause you to behave 'irrationally' can in fact be beneficial from a rational perspective.

For example, if everyone knows that, when someone does you a favor, you'll feel obligated to find some way to repay them, and when someone injures you, you'll feel driven to inflict vengeance upon them even at great cost to yourself—if everyone knows this about you, then they'll be more likely to do you favors and less likely to injure you, and your expected payoffs are probably higher than if you were 100% "rational" and everyone knew it.  I believe this is in fact why we have the emotions of gratitude and anger, and I think various animals have something resembling them.  Put it this way: carrying out threats and promises is "irrational" by definition, but making your brain into a thing that will carry out threats and promises may be very rational.
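To make the payoff claim concrete, here's a toy deterrence model; this is a minimal sketch in Python, and all the numbers and the decision rule are illustrative assumptions of mine, not anything from the post:

```python
# Toy deterrence model.  All payoffs are made-up illustrative numbers.
# An aggressor gains 2 by injuring you; the injury costs you 5.
# Carrying out vengeance costs you 1 and costs the aggressor 4.

AGGRESSOR_GAIN = 2
INJURY_COST_TO_YOU = 5
VENGEANCE_COST_TO_YOU = 1
VENGEANCE_COST_TO_AGGRESSOR = 4

def your_expected_payoff(known_to_retaliate: bool) -> int:
    # The aggressor attacks iff their net gain from attacking is positive.
    aggressor_net = AGGRESSOR_GAIN - (
        VENGEANCE_COST_TO_AGGRESSOR if known_to_retaliate else 0)
    if aggressor_net <= 0:
        return 0  # deterred: you suffer no injury and pay no cost
    # You get attacked; a known retaliator also pays to carry out the threat.
    return -INJURY_COST_TO_YOU - (
        VENGEANCE_COST_TO_YOU if known_to_retaliate else 0)

print(your_expected_payoff(known_to_retaliate=False))  # -5: "rational", and attacked
print(your_expected_payoff(known_to_retaliate=True))   #  0: "irrational" anger deters
```

After the fact, taking vengeance is a pure loss, which is the sense in which it's "irrational"; being known to take it anyway is what changes the aggressor's calculation.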

So you could call these emotions "irrational" or the thoughts they lead to "biased", but I think that (a) likely pushes your thinking in the wrong direction in general, and (b) gives you no guidance on what "irrational" emotions are likely to exist.

What is categorized as "peer pressure" here?  Explicit threats to report you to authorities if you don't conform?  I'm guessing not.  But how about implicit threats?  What if you've heard (or read in the news) stories about people who don't conform—in ways moderately but not hugely more extreme than you—having their careers ruined?  In any situation that you could call "peer pressure", I imagine there's always at least the possibility of some level of social exclusion.

The defining questions for that aspect would appear to be "Do you believe that you would face serious risk of punishment for not conforming?" and "Would a reasonable person in your situation believe the same?".  Which don't necessarily have the same answer.  It might, indeed, be that people whom you observe to be "conformist" are the ones who are oversensitive to the risk of social exclusion.

The thing that comes to mind, when I think of "formidable master of rationality", is a highly experienced engineer trying to debug problems, especially high-urgency problems that the normal customer support teams haven't been able to handle.  You have a fresh phenomenon, one which the creators of the existing product apparently didn't anticipate (or, if they did, didn't think worth adding functionality to handle), and which casts doubt on existing diagnostic systems.  You have priors on which tools are likely to still work, and priors on which underlying problems are likely to cause which symptoms; you have tests you can try, each with its own cost and range of likely outcomes, some of which you might invent on the spot; and all of these lead to updating your probability distribution over what the underlying problem might be.

Medical diagnostics, as illustrated by Dr. House, can be similar, although I suspect the frequency of "inventing new tests to diagnose a never-before-seen problem" is lower there.

One argument I've encountered is that sentient creatures are precisely those creatures that we can form cooperative agreements with.  (Counter-argument: one might think that e.g. the relationship with a pet is also a cooperative one [perhaps more obviously if you train them to do something important, and you feed them], while also thinking that pets aren't sentient.)

Another is that some people's approach to the Prisoner's Dilemma is to decide "Anyone who's sufficiently similar to me can be expected to make the same choice as me, and it's best for all of us if we cooperate, so I'll cooperate when encountering them"; and some of them may figure that sentience alone is sufficient similarity.

So the arithmetic and geometric means agree when the inputs are equal, and the more unequal the inputs are, the lower the geometric mean is relative to the arithmetic mean.

I note that the subtests have ceilings, which puts a limit on how much any one of them can skew the result.  Like, if you have 10 subtests, and the max score is something like 150, then presumably each subtest has a max score of 15 points.  If we imagine someone gets five 7s and five 13s (a moderately unbalanced set of abilities), then the geometric mean is 9.54, while the arithmetic mean is 10.  So, even if someone were confused about whether the IQ test was using a geometric or an arithmetic mean, does it make a large difference in practice?
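For anyone who wants to check that arithmetic, a quick sketch in Python (the 7s-and-13s profile is just the hypothetical one above):

```python
from statistics import mean, geometric_mean  # geometric_mean needs Python 3.8+

scores = [7] * 5 + [13] * 5              # the moderately unbalanced profile above
print(mean(scores))                      # 10.0
print(round(geometric_mean(scores), 2))  # 9.54, i.e. sqrt(7 * 13)
```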

The people you're arguing against, is it actually a crux for them?  Do they think IQ tests are totally invalid because they're using an arithmetic mean, but actually they should realize it's more like a geometric mean and then they'd agree IQ tests are great?

1. IQ scores do not measure even close to all cognitive abilities and realistically could never do that.

Well, the original statement was "sums together cognitive abilities" and didn't use the word "all", and I, at least, saw no reason to assume it.  If you're going to say something along the lines of "Well, I've tried to have reasonable discussions with these people, but they have these insane views", that seems like a good time to be careful about how you represent those views.

2. Many of the abilities that IQ scores weight highly are practically unimportant.

Are you talking about direct measurement, or what they correlate with?  Because, certainly, things like anagramming a word have almost no practical application, but I think it's intended to (and does) correlate with language ability.  But in any case, the truth value of the statement that IQ is "an index that sums together cognitive abilities" is unaffected by whether those abilities are useful ones.

Perhaps you have some idea of a holistic view, of which that statement is only a part, and maybe that holistic view contains other statements which are in fact insane, and you're attacking that view, but... in the spirit of this post, I would recommend confining your attacks to specific statements rather than to other claims that you think correlate with those statements.

3. Differential-psychology tests are in practice more like log scales than like linear scales, so "sums" are more like products than like actual sums; even if you are absurdly good at one thing, you're going to have a hard time competing with someone in IQ if they are moderately better at many things.

I wonder how large a difference this makes in practice.  So if we run with your claim here, it seems like your conclusion would be... that IQ tests combine the subtest scores in the wrong way, and are less accurate than they should be for people with very uneven abilities?  Is that your position?  At any rate, even if the numbers are logarithms, it's still correct to say that the test is adding them up, and I don't consider that good grounds for calling it "insane" for people to consider it addition.

thinks of IQ as an index that sums together cognitive abilities

Is this part not technically true?  IQ tests tend to have a bunch of subtests intended to measure different cognitive abilities, and you add up—or average, which is adding up and dividing by a constant—your scores on each subtest.  For example (bold added):

The current version of the test, the WAIS-IV, which was released in 2008, is composed of 10 core subtests and five supplemental subtests, with the 10 core subtests yielding scaled scores that **sum** to derive the Full Scale IQ.


Interesting.  The natural approach is to imagine that you just have a 3-sided die with 2, 4, 6 on the sides, and if you do that, then I compute A = 12 and B = 6[1].  But, as the top Reddit comment's edit points out, the difference between that problem and the one you posed is that your version heavily weights the probability towards short sequences—that weighting being 1/2^n for a sequence of length n.  (Note that the numbers I got, A=12 and B=6, are so much higher than the A≈2.7 and B=3 you get.)  It's an interesting selection effect.

The thing is that, if you roll a 6 and then a non-6, in an "A" sequence you're likely to just die due to rolling an odd number before you succeed in getting the double 6, and thus exclude the sequence from the surviving set; whereas in a "B" sequence there's a much higher chance you'll roll a 6 before dying, and thus include this longer "sequence of 3+ rolls" in the set.

To illustrate with an extreme version, consider:

A: The expected number of rolls of a fair die until you roll two 6s in a row, given that you succeed in doing this.  You ragequit if it takes more than two rolls.

Obviously that's one way to reduce A to 2.
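These numbers are easy to check empirically.  Here's a quick Monte Carlo sketch in Python; the setup follows the problem statements above, and the function names are mine:

```python
import random

def attempt(consecutive: bool, fair_die: bool):
    """One attempt: roll until you've seen two 6s (in a row, if `consecutive`).

    With the fair die, an odd roll invalidates the attempt (returns None),
    which implements the "given that all rolls are even" conditioning."""
    streak = total = rolls = 0
    while True:
        roll = random.randint(1, 6) if fair_die else random.choice([2, 4, 6])
        rolls += 1
        if roll % 2:
            return None  # odd roll: this sequence is excluded from the average
        if roll == 6:
            streak += 1
            total += 1
        else:
            streak = 0
        if (streak if consecutive else total) == 2:
            return rolls

def estimate(consecutive: bool, fair_die: bool, n: int = 200_000) -> float:
    lengths = [r for r in (attempt(consecutive, fair_die) for _ in range(n))
               if r is not None]
    return sum(lengths) / len(lengths)

print("3-sided die: A ~", estimate(True, False))   # ~12
print("3-sided die: B ~", estimate(False, False))  # ~6
print("conditioned: A ~", estimate(True, True))    # ~2.7
print("conditioned: B ~", estimate(False, True))   # ~3
```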

  1. ^

    Excluding odd rolls completely, so the die has a 1/3 chance of rolling 6 and a 2/3 chance of rolling an even number that's not 6, we have:

    A = 1 + 1/3 * A2 + 2/3 * A

    Where A2 represents "the expected number of die rolls until you get two 6s in a row, given that the last roll was a 6".  Subtraction and multiplication then yield:

    A = 3 + A2

    And if we consider rolling a die from the A2 state, we get:

    A2 = 1 + 1/3 * 0 + 2/3 * A
    = 1 + 2/3 * A

    Substituting:

    A = 3 + 1 + 2/3 * A
    => (subtract)
    1/3 * A = 4
    => (multiply)
    A = 12

    For B, a similar approach yields the equations:

    B = 1 + 1/3 * B2 + 2/3 * B
    B2 = 1 + 1/3 * 0 + 2/3 * B2

    And the reader may solve for B = 6.
