gwern comments on Problems in Education - Less Wrong Discussion

65 Post author: ThinkOfTheChildren 08 April 2013 09:29PM

Comment author: gwern 12 April 2013 05:37:19PM *  1 point [-]

What effects could cause an increase of 8 points on a properly normed test across the board? Why would there be a significant benefit to being in the control group of this study?

I already gave you three separate explanations for why an increase is possible, even in controls.

your only possible conclusion is incompetence, which isn't evidence which should change your priors. Incompetence is the social equivalent of the null hypothesis, and there is very rarely any significant evidence against it.

I have no idea what you mean by this, and I think that if one accepts their incompetence, the best thing to do is to ignore their data as having been poisoned in unknown ways - maliciousness, ideology, and stupidity often being difficult to tell apart.

Assuming only incompetence as you have, the expected result would be equally erratic for all students.

Why is that? The competent result, given that IQ interventions almost universally fail (our prior for any result like 'we increased IQ by 8 points' ought to be very low, as in well below 1%, because hundreds of interventions have failed to pan out, and 8 points is astounding, practically on the level of iodization) and that the follow-ups confirm a much, much smaller effect, is no effect or a small one. Incompetence, on the other hand, is exactly what leads to an extreme result. Like what they found.

As you say, it's been confirmed by other studies.

'Confirmed'? Well, there is an active debate as to what counts as a replication: near the same magnitude, or just the same sign? If someone publishes a study claiming to find a weight-loss drug that will drop 100 pounds, and exhaustive replications find that the true estimate is actually 1 pound, has the original claim been "confirmed"? After all, both estimates are non-zero and both have the same sign...

Comment author: Decius 13 April 2013 01:01:18AM -1 points [-]

So, "systematic bias or selection effect or regression to the mean" can result in average properly normed IQ scores increasing by 8 points? Doesn't the normalizing process (when done properly) force the average score to remain constant?

Comment author: gwern 13 April 2013 01:26:57AM 1 point [-]

Doesn't the normalizing process (when done properly) force the average score to remain constant?

What normalizing process? You mean the one the paid psychometricians go through years before any specific test is purchased by researchers like the ones doing the Pygmalion study? Yeah, I suppose so, but that's irrelevant to the discussion.

Comment author: Decius 13 April 2013 02:17:37AM -1 points [-]

Right - because the entire population going up half a SD in a year isn't unusual at all, and the test purchased for use in this study was normalized the way one would expect, despite producing results that are impossible if it was normalized in that manner.

Comment author: gwern 13 April 2013 02:51:06AM *  2 points [-]

...'entire population'?

Alright, I have to admit I have no idea what test you are now referring to. I thought we were discussing the Pygmalion results in which a small sample of elementary school students turned in increased IQ scores, which could be explained by a number of well-known and perfectly ordinary processes.

But it seems like you're talking about something else entirely, and may be thinking of country-level Flynn effects or something; I have no idea what.

Comment author: Decius 14 April 2013 12:36:38AM 0 points [-]

The PitC (Pygmalion in the Classroom) study showed an 8 point IQ increase in the control group. You offered those three explanations and said that they explained why that wasn't particularly unusual, and my understanding of normed IQ tests is that they are expected to remain constant over short times.

Comment author: gwern 14 April 2013 12:48:28AM 1 point [-]

normed IQ tests is that they are expected to remain constant over short times.

Over the general average population when tested once, yes. But the control group is neither general nor average nor the population nor tested once.

Comment author: Decius 14 April 2013 08:38:34AM 0 points [-]

If the control group isn't at least representative, there is a different methodological flaw. And if the confounding factor of prior IQ tests wasn't measured, then there is an unaccounted-for confounder: there is apparently a significant increase in scores on the first retest (and presumably a diminishing increase at some point; the expected result of taking the test very many times isn't to become the highest scorer ever).

I'm still trying to figure out what questions to ask before I dig up as much primary source material as I can. Is "points of normed IQ" the right thing to measure? That would make going from an IQ of 140 to 152 as large a gain as going from 94 to 106. Is raw score the right thing to measure? That would make going from answering 75% of the questions correctly to 80% as large a gain as going from 25% to 30%. Is the percentage decrease in incorrect answers the right metric? Then 75%-80% would be the same as 25%-40%. The percentage increase in correct answers? Then 25%-30% (a 20% increase) would be equivalent to 75%-90%.
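The arithmetic behind those candidate metrics can be checked numerically. This is a quick sketch (the function name and the dictionary keys are mine, not from the thread), showing how the two percentage-based metrics each equate a different pair of improvements:

```python
def gain_metrics(before, after):
    """Candidate metrics for a score gain, with `before` and `after`
    given as fractions of questions answered correctly."""
    return {
        "raw_gain": after - before,
        # fraction of previously-incorrect answers now answered correctly
        "pct_fewer_incorrect": ((1 - before) - (1 - after)) / (1 - before),
        # growth in the number of correct answers
        "pct_more_correct": (after - before) / before,
    }

# "Percentage decrease in incorrect answers" equates 75%->80% with 25%->40%:
print(round(gain_metrics(0.75, 0.80)["pct_fewer_incorrect"], 2))
print(round(gain_metrics(0.25, 0.40)["pct_fewer_incorrect"], 2))
# "Percentage increase in correct answers" equates 25%->30% with 75%->90%:
print(round(gain_metrics(0.25, 0.30)["pct_more_correct"], 2))
print(round(gain_metrics(0.75, 0.90)["pct_more_correct"], 2))
```

Each metric imposes a different equivalence class on improvements, which is exactly why the choice of metric matters before digging into the primary sources.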

I'm still reluctant to accept class grades and state-mandated graduation test scores as measuring primarily intelligence or even mastery of the material, rather than the specific skill of taking the test. That makes my error bars larger than those of someone who does accept them as accurate measurements of something important.

Comment author: gwern 14 April 2013 08:15:50PM 2 points [-]

Is "points of normed IQ" the right thing to measure?

No, usually in these cases you will be using an effect size like Cohen's d: expressing the difference in standard deviations (on the raw score) between the two groups. You can convert it back to IQ points if you want; if you discover a d of 1.0, that's boosting scores by 1 standard deviation which is usually defined as something like 15 IQ points, and so on.
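The conversion back to IQ points is just multiplication by the test's normed standard deviation (conventionally 15, as stated above):

```python
IQ_SD = 15  # IQ tests are conventionally normed to a standard deviation of 15

def d_to_iq_points(d):
    """Convert a Cohen's d (a difference in standard deviations)
    to the equivalent number of IQ points."""
    return d * IQ_SD

print(d_to_iq_points(1.0))  # 15.0
```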

So if you have your standard paradigmatic experiment (an equal number of controls and experimentals, the two groups having exactly the same beginning mean IQ and standard deviation of scores), you'd do your intervention, do a retest of IQ, and your effect size would be '(IQ(bigger) - IQ(smaller)) / (pooled standard deviation of experimentals & controls)'. Some of the things this approach does:

  1. test-retest concerns disappear, because you're not looking at the difference between the first test and the second test within groups, but just the difference in the second test between groups. Did the practice effect give them all 1 point, 5 points, 10 points? Doesn't matter, as long as it applies to both groups equally and their pre-tests were also equal. The first test is there to make sure you aren't accidentally picking a group of geniuses and a group of dunces and that the two groups started off equivalent. (Fun fact: the single strongest effect in my n-back meta-analysis is from when a group on the pre-test answered like 4 questions more than any of the others; even though their score dropped on the post-test, because the assumption that the groups were equivalent is built into the meta-analysis, they still look like n-back had an effect size of like d=3 or something crazy like that.)
  2. you're not converting to IQ points, but using the raw score. This avoids the discreteness issue (suppose the test has 10 questions on it. What does it then mean to convert scores on it to its normed range of 70-130 IQ or whatever? getting even a single additional question right is worth 10 points!)
  3. you avoid the issues of IQ points being 'worth' different amounts at different parts of the range. Suppose you took a bunch of IQ 130 kids and did something to boost their scores by 5 points. Is this easier, as hard, or harder than taking a bunch of IQ 100 kids and boosting them 5 points? If there's any differences, we might expect to see them reflected in the standard deviation being larger or narrower, and so this'll be reflected in our effect size.
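The between-groups effect size described above can be sketched as follows. The group scores here are made-up illustrative numbers; only the formula (mean difference over pooled standard deviation) comes from the comment:

```python
import statistics

def cohens_d(experimental, control):
    """Cohen's d on post-test raw scores:
    (mean difference) / (pooled standard deviation of both groups)."""
    n1, n2 = len(experimental), len(control)
    m1, m2 = statistics.mean(experimental), statistics.mean(control)
    # statistics.variance is the sample (n-1) variance
    v1, v2 = statistics.variance(experimental), statistics.variance(control)
    pooled_sd = (((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2)) ** 0.5
    return (m1 - m2) / pooled_sd

# Hypothetical post-test raw scores for two equal-sized groups:
experimental = [31, 29, 33, 35, 30, 32]
control      = [28, 27, 30, 31, 26, 29]
print(round(cohens_d(experimental, control), 2))
```

Note that only the post-test scores enter the calculation; as point 1 above says, the pre-test serves to verify the groups started off equivalent, not to compute within-group gains.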

Effect sizes are also the sine qua non of meta-analyses, so by thinking in effect sizes you can more easily run a meta-analysis yourself if you want (like my own dual n-back meta-analysis of a widely-touted intervention which is supposed to increase IQ), you can interpret meta-analyses better, and you can draw on previous meta-analyses as priors. (Example: Jaeggi et al 2008 found n-back had an effect size on IQ of something like d=0.8. If one had seen one of the psychology-wide compilations of previous meta-analyses, one would know that replicated & verified effect sizes that large are rare in every area of psychology, and so it was highly likely that their result was being overstated somehow; as indeed it turned out to have been, due to the use of passive control groups, and the current best estimate is closer to half that size, d=0.4.)

I'm still reluctant to accept class grades and state-mandated graduation test scores as measuring primarily intelligence or even mastery of the material, rather than the specific skill of taking the test.

If IQ is the main cause of getting high class grades and passing cutoffs on tests and being able to learn test-taking skills (like learning any other skill), then couldn't the tests be measuring all of them simultaneously?

Comment author: Decius 15 April 2013 07:34:44PM 0 points [-]

... For some reason I thought the first test was used to evenly distribute pretest performance between the two groups. Aren't the control and experimental groups supposed to be as close to identical as possible, and also to help the analysis identify which subgroups, if any, responded differently from others? If an intervention showed significantly different results for tall people than for short people, then a study of that intervention stratified by height may be indicated.

I'm still reluctant to accept class grades and state-mandated graduation test scores as measuring primarily intelligence or even mastery of the material, rather than the specific skill of taking the test.

If IQ is the main cause of getting high class grades and passing cutoffs on tests and being able to learn test-taking skills (like learning any other skill), then couldn't the tests be measuring all of them simultaneously?

That's carryover from a different branch, sorry.