The median answer given was 4 points for moderate usage.
!! This was a LessWrong meetup? And half of them would be willing to sacrifice 4 IQ points or more, in order to smoke dope?! Please tell me they weren't taking the question seriously.
Consider that many cannabis users report increases in pleasure, sociability, sexual intimacy, and various other positive benefits; as well as relief from pain, anxiety, depression, eating disorders, etc. "Smoking dope" is not an end in itself!
I was there, and I remember closer to a flat distribution between 0 and 5 IQ points. At any rate, I think 4 was a bit on the high side. Also, most people noted that they had a poor idea of how much difference an IQ point makes, and that this made them very uncertain about their answer. Someone suggested that if IQ was measured with a mean of 1000 and standard deviation of 150, people might still be giving answers of about 1 to 5 IQ points (as in, answers that would translate to 0.1 to 0.5 IQ points the way we actually measure them).
But people have a clear analog for this: SAT scores. Someone willing to give up 5 points of IQ to smoke marijuana probably wouldn't balk at having to give up 20 points on the SAT.
An SAT score is of limited use after you're admitted to college, though, so the question is plausibly not the same - I would be willing to trade hundreds of points off my SAT scores now, since I am not in high school - and if people are interpreting SAT scores as equivalent anyway, then you might as well have asked about IQ in the first place.
I interpreted the point AlexMennen raised as "people like dealing with small integers, and so may be giving poor answers because they don't have the right context." My response was that there is a larger scale measure of intelligence that Americans have meaningful context with. That context is both where their scores / their friends' scores are and what life outcomes are impacted by those scores. For example, the 25th percentile SAT reading scores for Stanford / UC Berkeley / UCLA / Sac State are 670/600/570/410, and so one could interpret a 30 point drop on each SAT test as about the difference between being a C student at Berkeley and a C student at UCLA. 11 IQ points is about the difference between being a C student at Stanford and a C student at UCLA (but IQ-SAT score conversions are wonky now that they clipped the right side off of the SAT distribution).
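(As a rough illustration of that conversion, here is a minimal sketch that maps IQ-point differences onto SAT-section points by equating z-scores. The standard deviations used - 15 for IQ, roughly 110 for an old SAT section - are ballpark assumptions for illustration, not numbers from the study or the comment above.)

```python
# Rough z-score conversion between IQ points and SAT-section points.
# Assumed scales (illustrative only): IQ SD = 15; old SAT section SD ~ 110.

IQ_SD = 15.0
SAT_SECTION_SD = 110.0

def iq_delta_to_sat_delta(iq_delta):
    """Map an IQ-point difference onto the SAT-section scale by equating
    standard-score (z) units."""
    return iq_delta / IQ_SD * SAT_SECTION_SD

for iq_delta in (1, 4, 11):
    print(f"{iq_delta} IQ point(s) ~ {iq_delta_to_sat_delta(iq_delta):.0f} SAT section points")
```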
Drugs in general are used mostly in social contexts, where any internal deliberation weighs IQ loss against near-term rewards. In practice the question tends to come up only when the incentives do: how many IQ points would you sacrifice to win over these friends, this girl, this promotion?
4 IQ points isn't that much. Someone who really considers that a big deal would, to be consistent, likely also have to make other changes - e.g. sleep more.
How was the direction of causality established? Maybe smart people are less likely to want to smoke marijuana, or nerdy people are less likely to develop connections that make marijuana available to them even if it's illegal where they are. IQ also negatively correlates with number of sexual partners, but I haven't seen anyone concluding that getting laid a lot makes you dumber.
They didn't just measure the IQ of marijuana users. They measured the change in IQ over a long time of people who used marijuana during that time (and of people who didn't, as a control group, of course).
To establish causation you'd have to randomly assign people into groups, not let them self-select into marijuana users and control.
Longitudinal comparisons are much better than a simple cross-section ('the marijuana smokers tend to be stupider, huh'), but you're still getting only a correlation. It's perfectly plausible - indeed, inevitable - that there are uncontrolled factors: the Big Five personality factor Conscientiousness comes to mind as a plausible trait which might lead to non-smoking and higher IQ.
(That said, I have not used marijuana and have no intention of doing so.)
Depending on what was measured, there are "well known ways" to correct for confounding in longitudinal observational studies.
If there is a mediating variable that captures all of the causal flow from "treatment" (smoking mrj) to "outcome" (iq), and moreover, this variable is not an effect of the unidentified common cause, you can use "the front door functional" (see Pearl's book) to get the causal effect.
If there is a variable that is a "strong cause" of the "treatment", but not of the "outcome" (except through treatment) then this variable is instrumental, and there are methods that will give you the causal effect using this variable.
If there is an observed effect of an unobserved common cause, and you know something about how this effect arose, there are methods for "reconstructing" the unobserved common cause, and then using the standard covariate adjustment formula.
For more general settings (e.g. complex longitudinal cases), there is a neat algorithm due to Jin Tian that can handle many kinds of unobserved confounding. Not everything, of course; in general, unobserved confounders doom you.
If you want the full story, you can read for instance this paper:
http://ftp.cs.ucla.edu/pub/stat_ser/r336-published.pdf
There is also the issue of how to do this in practice with data and smart statistical methods, which is a long separate discussion.
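(To make just the instrumental-variable case above concrete, here is a minimal simulation sketch under strong assumptions - linear effects, a binary instrument Z, made-up coefficients, none of it from the study - showing how the Wald ratio cov(Z,Y)/cov(Z,A) recovers a causal effect that a naive regression on the confounded data misses.)

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Simulated data, not the study's: U is an unobserved confounder, Z is an
# instrument that affects the treatment A but influences the outcome Y only
# through A. The true causal effect of A on Y is set to -2.0.
U = rng.normal(size=n)
Z = rng.binomial(1, 0.5, size=n)
A = 0.8 * Z + 0.5 * U + rng.normal(scale=0.5, size=n)
Y = -2.0 * A + 1.5 * U + rng.normal(scale=0.5, size=n)

# Naive regression slope of Y on A is biased by the confounder U.
naive = np.cov(A, Y)[0, 1] / np.var(A, ddof=1)

# Wald / instrumental-variable estimator recovers the causal effect.
iv = np.cov(Z, Y)[0, 1] / np.cov(Z, A)[0, 1]

print(f"naive slope: {naive:.2f}, IV estimate: {iv:.2f}, truth: -2.00")
```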
I don't think that correcting for the effects of various factors is on the same scale as controlling for them, and after going over your reference I am more sure of that.
Granted, correcting is often very much easier than controlling for complex factors, and allows for sample sizes to alter the scale again.
I don't know what you mean when you say "correcting" vs "controlling." Can you give some examples? I don't understand your last sentence at all.
In both cases the goal is to measure the effect of one choice, including effects through intermediate causes, without including in that measurement any other factors.
Assuming a complex system and a fairly large sample, you can correct by gathering data on as many potential factors as possible. If, however, there is an unmeasured trait "Does not enjoy mind-clouding events" which correlates both with an increase in IQ and with not smoking dope, it cannot be discovered by correction.
To control, you take your population and divide it into groups as evenly as possible along every axis that you can measure except the independent variable, and then force the independent variable of each group to be the same.
Maybe the definitions I'm using are different from the jargon, and if so I am 'wrong' in a real sense; what is the jargon for distinguishing between those two types of differentiation?
Ok, when you say "correct" you mean you try to discover as many hidden variables in your DAG as possible and try to collect data on them such that they become observed. When you say "control" you mean a particular implementation of the adjustment formula: p(y | do(a)) = sum{c} p(y | a, c) p(c), where a is the treatment, y is the outcome, and c is measured covariates. (Note: using "independent/dependent" variable is not correct because those variables are not guaranteed to have a causal relationship you want -- an effect can be independent and a cause can be dependent).
The point of some of the work in causal inference, including the paper I linked is that in some cases you don't need to either "correct" or "control" in the senses of the words you are using. For example if your graph is:
A -> W -> Y, and there is an unobserved common cause U of A and Y, then you don't need to "correct" for the presence of this U by trying to measure it, nor can you "control" for U since you cannot measure it. What you can do is use the following formula: p(y | do(a)) = sum{w} p(w | a) sum{a'} p(y | w, a') p(a').
There are more complex versions of the same trick discussed in great detail in the paper I linked.
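(As a sanity check on that front-door formula, here is a minimal simulation sketch with binary variables and made-up probabilities: U confounds A and Y, A affects Y only through W, U is generated but never used in the estimate, and the front-door expression still matches the interventional truth while the naive conditional p(y | a) does not.)

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

# Simulated data, not the study's: binary U, A, W, Y with U -> A, U -> Y
# and A -> W -> Y. U is generated here but never used by front_door().
U = rng.binomial(1, 0.5, size=n)
A = rng.binomial(1, np.where(U == 1, 0.8, 0.2))
W = rng.binomial(1, np.where(A == 1, 0.9, 0.1))
Y = rng.binomial(1, 0.2 + 0.5 * W + 0.2 * U)

def prob(mask):
    return mask.mean()

def front_door(a):
    # p(Y=1 | do(A=a)) = sum_w p(w | a) * sum_a' p(Y=1 | w, a') p(a')
    total = 0.0
    for w in (0, 1):
        inner = sum(prob(Y[(W == w) & (A == ap)] == 1) * prob(A == ap) for ap in (0, 1))
        total += prob(W[A == a] == w) * inner
    return total

def truth(a):
    # Computable here only because we know the generating process: average the
    # structural probability of Y over W (under do(A=a)) and over U.
    p_w1 = 0.9 if a == 1 else 0.1
    return sum((0.2 + 0.5 * w + 0.2 * u) * (p_w1 if w else 1 - p_w1) * 0.5
               for w in (0, 1) for u in (0, 1))

for a in (0, 1):
    print(f"do(A={a}): front-door {front_door(a):.3f}, "
          f"naive p(Y=1|A={a}) = {prob(Y[A == a] == 1):.3f}, truth {truth(a):.3f}")
```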
It is the independent variable in a controlled study because the study makes that variable independent of all other variables. It doesn't matter if normally U->A; in the controlled study A is determined by sorting into groups. Instead of observing A, A is decided by fiat.
The formulae only work if you have a graph of what you believe the causal structure to be, and gather data for each step in it. If, for example, you think the chain is A->W->Y, with a potential U->A and U->Y, but the actual structure is U->Not-W, U->Y, A->W and A->Y, then the formula provides bad advice to people who want Y or not-Y and are deciding on A.
"Independent/dependent" variables are used when talking about functions and regression models, even when those functions and regression models are not causal. For this reason, I believe it is confusing usage. Ordinary statistical regressions are invertible, causal regressions are not.
The formulae are correct iff the graph is correct, that is true. I am not sure what you are trying to say. If your assumptions are wrong, your entire analysis is garbage. This is true of any analysis. Are you saying anything beyond this? Please clarify what you mean.
With controlled experimentation, one can be almost certain that the effect measured is due to the variable modified. It doesn't matter if you have a correct graph of the confounding factors, because you balance them against each other.
What you are doing is measuring the combined strength of all chains of the type A->?->Y
Even in randomized trials you need to worry about assumptions. For example, you have to worry that your sample represents the general population. You have to worry that the actual random assignment among the people in your study approximates well the ideal random assignment in an infinite population. You then have to worry about modeling assumptions if you are doing statistical modeling on top of that. It is true that you don't need assumptions linking observational and interventional quantities if you randomize.
"What you are doing is measuring the combined strength of all chains of the type A->?->Y"
If the graph is as I described, that's what you want (i.e. the causal effect, the variation in Y under randomizing A).
I don't do random assignment. I divide the sample set into two or more groups that are as close to identical as possible, including their prior variation along A. Figuring out if one split is closer than a different one is nontrivial.
The only random decision is which group gets which A.
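(Here is a minimal sketch of that matched-split idea for a single covariate - say baseline IQ - followed by the one random decision of which group gets the treatment. Real designs balance many covariates at once, and as noted above, judging which split is "closest" is the hard part; the names and numbers below are hypothetical.)

```python
import random

def balanced_split(units, key):
    """Sort units by a covariate, pair off neighbours, and alternate which side
    of each pair goes to which group (a 'snake draft'), keeping group means close."""
    ordered = sorted(units, key=key)
    group_1, group_2 = [], []
    for i in range(0, len(ordered), 2):
        pair = ordered[i:i + 2]
        first, rest = (group_1, group_2) if (i // 2) % 2 == 0 else (group_2, group_1)
        first.append(pair[0])
        rest.extend(pair[1:])
    return group_1, group_2

def assign_treatment(group_1, group_2):
    """The single random decision: which of the two balanced groups gets A."""
    return (group_1, group_2) if random.random() < 0.5 else (group_2, group_1)

# Hypothetical example: eight people, balanced on baseline IQ only.
people = [{"id": i, "baseline_iq": iq}
          for i, iq in enumerate([95, 100, 103, 108, 112, 118, 121, 130])]
treated, control = assign_treatment(*balanced_split(people, key=lambda p: p["baseline_iq"]))
print("treated ids:", sorted(p["id"] for p in treated))
print("control ids:", sorted(p["id"] for p in control))
```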
Also, people who make heavy use of marijuana may have other intelligence-lowering behaviors, like a heavy use of alcohol.
Also, marijuana severely decreases motivation, and level of motivation has been correlated with IQ. Frequent use of any drug, I would say, modifies behavior. That is, if you get high a lot, it probably changes your normal behavior simply through habit formation. Am I correct to assume this?
There could also be self-fulfilling prophecies in the taking of the exam. Telling a guy, "You're a stoner, so we want you to take an IQ test" probably does something to the test taker's perception of himself/herself.
This is a correlational result mined from a pre-existing epidemiology study, attributing an effect to one of several possible subgroups. I wouldn't place that much weight in it without a lot more detail and supporting evidence.
I also thought the study was convincing in showing a significant correlation.[*]
It's been folk belief for a long time that pot makes you slow, unmotivated, and dumb.
Therefore, I can't rule out that the study failed to control for giving a shit about one's mental abilities, a factor that's nearly certain to explain the outcome through the sum of all possible intermediaries, of which pot is just one.
[*] even though the number of people in the 8+ IQ point losing population seemed to be only about N=30. 1-2 IQ points difference is probably not significant, but I'm guessing N was larger, so maybe. Also, the same data set was mined for genetic correlates with heavy pot use, so they could have easily controlled for that (could turn out that e.g. the same factors cause persistent pot use and adulthood cognitive degeneration, though I judge that unlikely). Since there is so much demand from the DARE set for a finding like this, you also have to expect quite a lot of publication bias and population-cherry-picking, as well.
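(For a rough sense of what "significant" means at these sample sizes, here is a back-of-envelope sketch of the standard error of a difference in mean IQ change between two groups. The assumed within-group SD of 10 points and the group sizes are illustrative guesses, not the study's numbers.)

```python
import math

def se_of_difference(sd, n1, n2):
    """Standard error of a difference in group means."""
    return sd * math.sqrt(1 / n1 + 1 / n2)

# Assumed within-group SD of IQ change: 10 points. Group sizes are guesses.
for n_exposed, n_control in ((30, 600), (200, 600)):
    se = se_of_difference(10, n_exposed, n_control)
    print(f"n={n_exposed} vs n={n_control}: SE ~ {se:.1f} IQ points; "
          f"a difference needs to be roughly {2 * se:.1f}+ points to reach significance")
```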
I was one of the people who used the number 4 IQ points at the meeting, but we were all answering slightly different questions. Since I haven't used cannabis in a long time and didn't expect to use it regardless of the study, what I answered was more like "a 4 point IQ drop would significantly lower the upper bound of what cannabis use I'd consider trying".
I was tempted to point out the reasons to doubt the study, but I decided it's hardly news to say that cannabis impairs short-term memory (which ought to lower IQ) over some nonobvious time period, so the conclusion ought to be taken seriously even if the study is weak. In hindsight, it would have been more interesting to discuss the strength of the study's evidence.
Correlations between cannabis use and IQ change in the Dunedin cohort are consistent with confounding from socioeconomic status (paywall link)
Does cannabis use have substantial and permanent effects on neuropsychological functioning? Renewed and intense attention to the issue has followed recent research on the Dunedin cohort, which found a positive association between, on the one hand, adolescent-onset cannabis use and dependence and, on the other hand, a decline in IQ from childhood to adulthood [Meier et al. (2012) Proc Natl Acad Sci USA 109(40):E2657–E2664]. The association is given a causal interpretation by the authors, but existing research suggests an alternative confounding model based on time-varying effects of socioeconomic status on IQ. A simulation of the confounding model reproduces the reported associations from the Dunedin cohort, suggesting that the causal effects estimated in Meier et al. are likely to be overestimates, and that the true effect could be zero. Further analyses of the Dunedin cohort are proposed to distinguish between the competing interpretations. Although it would be too strong to say that the results have been discredited, the methodology is flawed and the causal inference drawn from the results premature.
Then someone pointed out that since we responded out loud, there may have been an anchoring effect here.
This is standard epistemic hygiene - have everyone come up with an answer quietly before saying it out loud. (I suspect our natural inclination against lying is enough to keep people honest.)
A week ago the meetup group in Berkeley discussed a new article in PNAS titled "Persistent cannabis users show neuropsychological decline from childhood to midlife". Several people who didn't attend said they were interested in how that conversation went.
Before discussing the specifics of the article, we went around the room and stated how many IQ points we'd be willing to spend for some level of cannabis use. The median answer given was 4 points for moderate usage. Then someone pointed out that since we responded out loud, there may have been an anchoring effect here.
My understanding of the group's understanding of the scientific result is that smoking so much marijuana that you're diagnosable as cannabis-dependent (whatever that means) before the age of 18 will give you an IQ hit of 9-11 points, maybe more, over 20 years, compared to nonusers.
People who were diagnosable as dependent on cannabis but not before 18 got an IQ hit of 4 or 5 points, on average. We don't know if this is because cannabis is bad for adults, or if it's bad for people just over 18.
People who have used cannabis but were not diagnosed as dependent got an IQ hit of 1 or 2 points on average. The article gives us little information on what the risks of various moderate levels of cannabis use are.
We didn't discuss any methodological errors in the study, but the general attitude of the group was that the scientific result is worth taking seriously.
After the discussion, people who use cannabis opportunistically or not at all — especially the younger attendees — said that after learning about the study they now have another reason not to use cannabis. One person who uses cannabis less than once per week said they wouldn't change their usage habits.