
Meetup report: How harmful is cannabis, and will you change your habits?

11 Post author: Nisan 09 September 2012 04:50AM

 

A week ago the meetup group in Berkeley discussed a new article in PNAS titled "Persistent cannabis users show neuropsychological decline from childhood to midlife". Several people who didn't attend said they were interested in how that conversation went.

Before discussing the specifics of the article, we went around the room and stated how many IQ points we'd be willing to spend for some level of cannabis use. The median answer given was 4 points for moderate usage. Then someone pointed out that since we responded out loud, there may have been an anchoring effect here.

My understanding of the group's understanding of the scientific result is that smoking so much marijuana that you're diagnosable as cannabis-dependent (whatever that means) before the age of 18 will give you an IQ hit of 9-11 points, maybe more, over 20 years, compared to nonusers.

People who were diagnosable as dependent on cannabis but not before 18 got an IQ hit of 4 or 5 points, on average. We don't know if this is because cannabis is bad for adults, or if it's bad for people just over 18.

People who have used cannabis but were not diagnosed as dependent got an IQ hit of 1 or 2 points on average. The article gives us little information on what the risks of various moderate levels of cannabis use are.

We didn't discuss any methodological errors in the study, but the general attitude of the group was that the scientific result is worth taking seriously.

After the discussion, people who use cannabis opportunistically or not at all — especially the younger attendees — said that after learning about the study they now have another reason not to use cannabis. One person who uses cannabis less than once per week said they wouldn't change their usage habits.

 

Comments (39)

Comment author: [deleted] 09 September 2012 12:18:22PM 12 points [-]

How was the direction of causality established? Maybe smart people are less likely to want to smoke marijuana, or nerdy people are less likely to develop connections that make marijuana available to them even if it's illegal where they are. IQ also negatively correlates with number of sexual partners, but I haven't seen anyone concluding that getting laid a lot makes you dumber.

Comment author: AlexMennen 09 September 2012 04:31:00PM 3 points [-]

They didn't just measure the IQ of marijuana users. They measured the change in IQ over a long time of people who used marijuana during that time (and of people who didn't, as a control group, of course).

Comment author: shminux 09 September 2012 05:44:58PM 12 points [-]

To establish causation you'd have to randomly assign people into groups, not let them self-select into marijuana users and control.

Comment author: gwern 09 September 2012 04:56:28PM 5 points [-]

Longitudinal comparisons are much better than a simple cross-section ('the marijuana smokers tend to be stupider, huh'), but you're still getting only a correlation. It's perfectly plausible - indeed, inevitable - that there are uncontrolled factors: the Big Five personality factor Conscientiousness comes to mind as a plausible trait which might lead to non-smoking and higher IQ.

(That said, I have not used marijuana and have no intention of doing so.)

Comment author: IlyaShpitser 10 September 2012 01:36:31AM 0 points [-]

Depending on what was measured, there are "well known ways" to correct for confounding in longitudinal observational studies.

Comment author: Decius 10 September 2012 04:26:19AM 0 points [-]

How do you correct for an unidentified common causation?

Comment author: IlyaShpitser 10 September 2012 04:53:20AM *  4 points [-]

If there is a mediating variable that captures all of the causal flow from "treatment" (smoking mrj) to "outcome" (iq), and moreover, this variable is not an effect of the unidentified common cause, you can use "the front door functional" (see Pearl's book) to get the causal effect.

If there is a variable that is a "strong cause" of the "treatment", but not of the "outcome" (except through treatment) then this variable is instrumental, and there are methods that will give you the causal effect using this variable.

If there is an observed effect of an unobserved common cause, and you know something about how this effect arose, there are methods for "reconstructing" the unobserved common cause, and then using the standard covariate adjustment formula.

In general (e.g. complex longitudinal cases), there is a neat algorithm due to Jin Tian that can handle all sorts of unobserved confounding. Not everything, of course. In general, unobserved confounders doom you.
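To make the instrumental-variable idea above concrete, here is a minimal sketch with a simulated linear model. The graph, variable names, and coefficients are all invented for illustration (nothing here comes from the paper), and the estimator shown is the simple Wald ratio, the most basic IV method:

```python
import random

random.seed(0)

# Toy linear model: Z -> A -> Y, with an unobserved confounder U of A and Y.
# Z is the instrument: it causes the "treatment" A but affects the
# "outcome" Y only through A. The true causal effect of A on Y is 2.0.
n = 200_000
zs, as_, ys = [], [], []
for _ in range(n):
    z = random.randint(0, 1)                     # instrument
    u = random.gauss(0, 1)                       # unobserved common cause
    a = 0.5 * z + 0.8 * u + random.gauss(0, 1)   # "treatment"
    y = 2.0 * a + 1.5 * u + random.gauss(0, 1)   # "outcome"
    zs.append(z); as_.append(a); ys.append(y)

def mean(xs):
    return sum(xs) / len(xs)

# Naive estimate cov(A, Y) / var(A): biased upward by the confounder U.
ma, my = mean(as_), mean(ys)
cov_ay = mean([(a - ma) * (y - my) for a, y in zip(as_, ys)])
var_a = mean([(a - ma) ** 2 for a in as_])
naive = cov_ay / var_a

# Wald (IV) estimate: (E[Y|Z=1] - E[Y|Z=0]) / (E[A|Z=1] - E[A|Z=0]).
y1 = mean([y for z, y in zip(zs, ys) if z == 1])
y0 = mean([y for z, y in zip(zs, ys) if z == 0])
a1 = mean([a for z, a in zip(zs, as_) if z == 1])
a0 = mean([a for z, a in zip(zs, as_) if z == 0])
wald = (y1 - y0) / (a1 - a0)   # recovers the true effect, approx. 2.0
```

The naive estimate comes out well above 2.0 because U inflates the A-Y covariance, while the Wald ratio is consistent precisely because Z is independent of U.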

If you want the full story, you can read for instance this paper:

http://ftp.cs.ucla.edu/pub/stat_ser/r336-published.pdf


There is also the issue of how to do this in practice with data and smart statistical methods, which is a long separate discussion.

Comment author: aelephant 10 September 2012 01:36:16PM 0 points [-]

Did they use this method in the statistical analysis of the study? It is behind a paywall for me.

Comment author: IlyaShpitser 10 September 2012 04:01:28PM 0 points [-]

Doesn't look like it from the wording (I will know for sure once I find the pdf).

Comment author: Decius 10 September 2012 05:16:52AM 0 points [-]

I don't think that correcting for the effects of various factors is on the same scale as controlling for them, and after going over your reference I am more sure of that.

Granted, correcting is often very much easier than controlling for complex factors, and allows for sample sizes to alter the scale again.

Comment author: IlyaShpitser 10 September 2012 05:29:47AM 0 points [-]

I don't know what you mean when you say "correcting" vs "controlling." Can you give some examples? I don't understand your last sentence at all.

Comment author: Decius 10 September 2012 04:17:27PM 1 point [-]

In both cases the goal is to measure the effect of one choice, including effects through intermediate causes, without including in that measurement any other factors.

Assuming a complex system and a fairly large sample, you can correct by gathering data on as many potential factors as possible. If, however, there is an unmeasured trait "Does not enjoy mind-clouding events" which correlates both with increased IQ and with not smoking dope, it cannot be discovered by correction.

To control, you take your population and divide it into groups as evenly as possible along every axis that you can measure except the independent variable, and then force the independent variable of each group to be the same.

Maybe the definitions I'm using are different from the jargon, and if so I am 'wrong' in a real sense; what is the jargon for distinguishing between those two types of differentiation?

Comment author: IlyaShpitser 10 September 2012 05:02:30PM *  0 points [-]

Ok, when you say "correct" you mean you try to discover as many hidden variables in your DAG as possible and try to collect data on them such that they become observed. When you say "control" you mean a particular implementation of the adjustment formula: p(y | do(a)) = sum_{c} p(y | a, c) p(c), where a is the treatment, y is the outcome, and c is measured covariates. (Note: "independent/dependent variable" is not correct usage here, because those variables are not guaranteed to have the causal relationship you want -- an effect can be independent and a cause can be dependent.)

The point of some of the work in causal inference, including the paper I linked is that in some cases you don't need to either "correct" or "control" in the senses of the words you are using. For example if your graph is:

A -> W -> Y, and there is an unobserved common cause U of A and Y, then you don't need to "correct" for the presence of this U by trying to measure it, nor can you "control" for U, since you cannot measure it. What you can do is use the following formula: p(y | do(a)) = sum_{w} p(w | a) sum_{a'} p(y | w, a') p(a').

There are more complex versions of the same trick discussed in great detail in the paper I linked.
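As a sanity check, the front-door formula above can be verified on a toy binary model. Every probability table below is made up purely for illustration; the point is only that the formula, computed from the observed (A, W, Y) distribution alone, matches the interventional distribution computed from the full structural model including U:

```python
import itertools

# Front-door graph: A -> W -> Y, with hidden U -> A and U -> Y.
# All conditional probability tables are invented for illustration.
p_u = {0: 0.6, 1: 0.4}
p_a1_given_u = {0: 0.2, 1: 0.8}    # p(A=1 | U=u)
p_w1_given_a = {0: 0.3, 1: 0.9}    # p(W=1 | A=a)
p_y1_given_wu = {(0, 0): 0.1, (0, 1): 0.5, (1, 0): 0.4, (1, 1): 0.8}

def bern(p1, v):  # probability a Bernoulli variable with P(=1)=p1 equals v
    return p1 if v == 1 else 1 - p1

# Full joint p(u, a, w, y) from the structural model.
joint = {}
for u, a, w, y in itertools.product([0, 1], repeat=4):
    joint[(u, a, w, y)] = (p_u[u] * bern(p_a1_given_u[u], a)
                           * bern(p_w1_given_a[a], w)
                           * bern(p_y1_given_wu[(w, u)], y))

# Observed distribution p(a, w, y): marginalize out the hidden U.
obs = {}
for (u, a, w, y), pr in joint.items():
    obs[(a, w, y)] = obs.get((a, w, y), 0.0) + pr

def p_obs(**kw):
    """Marginal probability on the observed (a, w, y) table."""
    return sum(pr for (a, w, y), pr in obs.items()
               if all({'a': a, 'w': w, 'y': y}[k] == v for k, v in kw.items()))

def front_door(y, a):
    """p(y | do(a)) = sum_{w} p(w|a) sum_{a'} p(y|w,a') p(a')."""
    total = 0.0
    for w in [0, 1]:
        pw_a = p_obs(w=w, a=a) / p_obs(a=a)
        inner = sum((p_obs(y=y, w=w, a=ap) / p_obs(w=w, a=ap)) * p_obs(a=ap)
                    for ap in [0, 1])
        total += pw_a * inner
    return total

def truth(y, a):
    """Ground-truth p(y | do(a)), using the hidden U directly."""
    return sum(p_u[u] * bern(p_w1_given_a[a], w) * bern(p_y1_given_wu[(w, u)], y)
               for u in [0, 1] for w in [0, 1])

for a in [0, 1]:
    assert abs(front_door(1, a) - truth(1, a)) < 1e-9
```

The assertions pass exactly (up to floating point) because this graph satisfies the front-door criterion: W intercepts the whole causal path from A to Y and is itself unconfounded with A.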

Comment author: NancyLebovitz 09 September 2012 02:49:58PM 3 points [-]

Also, people who make heavy use of marijuana may have other intelligence-lowering behaviors, like a heavy use of alcohol.

Comment author: Cruzc09 09 September 2012 02:55:27PM *  2 points [-]

Also, marijuana severely decreases motivation, and level of motivation has been correlated with IQ. Frequent use of any drug will, I'd say, modify behavior. That is, if you get high a lot, it would probably change your normal behavior simply through habit formation. Am I correct to assume this?

There could also be self-fulfilling prophecies in the taking of the exam. Telling a guy, "You're a stoner, so we want you to take an IQ test" probably does something to the test taker's perception of himself/herself.

Comment author: CarlShulman 09 September 2012 07:34:40AM 11 points [-]

This is a correlational result mined from a pre-existing epidemiology study, attributing an effect to one of several possible subgroups. I wouldn't place that much weight in it without a lot more detail and supporting evidence.

Comment author: Mitchell_Porter 09 September 2012 05:53:32AM 16 points [-]

The median answer given was 4 points for moderate usage.

!! This was a LessWrong meetup? And half of them would be willing to sacrifice 4 IQ points or more, in order to smoke dope?! Please tell me they weren't taking the question seriously.

Comment author: AlexMennen 09 September 2012 06:28:28AM 16 points [-]

I was there, and I remember closer to a flat distribution between 0 and 5 IQ points. At any rate, I think 4 was a bit on the high side. Also, most people noted that they had a poor idea of how much difference an IQ point makes, and that this made them very uncertain about their answer. Someone suggested that if IQ was measured with a mean of 1000 and standard deviation of 150, people might still be giving answers of about 1 to 5 IQ points (as in, answers that would translate to 0.1 to 0.5 IQ points the way we actually measure them).
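The suggested rescaling is just a ratio of standard deviations (the means cancel when you are converting a point *difference*). A minimal sketch, assuming the usual mean-100 / SD-15 scale:

```python
def rescale_iq_points(points, from_sd=150.0, to_sd=15.0):
    """Convert an IQ-point difference between scales with different
    standard deviations; means cancel out for differences."""
    return points * to_sd / from_sd

# "5 points" on the hypothetical mean-1000 / SD-150 scale is only
# half a point on the usual mean-100 / SD-15 scale.
assert rescale_iq_points(5) == 0.5
assert rescale_iq_points(1) == 0.1
```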

Comment author: Vaniver 09 September 2012 05:46:13PM 2 points [-]

Someone suggested that if IQ was measured with a mean of 1000 and standard deviation of 150, people might still be giving answers of about 1 to 5 IQ points (as in, answers that would translate to 0.1 to 0.5 IQ points the way we actually measure them).

But people have a clear analog for this: SAT scores. Someone willing to give up 5 points of IQ to smoke marijuana probably wouldn't balk at having to give up 20 points on the SAT.

Comment author: gwern 09 September 2012 07:34:07PM 1 point [-]

An SAT score is of limited use after you're admitted to college, though, so the question is plausibly not the same - I would be willing to trade hundreds of points on my SAT scores now, since I am not in high school - and if people interpret SAT scores as interchangeable with IQ, then you might as well have asked about IQ in the first place.

Comment author: Vaniver 09 September 2012 08:07:11PM *  2 points [-]

I interpreted the point AlexMennen raised as "people like dealing with small integers, and so may be giving poor answers because they don't have the right context." My response was that there is a larger scale measure of intelligence that Americans have meaningful context with. That context is both where their scores / their friends' scores are and what life outcomes are impacted by those scores. For example, the 25th percentile SAT reading scores for Stanford / UC Berkeley / UCLA / Sac State are 670/600/570/410, and so one could interpret a 30 point drop on each SAT test as about the difference between being a C student at Berkeley and a C student at UCLA. 11 IQ points is about the difference between being a C student at Stanford and a C student at UCLA (but IQ-SAT score conversions are wonky now that they clipped the right side off of the SAT distribution).
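The rough IQ-to-SAT mapping in this thread can be sketched by equating z-scores. The SAT-section standard deviation below is an assumed round figure, not taken from the comment or from the study, so treat the output as order-of-magnitude only:

```python
def iq_points_to_sat_points(iq_points, iq_sd=15.0, sat_section_sd=110.0):
    """Map an IQ-point difference to an SAT-section-point difference
    by equating z-scores. Both SDs here are stipulated assumptions."""
    return iq_points / iq_sd * sat_section_sd

# Under these assumptions, an 11-point IQ gap maps to roughly 80 SAT
# points per section, comparable in size to the inter-school gaps
# quoted in the comment above.
```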

Comment author: katydee 09 September 2012 12:02:02PM *  14 points [-]

To be fair, it was a LessWrong meetup in Berkeley...

Comment author: fubarobfusco 09 September 2012 06:16:26AM 20 points [-]

Consider that many cannabis users report increases in pleasure, sociability, sexual intimacy, and various other positive benefits; as well as relief from pain, anxiety, depression, eating disorders, etc. "Smoking dope" is not an end in itself!

Comment author: Bruno_Coelho 09 September 2012 03:39:42PM 3 points [-]

Drugs in general are used mostly in social contexts, with internal deliberation weighing IQ loss against near-term rewards. Normally the goal change occurs when the incentives prompt the question: how many IQ points would you sacrifice to win these friends / this girl / this promotion?

Comment author: [deleted] 09 September 2012 12:29:49PM 2 points [-]

4 IQ points isn't that much. Someone who really considers that a big deal would likely also have to e.g. sleep more.

Comment author: NancyLebovitz 09 September 2012 04:14:38PM 3 points [-]
Comment author: Jonathan_Graehl 09 September 2012 06:36:01AM *  6 points [-]

I also thought the study was convincing in showing a significant correlation*

It's been folk belief for a long time that pot makes you slow, unmotivated, and dumb.

Therefore, I can't rule out that the study failed to control for giving a shit about one's mental abilities, a factor that's nearly certain to explain the outcome by a sum of all possible intermediaries, of which pot is just one.

[*] even though the number of people in the 8+ IQ point losing population seemed to be only about N=30. 1-2 IQ points difference is probably not significant, but I'm guessing N was larger, so maybe. Also, the same data set was mined for genetic correlates with heavy pot use, so they could have easily controlled for that (could turn out that e.g. the same factors cause persistent pot use and adulthood cognitive degeneration, though I judge that unlikely). Since there is so much demand from the DARE set for a finding like this, you also have to expect quite a lot of publication bias and population-cherry-picking, as well.

Comment author: Kevin 15 January 2013 03:21:33AM 2 points [-]
Comment author: pcm 10 September 2012 04:58:50AM 2 points [-]

I was one of the people who used the number 4 IQ points at the meeting, but we were all answering slightly different questions. Since I haven't used cannabis in a long time and didn't expect to use it regardless of the study, what I answered was more like "a 4 point IQ drop would significantly lower the upper bound of what cannabis use I'd consider trying".

I was tempted to point out the reasons to doubt the study, but I decided it's hardly news to say that cannabis impairs short-term memory (which ought to lower IQ) over some nonobvious time period, so the conclusion ought to be taken seriously even if the study is weak. In hindsight, it would have been more interesting to discuss the strength of the study's evidence.

Comment author: Kevin 09 September 2012 08:18:22AM 3 points [-]

I can always take more nootropics, right?

Comment author: John_Maxwell_IV 18 January 2013 06:00:31AM 1 point [-]
Comment author: Nisan 19 January 2013 04:48:54AM 1 point [-]

Correlations between cannabis use and IQ change in the Dunedin cohort are consistent with confounding from socioeconomic status (paywall link)

Does cannabis use have substantial and permanent effects on neuropsychological functioning? Renewed and intense attention to the issue has followed recent research on the Dunedin cohort, which found a positive association between, on the one hand, adolescent-onset cannabis use and dependence and, on the other hand, a decline in IQ from childhood to adulthood [Meier et al. (2012) Proc Natl Acad Sci USA 109(40):E2657–E2664]. The association is given a causal interpretation by the authors, but existing research suggests an alternative confounding model based on time-varying effects of socioeconomic status on IQ. A simulation of the confounding model reproduces the reported associations from the Dunedin cohort, suggesting that the causal effects estimated in Meier et al. are likely to be overestimates, and that the true effect could be zero. Further analyses of the Dunedin cohort are proposed to distinguish between the competing interpretations. Although it would be too strong to say that the results have been discredited, the methodology is flawed and the causal inference drawn from the results premature.

Comment author: gwern 19 January 2013 07:32:02PM 3 points [-]
Comment author: Cosmos 02 October 2012 09:05:12PM 0 points [-]

Then someone pointed out that since we responded out loud, there may have been an anchoring effect here.

This is standard epistemic hygiene - have everyone come up with an answer quietly before saying it out loud. (I suspect our natural inclination against lying is enough to keep people honest.)