A while ago I wrote about how I managed to add 13 points to my IQ (as measured by the mean of 4 different tests).
I had 3 “self-experimenters” follow my instructions in San Francisco. One of them dropped out, since, surprise surprise, the intervention is hard.
The other two had an increase of 11 and 10 points in IQ respectively (using the “fluid” components of each test) and an increase of 9 and 7 respectively if we include verbal IQ.
A total of 7 people acted as controls and were given advantages on the test compared to the intervention group, to exacerbate the effects of memory and motivation; only 1 scored on par with the intervention group. We get a very good p-value, considering the small n, both when comparing the % change in control vs intervention (0.04) and the before/after intervention values (0.006).
Working Hypothesis
My working hypothesis for this was simple:
If I can increase blood flow to the brain in a safe way (e.g. via specific exercises, specific supplements, and photostimulation in the NUV and NIR range)
And I can make people think “out of the box” (e.g. via specific games, specific “supplements”, specific meditations)
And prod people to think about how they can improve in whatever areas they want (e.g. via journaling, talking, and meditating)
Then you get this amazing cocktail of spare cognitive capacity suddenly getting used.
As per the last article, I can’t exactly give a step-by-step guide for how to do this, given that a lot of it is quite specific. I was rather lucky that 2 of my subjects were very athletic and “got it” quite fast in terms of the exercises they had to do.
The Rub
At this point, I’m confident all the “common sense” distillation of what people were experimenting with has been done, and the intervention still takes quite a while.
Dedicating 4 hours a day to something for 2 weeks is one thing, but given that we’re engaging in a form of training for the mind, the participants need to be not only present but actively engaged.
A core component of my approach is the idea that people can (often non-conceptually) reason through their shortcomings if given enough spare capacity, and reach a more holistic form of thinking.
I’m hardly the first to propose or observe this, though I’d like to think my approach is better-proven, entirely secular, and faster. Still, the main bottleneck remains convincing people to spend the time on it.
What’s next
My goal when I started thinking about this was to prove to myself that the brain and the mind are more malleable than we think, that relatively silly and easy things, to the tune of:
A few supplements and 3-4 hours of effort a day for 2 weeks, can change things that degrade with aging and are taken to be impossible to reverse.
Over the last two months, I became quite convinced there is something here… I don’t quite understand its shape yet, but I want to pursue it.
At present, I am considering putting together a team of specialists (which is to say neuroscientists and “bodyworkers”), refining this intervention with them, and selling it to people as a 2-week retreat.
But there’s also a bunch of cool hardware coming out of doing this.
As well as a much better understanding of the way some drugs and supplements work… an understanding I could package together with the insanely long test-and-iterate decision tree for using these substances optimally (more on this soon).
There was some discussion and interest expressed by the Lighthaven team in the previous comment section about replicating this, and now that I have data from more people I hope that follows through. It would be high-quality data from a trustworthy first party, and I'm well aware that at this point this should still hit the "quack" meter for most people.
I'm also independently looking for:
- People to help me get better psychometrics; the variance in my dataset is huge and my tests stop working at 3 STDs of IQ, for the most part. I'd love to have one or two more comprehensive tests that are sensitive up to 5 STDs.
- People to run independent analyses on the data, in whatever way they see fit. If you are a professor or otherwise system-recognized expert in the area, this would be especially useful. I think the analysis here is quite trivial and "just look at the numbers" is sufficient, but having external validation also helps.
For now, I’m pretty happy to explain to anyone who wants to do this intervention themselves what it involved for me (for free; I want the data). My disclaimers are as follows:
I am not a doctor, and anything I suggest might be unsafe; you do this at your own risk, and I guarantee neither the results nor the safety profile of what I did.
I prefer to work with groups of 2 or 3 people.
I can’t be physically present to help you, but we can have a Zoom call every couple of days.
I expect you to bring 3 to 5 controls along for the ride; without them the data is much weaker. The more similar the controls are to you (in terms of environment and genetics), the better.
My current approach involves dedicating at least 3 to 4 hours of your day to this, wholeheartedly: in a way that’s consistent, involved, and enthusiastic.
The specialists you’ll need to hire and the hardware you’ll need to buy might well drive you past the $10k point (for a group of 3 people) if you do this properly, and you might need a week of scouting to find the right people to work with you.
That being said, since a lot of people were excited to follow through with this last time, I am now putting this offer out there.
.
.
.
Confounder elimination
There are a few confounders in a self-experiment like this:
You are just taking people who are not supplementing or eating properly and you are making them use common-sense meals/supplements
You are taking people who don’t exercise and making them exercise; because exercise is magic, this will result in a positive change, but it's a boring one (because exercise is hard)
You are making a tradeoff to increase performance on the IQ test (e.g. giving them caffeine and/or Adderall)
You are not taking into account memorization happening on the IQ tests
The subjects are “more motivated” to perform when redoing the tests
I have addressed all of these:
The subjects kept the same diet and the same supplement stack they used before; I only added 6 things on top. They are both pretty high up the food chain of supplement optimization: one ran 2 healthcare companies and worked with half a dozen; the other is his partner
The subjects are both semi-professional athletes, exercising for > 2 hrs a day, able to run marathons and Ironmans
The subjects’ HR and BP were monitored and no changes occurred; no supplements whatsoever were taken > 24 hrs before re-taking the IQ tests
I had controls, and 2 of my controls took the tests 24 hours apart, to “maximize” memorization effects
I had controls that were being paid sums between $40 and $100 (adjusted to be ~2x their hourly pay rate) for every point of IQ gained upon retaking the tests
So how do the numbers look after accounting for all of this?
Intervention mean increases: (11.2 [9%], 9.6 [8%], 12.6 [10%]) (mean of means: 11.1) - Average increase: 9.3%
Control mean increase: (14.2 [12%], 4.4 [3%], 8.8 [7%], 7.6 [6%], 5.2 [4%], 5.6 [5%], 3.2 [2%]) (mean of means: 7.0) - Average increase: 5.9%
Controlled mean increase: 4.1
Related T-test between the before/after means for the intervention: -12.846 (p=0.006)
Related T-test between the before/after means for the control: -5.015 (p=0.002)
Independent T-test between the before/after difference between intervention and control: -2.46 (p=0.04)
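For anyone who wants to sanity-check these numbers, here's a minimal sketch in Python of how they fall out of the per-subject mean gains listed above. A paired before/after t-test is equivalent to a one-sample t-test on the gains, so the raw before/after scores aren't needed (the signs come out flipped relative to the post, which tested before minus after); the between-group test here assumes unequal variances (Welch), and since the quoted per-subject means are rounded it lands close to, but not exactly on, the reported value:

```python
# Re-deriving the fluid-IQ t-tests from the per-subject mean gains above.
from scipy import stats

intervention = [11.2, 9.6, 12.6]                # fluid-IQ point gains, intervention
control = [14.2, 4.4, 8.8, 7.6, 5.2, 5.6, 3.2]  # fluid-IQ point gains, controls

# Paired before/after test == one-sample t-test on the gains against zero
t_int, p_int = stats.ttest_1samp(intervention, 0.0)  # t≈12.85, p≈0.006
t_ctl, p_ctl = stats.ttest_1samp(control, 0.0)       # t≈5.02,  p≈0.002

# Welch's (unequal-variance) two-sample test between the gain distributions;
# lands near the reported 2.46 / 0.04 given the rounding of the inputs.
t_btw, p_btw = stats.ttest_ind(intervention, control, equal_var=False)

print(f"intervention before/after: t={t_int:.3f}, p={p_int:.3f}")
print(f"control before/after:      t={t_ctl:.3f}, p={p_ctl:.3f}")
print(f"intervention vs control:   t={t_btw:.3f}, p={p_btw:.3f}")
```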
I’d say pretty damn nice, given that the controls are going above and beyond in taking the tests under better conditions and with more incentives than the intervention group. I am testing a “worst case” scenario here, and even in that worst case about 1/3 of the raw effect survives the controls.
My speculation is that most of the control gain is just memorization or incentives. For one, the variance between controls is huge (and the p-values reflect this).
Second, let’s look at verbal IQ:
Intervention mean increases: (0.0 [0%], 5.0 [4%], -16.0 [-14%]) (mean of means: -3.7) - Average increase: -3.4%
Control mean increase: (18.0 [16%], 25.0 [25%], 14.0 [13%], 13.0 [10%], 2.0 [1%], 10.0 [8%], -5.0 [-4%]) (mean of means: 11.0) - Average increase: 10.2%
Controlled mean increase: -14.7
Related T-test between the before/after means for the intervention: 0.579 (p=0.621)
Related T-test between the before/after means for the control: -2.92 (p=0.027)
Independent T-test between the before/after difference between intervention and control: 2.032 (p=0.115)
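The same sketch applies to the verbal numbers; swapping in the verbal gains quoted above reproduces the weak statistics (again up to sign and rounding):

```python
# Same check with the per-subject verbal gains quoted above. The
# intervention's near-zero/negative changes drive the non-significant result.
from scipy import stats

verbal_int = [0.0, 5.0, -16.0]
verbal_ctl = [18.0, 25.0, 14.0, 13.0, 2.0, 10.0, -5.0]

print(stats.ttest_1samp(verbal_int, 0.0))                        # t≈-0.58, p≈0.62
print(stats.ttest_1samp(verbal_ctl, 0.0))                        # t≈2.92,  p≈0.03
print(stats.ttest_ind(verbal_int, verbal_ctl, equal_var=False))  # p≈0.12
```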
So the fluid component has a +4.1 diff, and the verbal component (which we expect to be stable) has a -14.7 diff. That to me indicates the controls are “trying harder” or “memorizing better” in a way that the intervention group isn’t.
Overall this doesn’t matter; the finding is significant and of an unexpected magnitude either way.
But I do feel like it’s important to stress that I am controlling for the worst-case scenario, and still getting an unambiguously positive result. This approach is not typical in science, where the control and intervention are equally matched, as opposed to the control being optimized to eliminate any and all potential confounders.
Right, Quantified Mind tests are not normed, so you couldn't say "participants added 10 IQ points" or even "this participant went from 130 to 140".
However, they do have a lot of data from other test-takers, so you can say, "participants increased 0.7 SDs [amidst the population of other QM subjects]" or "this participant went from +2.0 to +2.7 SDs", broken down very specifically by subskill. You are not going to get any real statistical power using full IQ tests.
It sounds like the protocols involve hours of daily participant effort over multiple weeks. Compared to that, it seems doable to have them do 5-10 minutes of daily baseline psychometrics (which double as practice) for 2-4 weeks before the experimental protocols begin? This amount of practice washout might not be enough, but if your effects are strong, it might be.
In reality, that's table stakes for measuring cognitive effects from anything short of the strongest of interventions (like giving vs. withholding caffeine to someone accustomed to having it). I recall the founder of Soylent approached us at the beginning, wanting to test whether it had cognitive benefits. When we told him how much testing he would need to have subjects do, he shelved the idea. A QM-like approach reduces the burden of cognitive testing as much as possible, but you can't reduce it further than this, or you can't power your experiments.
On a more positive note, if you have a small number of participants who are willing to cycle your protocols for a long time, you can get a lot of power by comparing the on- and off-protocol time periods. So if this level of testing and implementation of protocols would be too daunting to consider for dozens of participants, but you have four hardcore people who can do it all for half a year, then you can likely get some very solid results.
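To make that concrete, here's a minimal sketch of such an on/off analysis, assuming a hypothetical per-day log of test scores; the file name, column names, and "daily composite" score are all made up for illustration:

```python
# Hypothetical on/off-protocol comparison for a few participants cycling the
# protocol over months. Each subject serves as their own control: we compare
# their mean score during on-protocol periods against off-protocol periods.
import pandas as pd
from scipy import stats

# Assumed columns: subject, date, on_protocol (bool), score (daily composite)
df = pd.read_csv("daily_scores.csv")  # hypothetical log

# Mean score per subject in each condition
per_subject = df.groupby(["subject", "on_protocol"])["score"].mean().unstack()

# Paired test across subjects: on-protocol mean vs off-protocol mean
t, p = stats.ttest_rel(per_subject[True], per_subject[False])
print(f"on vs off protocol: t={t:.2f}, p={p:.3f}")
```

Randomizing the order and length of the on/off blocks also helps separate the protocol's effect from slow drifts like practice effects.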
If I sound skeptical about expected measured effects from cognitive testing due to various interventions, it's because, as I recall, virtually none of the experiments we ran (on ourselves, with academic collaborators from Stanford, from QS volunteers, etc.) ever led to any significant increases. The exceptions were all around removing negative interventions (being tired, not having your normal stimulants, alcohol, etc.); the supposed positives (meditation, nootropics, music, exercise, specific nutrients, etc.) consistently either did roughly nothing or had a surprising negative effect (butter). What this all reinforced:
This gives me a strong prior against most of the "intervention X boosts cognition!" claims. ("How would you know?")
Still, I'm fascinated by this area and would love to see someone do it right and find the right interventions. If you offset different interventions in your protocols, you can even start to measure which pieces of your overall cocktail work, in general and for specific participants, and which can be skipped or are even hurting performance. I have a very old and poorly recorded talk on a lazy way to do this.
One last point: all of this kind of psychometric testing, like IQ tests, only measures subjects' alert, "aroused" performance, which is close to peak performance and is very hard to affect. Even if you're tired and not at your best but just plodding along, when someone puts a cognitive test in front of you: boom, let's go, wake up, it's time. Energy levels go up, the test goes well, and then you're back to your slump. Most interventions that might make you generally more alert and significantly increase average, passive performance will end up having a negligible impact on the peak, active performance that the tests are measuring. If I were building more cognitive testing tools these days, I would try to build things that infer mental performance passively, without triggering this testing arousal. Perhaps that is where the real impacts from interventions are plentiful, strong, and useful.