James_Miller comments on Experiment Idea Thread - Spring 2011 - Less Wrong

28 Post author: Psychohistorian 06 May 2011 06:10PM

Comment author: James_Miller 06 May 2011 06:35:56PM 12 points [-]

Does the Dual n-back increase intelligence?

Some evidence indicates that playing the videogame Dual n-back increases working memory and fluid intelligence.

A group of us would first take a memory test. Next, a randomly selected subgroup would play Dual n-back a few hours a week for, say, a month. Then both groups would take another memory test. Next, we would wait, say, two months with no one playing the game. Finally, the two groups would take a memory test a third time. Even if we omitted the control group, we could probably still learn a lot.
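Before running a design like this, it would be worth checking whether a group of plausible size could detect the effect at all. Here is a minimal Monte Carlo power sketch for the between-group comparison of pretest-to-posttest gains; all numbers (group size, effect size in standard-deviation units) are illustrative assumptions, not figures from the thread or from Jaeggi's studies.

```python
import random
import statistics

def simulate_power(n_per_group=10, effect_sd=0.5, trials=2000,
                   alpha=0.05, seed=0):
    """Monte Carlo estimate of the probability that a two-sample z-test
    on gain scores detects an assumed training effect of `effect_sd`
    standard deviations, with `n_per_group` people per arm."""
    rng = random.Random(seed)
    z_crit = statistics.NormalDist().inv_cdf(1 - alpha / 2)
    hits = 0
    for _ in range(trials):
        # Gain scores: control centered at 0, treated shifted by the
        # assumed effect, both with unit standard deviation.
        control = [rng.gauss(0, 1) for _ in range(n_per_group)]
        treated = [rng.gauss(effect_sd, 1) for _ in range(n_per_group)]
        diff = statistics.mean(treated) - statistics.mean(control)
        se = ((statistics.pvariance(control)
               + statistics.pvariance(treated)) / n_per_group) ** 0.5
        if se > 0 and abs(diff / se) > z_crit:
            hits += 1
    return hits / trials

# With ~10 people per arm and a moderate assumed effect, power is low,
# which is why a small volunteer study may be uninformative either way.
print(simulate_power())
```

Under these toy assumptions, only a large training effect would reliably show up with a dozen or so participants per arm, which bears on how much one should conclude from a null result.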

Here is a free version of the game.

Comment author: D_Malik 06 May 2011 08:22:28PM 3 points [-]

In addition to the memory test, we should also use some fluid intelligence test (like RAPM). It would probably be good to use unspeeded RAPM and other fluid intelligence tests, rather than speeded RAPM, which is controversial.

Also, we should investigate different modes, like multiple stimuli, arithmetic n-back, and crab n-back.

Comment author: Louie 08 May 2011 06:41:49PM 0 points [-]

A few of us at Singularity Institute tested Dual N-Back last year. For 1 week, 13 people were tested on dissimilar metrics of intelligence while some of them performed the same kind of Dual N-Back done in the original Jaeggi study.

Conclusion: It doesn't make you smarter.

Bonus: You get better at Dual N-Back though!

Interestingly, at around the same time we were doing our tests last year, the original researcher "replicated" her own results and published them again using new data. I'm somewhat confused. I don't want to say Jaeggi doesn't understand training and practice effects... but I'm struggling to see how else to explain this.

That said, it would still be cool to see LW folks test intelligence-amplification (IA) interventions. I just recommend exploring more promising ones, perhaps seeking to confirm the results of other studies instead?

Comment author: AnnaSalamon 08 May 2011 11:42:57PM *  12 points [-]

Louie, I don't remember the details of this. I thought folks ended up with a very small, underpowered study (such that the effect would have had to be very large to show up at all), and that the main goal of the "study", such as it was, was to test our procedures for running potential future experiments?

Could you refresh my memory on what tests were run?

Also, speaking about how "the folks at SIAI believe X" based on a small-scale attempt run within the visiting fellows program last summer seems misleading; it may inaccurately suggest that you're speaking for Eliezer, Michael Vassar, or others. I was there (unlike Eliezer or Michael) and I don't recall having the beliefs you mention.

Comment author: Will_Newsome 10 May 2011 11:47:20AM *  1 point [-]

This definitely agrees with my memory, too...

I mostly felt compelled to sign in and comment because I wanted to point out that designating something as "[t]he opinion of folks at Singularity Institute" or the like is often an annoying simplification or inaccuracy: it lets people feel justified in thereafter using their social cognition to model a group roughly as they would a person.

Comment author: gwern 11 May 2011 02:34:28PM 0 points [-]

When I asked back in September 2010, you said

They're ad hoc, we've used one for a dual n-back study which ended up yielding insufficient data....We didn't study long enough to get any statistically significant data. Like, not even close....really, there's no information there, no matter how much Bayes magic you use

So at least your memory has been consistent over the last 9 months.

Comment author: cousin_it 08 May 2011 06:49:05PM *  8 points [-]

Hmm. If your replication attempt was good science, you could help the world by publishing it. If it wasn't good science, you probably shouldn't update on it very strongly.

Comment author: [deleted] 08 May 2011 07:08:50PM *  4 points [-]

For 1 week, 13 people were tested on dissimilar metrics of intelligence

One week seems very short compared to the published studies. I didn't check them all, but one mentioned on Wikipedia ran for 5 weeks and another whose abstract I found ran for 4 weeks. It seems probable that any effects on the brain would be slow to accumulate; as a point of comparison, it takes much longer than one week to assess the value of a muscle training program.
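The duration point above can be made concrete with a toy model. Under the purely illustrative assumption that training benefit accumulates linearly (the per-week rate below is made up, not taken from any study), a 1-week trial is hunting for a much smaller effect than the 4- to 5-week studies were:

```python
per_week_gain_sd = 0.1  # hypothetical accumulation rate, in SD units

def expected_effect(weeks, rate=per_week_gain_sd):
    """Expected cumulative training effect, in SD units, assuming
    benefit accumulates linearly with weeks of practice."""
    return rate * weeks

for weeks in (1, 4, 5):
    print(f"{weeks} week(s) -> {expected_effect(weeks):.1f} SD expected effect")
```

If anything like this accumulation holds, a one-week null result says little about what a month-long protocol would show.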