Cross-posted from my personal blog.
Last month I finally got round to reading The Eureka Factor by John Kounios and Mark Beeman, a popular book summarising research on 'insightful' thinking. I first mentioned it a couple of years ago after I'd read a short summary article, when I realised it was directly relevant to my recurring 'two types of mathematician' obsession:
I wasn't too sure what I was getting into. The replication crisis has made me hyperaware of the dangers of uncritically accepting any results in psychology, and I'm way too ignorant of the field to have a good sense for which results still look plausible. However, the book turned out to be so extraordinarily Relevant To My Interests that I couldn't resist writing up a review anyway.
The final chapters had a few examples along the lines of '[weak environmental effect] primes people to be more/less insightful', and I know enough to stay away from those, but the earlier parts look somewhat more solid to me. I haven't made much effort to trace back references, though, and I could easily still be too credulous.
(I didn't worry so much about replication with my previous post on the Cognitive Reflection Test. Getting the bat and ball question wrong is hardly the kind of weak effect that you need a sensitive statistical instrument to detect. It's almost impossible to stop people getting it wrong! I did steer clear of any more dubious priming-style results, though, like the claim that people do better on the CRT when reading it 'in a disfluent font'.)
Insight and intuition
First, it's worth getting clear on exactly what Kounios and Beeman mean by 'insight'. As they use it, insight is a specific type of creative thinking, which they define more generally as 'the ability to reinterpret something by breaking it down into its elements and recombining these elements in a surprising way to achieve some goal.' Insight is distinguished by its suddenness and lack of conscious control:
Insights tend to have a few other features in common. Solving a problem by insight is normally very satisfying: the insight comes into consciousness along with a small jolt of positive affect. The insight itself is usually preceded by a longer period of more effortful thought about the problem. Sometimes this effortful phase comes just before the moment of insight, while at other times there is an 'incubation' period, where the solution pops into your head while you're taking a break from deliberately thinking about it.
I'm not really going to get into this part in my review, but the related word 'intuition' is also used in an interestingly specific sense in the book, to describe the sense that a new idea is lurking beneath the surface, but is not consciously accessible yet. Intuitions often precede an insight, but have a different feel to the insight itself:
Insight problems
To study insight, psychologists need to come up with problems that reliably trigger an insight solution. One classic example discussed in The Eureka Factor is the Nine Dot Problem, where you are asked to connect the following 3 by 3 grid of black dots using only four lines, without retracing or taking your pen off the page:
If you've somehow avoided seeing this puzzle before, think about it for a while first. I've put the solution and my discussion of it in a spoiler block below:
A solution can be found in the Wikipedia article on insight problems here. It'll probably look irritatingly obvious once you see it. The key feature of the solution is that the lines you draw have to extend outside the confines of the square of dots you start with (thus spawning a whole subgenre of annoying business literature on 'thinking outside the box'). Nothing in the rules forbids this, but the setup focusses most people's attention on the grid itself, and breaking out of this mindset requires a kind of reframing, a throwing away of artificially imposed constraints. This is a common characteristic of insight problems.
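(This isn't from the book, but if you want to convince yourself that the solution really works, here's a quick Python sketch checking that one well-known four-segment path hits all nine dots. The coordinates are just one standard variant of the solution; the point to notice is that two of the turning points fall outside the 3 by 3 square.)

```python
# A quick sanity check (mine, not the book's): one classic nine-dot solution,
# four connected straight segments, with two turning points outside the grid.
from itertools import product

DOTS = set(product(range(3), range(3)))  # the nine dots at integer coordinates

# One well-known solution path; each segment starts where the previous one ends.
PATH = [(0, 2), (2, 0), (-1, 0), (2, 3), (2, 0)]

def on_segment(p, a, b):
    """True if point p lies on the closed segment from a to b."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    collinear = (bx - ax) * (py - ay) == (by - ay) * (px - ax)
    within = min(ax, bx) <= px <= max(ax, bx) and min(ay, by) <= py <= max(ay, by)
    return collinear and within

covered = {dot for a, b in zip(PATH, PATH[1:]) for dot in DOTS if on_segment(dot, a, b)}
assert covered == DOTS  # all nine dots are hit by the four segments

outside = [p for p in PATH if not (0 <= p[0] <= 2 and 0 <= p[1] <= 2)]
print("turning points outside the grid:", outside)  # [(-1, 0), (2, 3)]
```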
This characteristic also makes insight hard to test. For testing purposes, it's useful to have a large stock of similar puzzles in hand. But a good reframing like the one in the Nine Dot Problem tends to be a bit of a one-off: once you've had the idea of extending the lines outside the box, it applies trivially to all similar puzzles, and not at all to other types of puzzle.
(I talked about something similar in my last post, on the Cognitive Reflection Test. The test was inspired by one good puzzle, the 'bat and ball problem', and adds two other questions that were apparently picked to be similar. Five thousand words and many comments later, it's not obvious to me or most of the other commenters that these three problems form any kind of natural set at all.)
Kounios and Beeman discuss several of these eye-catching 'one-off' problems in the book, but their own research focusses on a more standardisable kind of puzzle, the Remote Associates Test. This test gives you three words, such as
PINE CRAB SAUCE
and asks you to find the common word that links them. The authors claim that these puzzles can be solved either with or without insight, and they asked participants to self-categorise each response as fitting into either the 'insightful' or the 'analytic' category:
This categorisation seems suspiciously neat, and if I rely on my own introspection when solving one of these (which is obviously dubious in itself) it feels like more of a mix. I'll often generate some verbal noise about cakes and trees that sounds vaguely like I'm doing something systematic, but the main business of solving the thing seems to be going on nonverbally elsewhere. But I do think there's something there – the answer can be very immediate and 'poppy', or it can surface after a longer, more consciously accessible process of trying out plausible words. This was tested in a more objective way by seeing what people do when they don't come up with the answer:
Kounios and Beeman's research focussed on finding neural correlates of the 'aha' moment of insight, using a combination of an EEG test to pinpoint the time of the insight, and fMRI scanning to locate the brain region:
I'm not sure how settled this is, though. I haven't tried to do a proper search of the literature, but certainly a review from 2010 describes the situation as very much in flux:
(The book was published somewhat later, in 2015, but mostly cites research from prior to this review, such as this paper.)
As an outsider it's going to be pretty hard for me to judge this without spending a lot more time on it than I really want to right now. However, regardless of how this holds up, I was really interested in the authors' discussion of why a right-hemisphere neural correlate of insight would make sense.
Insight and context
One of the authors, Mark Beeman, had previously studied language deficits in people who had suffered brain damage to the right hemisphere. One such patient was the trial attorney D.B.:
An example of the kind of problem D.B. struggled with is the following:
If D.B. was given a statement about something stated explicitly in the text, such as 'Joan went to the park on Saturday', he could say whether it was true or false with no problems at all. In fact, he did better than all of the control subjects on these sorts of explicit questions. But if he was instead presented with a statement like 'Joan cut her foot', where some of the facts are left implicit, he was unable to answer.
This was interesting to me, because it seems so directly relevant to the discussion last year on 'cognitive decoupling'. This is a term I'd picked up from Sarah Constantin, who herself got it from Keith Stanovich:
The patients in Beeman's study have so much difficulty with contextualisation that they struggle with anything at all that is left implicit, even straightforward inferences like 'Joan cut her foot'. This appears to match with other evidence from visual half-field studies, where subjects are presented with words on either the right or left half of the visual field. Those on the left half will go first to the right hemisphere, so that the right hemisphere gets a head start on interpreting the stimulus. This shows a similar difference between hemispheres:
Why would picking up on these weak associations be relevant to insight? The story seems to be that this tangle of secondary meanings - the 'Lovecraftian penumbra of monstrous shadow phalanges' - works to pull your attention away from the obvious interpretation you're stuck with, helping you to find a clever new reframing of the problem.
This makes a lot of sense to me as a rough outline. In my own experience at least, the kind of thinking that is likely to lead to an insight experience feels softer and more diffuse than the more 'analytic' kind, more a process of sort of rolling the ideas around gently in your head and seeing if something clicks than a really focussed investigation of the problem. 'Thinking too hard' tends to break the spell. This fits well with the idea that insights are triggered by activation of weak associations.
Final thoughts
There's a lot of other interesting material in the book about the rest of the insight process, including the incubation period leading up to an insight flash, and the phenomenon of 'intuitions', where you feel that an insight is on its way but you don't know what it is yet. I'll never get through this review if I try to cover all of that, so instead I'm going to finish up with a couple of weak associations of my own that got activated while reading the book.
I've been getting increasingly dissatisfied with the way dual process theories split cognition into a fast/automatic/intuitive 'System 1' and a slow/effortful/systematic 'System 2'. System 1 in particular has started to look to me like an amorphous grab bag of all kinds of things that would be better separated out.
The Eureka Factor has pushed this a little further, by bringing out a distinction between two things that normally get lumped under System 1 but are actually very different. One obvious type of System 1-ish behaviour is routine action, the way you go about tasks you have done many times before, like making a sandwich or walking to work. These kinds of activities require very little explicit thought and generally 'just happen' in response to cues in the environment.
The kind of 'insightful' thinking discussed in The Eureka Factor would also normally get classed under System 1: it's not very systematic and involves a fast, opaque process where the answer just pops into your head without much explanation. But it's also very different to routine action. It involves deliberately choosing to think about a new situation, rather than one you have seen many times before, and a successful insight gives you a qualitatively new kind of understanding. The insight flash itself is a very noticeable, enjoyable feature of your conscious attention, rather than the effortless, unexamined state of absorbed action.
This was pointed out to me once before by Sarah Constantin, in the comments section of her Distinctions in Types of Thought:
I'd sort of had this at the back of my head since then, but the book has really brought out the distinction clearly. I'm sure these aren't the only types of thinking getting shoved into the System 1 category, and I get the sense that there's a lot more splitting out that I need to do.
I also thought about how the results in the book fit in with my perennial 'two types of mathematician' question. (This is a weird phenomenon I've noticed where a lot of mathematicians have written essays about how mathematicians can be divided into two groups; I've assembled a list of examples here.) 'Analytic' versus 'insightful' seems to be one of the distinctions between groups, at least. It seems relevant to Poincaré’s version, for instance:
In fact, Poincaré once also gave a striking description of an insight flash himself:
If the insight/analysis split is going to be relevant here, it would require that people favour either 'analytic' or 'insight' solutions as a general cognitive style, rather than switching between them freely depending on the problem. The authors do indeed claim that this is the case:
This is based on their own research, in which they recorded participants' self-reports of whether they were using an 'insight' or 'analytic' approach to solve anagrams, and compared these with EEG recordings of their resting state. They found a number of differences, including more right-hemisphere activity in the 'insight' group and lower levels of communication between the frontal lobe and other parts of the brain, indicating a more disorderly thinking style with less top-down control. This may suggest more freedom for weak associations between thoughts to have a crack at the problem, without being overruled by the dominant interpretation.
Again, and you've probably got very bored of this disclaimer by now, I have no idea how well the details of this will hold up. That's true for pretty much every specific detail in the book that I've discussed here. Still, the link between insight and weak associations makes a lot of sense to me, and the overall picture certainly triggered some useful reframings. That seems very appropriate for a book about insight.