In my summary of Consciousness and the Brain (Dehaene, 2014), I briefly mentioned that one of the functions of consciousness is to carry out artificial serial operations; or in other words, implement a production system (equivalent to a Turing machine) in the brain.

While I did not go into very much detail about this model in the post, I’ve used it in later articles. For instance, in Building up to an Internal Family Systems model, I used a toy model where different subagents cast votes to modify the contents of consciousness. One may conceptualize this as equivalent to the production system model, where different subagents implement different production rules which compete to modify the contents of consciousness.

In this post, I will flesh out the model a bit more and apply it to a few other examples, such as emotion suppression, internal conflict, and blind spots.

Evidence accumulation

Dehaene has outlined his model in a pair of papers (Zylberberg, Dehaene, Roelfsema, & Sigman, 2011; Dehaene & Sigman, 2012), though he is not the first to propose this kind of model. Daniel Dennett’s Consciousness Explained (1991) also discusses consciousness as implementing a virtual Turing machine; both cite earlier computational models of the mind, such as Soar and ACT, which work on the same principles.

An important building block in Dehaene’s model is what we know about evidence accumulation and decision-making in the brain, so let’s start by taking a look at that.

Sequential sampling models (SSMs) are a family of models from mathematical psychology that have been developed since the 1960s (Forstmann, Ratcliff, & Wagenmakers, 2016). A particularly common SSM is the diffusion decision model (DDM) of decision-making, in which a decision-maker is assumed to noisily accumulate evidence towards a particular choice. Once the evidence in favor of a particular choice meets a decision threshold, that choice is taken.

For example, someone might be shown dots on a screen, some of which are moving in a certain direction. The task is to tell which direction the dots are moving in. After the person has seen enough dot movements, they will have sufficient confidence to make their judgment. The difficulty of the task can be precisely varied by changing the proportion of moving dots and their speed, making the movement easier or harder to detect. One can then measure how such changes affect the time needed for people to make a judgment. 

A DDM is a simple model with just four parameters:

  • decision threshold: a threshold for the amount of evidence in favor of one option which causes that option to be chosen
  • starting point bias: a person may start biased towards one particular alternative, which can be modeled by them having some initial evidence putting them closer to one threshold than the other
  • drift rate: the average amount of evidence accumulated per time unit
  • non-decision time: when measuring e.g. reaction times, a delay introduced by factors such as perceptual processing which take time but are not involved in the decision process itself

These parameters can be measured from behavioral experiments, and the model manages to fit a wide variety of experimental results and intuitive phenomena well (Forstmann et al., 2016; Ratcliff, Smith, Brown, & McKoon, 2016; Roberts & Hutcherson, 2019). For example, easier-to-perceive evidence in favor of a particular option is reflected in a faster drift rate towards the decision threshold, causing faster decisions. On the other hand, making mistakes or being falsely told that one’s performance on a trial is below that of most other participants prompts caution, increasing people’s decision thresholds and slowing down response times (Roberts & Hutcherson, 2019).
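To make these moving parts concrete, here is a minimal simulation of single DDM trials. Everything in it - parameter values, noise level, the two-option setup - is illustrative rather than fitted to data, but it shows how a higher drift rate or a raised decision threshold translates into faster or slower decisions.

```python
import random

def simulate_ddm(drift_rate, threshold, start_bias=0.0,
                 non_decision_time=0.3, noise_sd=1.0, dt=0.001):
    """Simulate one diffusion-decision trial; returns (choice, reaction time).

    Positive evidence counts towards option A, negative towards option B.
    All parameter values here are illustrative, not fitted to any data.
    """
    evidence = start_bias
    t = 0.0
    while abs(evidence) < threshold:
        # Noisy accumulation: mean drift plus Gaussian noise, scaled by
        # sqrt(dt) so the noise behaves consistently across step sizes.
        evidence += drift_rate * dt + random.gauss(0, noise_sd) * dt ** 0.5
        t += dt
    choice = "A" if evidence > 0 else "B"
    return choice, t + non_decision_time

# Easier-to-perceive evidence (higher drift rate) yields faster decisions;
# a raised threshold (increased caution) would slow them down instead.
easy = [simulate_ddm(drift_rate=2.0, threshold=1.0)[1] for _ in range(500)]
hard = [simulate_ddm(drift_rate=0.5, threshold=1.0)[1] for _ in range(500)]
print(sum(easy) / len(easy), sum(hard) / len(hard))
```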

While the models have been studied the most in the context of binary decisions, one can easily extend the model to a choice between n alternatives by assuming the existence of multiple accumulators, each accumulating evidence towards its own choice, possibly inhibiting the others in the process. Neuroscience studies have identified structures which seem to correspond to various parts of SSMs. For example, in random dot motion tasks, where participants have to indicate the direction that dots on a screen are moving in,

the firing rates of direction selective neurons in the visual cortex (area MT/V5) exhibit a roughly linear increase (or decrease) as a function of the strength of motion in their preferred (or anti-preferred) direction. The average firing rate from a pool of neurons sharing similar direction preferences provides a time varying signal that can be compared to an average of another, opposing pool. This difference can be positive or negative, reflecting the momentary evidence in favor of one direction and against the other. (Shadlen & Shohamy, 2016)
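The multi-alternative extension can be sketched in the same spirit. Below is a toy race between several accumulators with mutual inhibition; the inhibition scheme and all constants are my own illustrative choices, not any specific published model.

```python
import random

def race(drift_rates, threshold=1.0, inhibition=0.2, noise_sd=1.0, dt=0.001):
    """Race between n evidence accumulators, one per choice alternative.

    Each accumulator is pushed down in proportion to the others' activity
    (a simple lateral-inhibition scheme, chosen purely for illustration).
    """
    evidence = [0.0] * len(drift_rates)
    while max(evidence) < threshold:
        total = sum(evidence)
        for i, drift in enumerate(drift_rates):
            others = total - evidence[i]
            evidence[i] += (drift - inhibition * others) * dt \
                + random.gauss(0, noise_sd) * dt ** 0.5
            evidence[i] = max(evidence[i], 0.0)  # firing rates stay non-negative
    return max(range(len(evidence)), key=evidence.__getitem__)

# Three alternatives; the first receives the strongest momentary evidence
# and therefore usually wins the race.
print(race([2.0, 1.0, 0.5]))
```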

Shadlen & Shohamy (2016) note that experiments on more “real-world” decisions, such as decisions on which stock to pick or which snack to choose, also seem to be compatible with an SSM framework. However, this raises a few questions. For instance, it makes intuitive sense why people would take more time on a random motion task when they lose confidence: watching the movements for a longer time accumulates more evidence for the right answer, until the decision threshold is met. But what is the additional evidence that is being accumulated in the case of making a decision based on subjective value?

The authors make an analogy to a symbol task which has been studied in rhesus monkeys. The monkeys need to decide between two choices, one of which is correct. For this task, they are shown a series of symbols, each of which predicts one of the choices as being correct with some probability. Through experience, the monkeys come to learn the weight of evidence carried by each symbol. In effect, they are accumulating evidence not by motion discrimination but by memory retrieval: retrieving some pre-learned association between a symbol and its assigned weight. This “leads to an incremental change in the firing rate of LIP neurons that represent the cumulative [likelihood ratio] in favor of the target”.

The proposal is that humans make choices based on subjective value using a similar process: by perceiving a possible option and then retrieving memories which carry information about the value of that option. For instance, when deciding between an apple and a chocolate bar, someone might recall how apples and chocolate bars have tasted in the past, how they felt after eating them, what kinds of associations they have about the healthiness of apples vs. chocolate, any other emotional associations they might have (such as fond memories of their grandmother’s apple pie) and so on.

Shadlen & Shohamy further hypothesize that the reason why the decision process seems to take time is that different pieces of relevant information are found in physically disparate memory networks and neuronal sites. Access from the memory networks to the evidence accumulator neurons is physically bottlenecked by a limited number of “pipes”. Thus, a number of different memory networks need to take turns in accessing the pipe, causing a serial delay in the evidence accumulation process.


The biological Turing machine

In Consciousness and the Brain, Dehaene considers the example of doing arithmetic. Someone who is calculating something like 12 * 13 in their head might first multiply 10 by 12, keep the result in memory, multiply 3 by 12, and then add the results together. Thus, if a circuit in the brain has learned to do multiplication, consciousness can be used to route its results to a temporary memory storage, with those results then being routed from the storage to a circuit that does addition.

Production systems in AI are composed of if-then rules (production rules) which modify the contents of memory: one might work by detecting the presence of an item like “10 * 12” and rewriting it as “120”. On a conceptual level, the brain is proposed to do something similar: various contents of consciousness activate neurons storing something like production rules, which compete to fire. The first one to fire gets to apply its production, changing the contents of consciousness.
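A toy production system makes the arithmetic example concrete. The rule format and the string-rewriting working memory below are illustrative conventions of mine, not anything from Dehaene’s papers:

```python
import re

def multiply(wm):
    """Rewrite the first 'a * b' in working memory as its product."""
    m = re.search(r"(\d+) \* (\d+)", wm)
    return wm.replace(m.group(0), str(int(m.group(1)) * int(m.group(2)))) if m else None

def add(wm):
    """Rewrite the first 'a + b' in working memory as its sum."""
    m = re.search(r"(\d+) \+ (\d+)", wm)
    return wm.replace(m.group(0), str(int(m.group(1)) + int(m.group(2)))) if m else None

def decompose(wm):
    """A learned decomposition rule: 12 * 13 becomes 10 * 12 + 3 * 12."""
    return wm.replace("12 * 13", "10 * 12 + 3 * 12") if "12 * 13" in wm else None

working_memory = "12 * 13"
rules = [decompose, multiply, add]
while True:
    # Production selection: the first rule whose condition matches fires
    # and rewrites the contents of working memory.
    for rule in rules:
        result = rule(working_memory)
        if result is not None:
            working_memory = result
            break
    else:
        break  # no rule matched; halt
print(working_memory)  # 156
```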

If I understand Dehaene’s model correctly, he proposes to apply the neural mechanisms discussed in the previous sections - such as neuron groups which accumulate evidence towards some kind of decision - at a slightly lower level. In the behavioral experiments, there are mechanisms which accumulate evidence towards which particular physical actions to take, but a person might still be distracted by unrelated thoughts while performing that task. Dehaene’s papers look at the kinds of mechanisms that choose which thoughts to think. That is, there are accumulator neurons which take “actions” to modify the contents of consciousness and working memory.

We can think of this as a two-stage process:

  1. A lower-level process involving subconscious “decisions” about what thoughts to think and what kind of content to maintain in consciousness. The evidence indicating which kind of conscious content is best suited for the situation comes in part from hardwired priorities, and in part from stored associations about the kinds of thoughts that have previously produced beneficial results.
  2. A higher-level process involving decisions about what physical actions to take. While the inputs to this process do not necessarily need to go through consciousness, consciously perceived evidence has a much higher weight. Thus, the lower-level process has significant influence on which evidence gets to the accumulators on this level.

To be clear, this does not necessarily correspond to two clearly distinct levels: Zylberberg, Dehaene, Roelfsema, & Sigman (2011) do not talk about there being any levels, and they suggest that “triggering motor actions” is one of the possible decisions involved. But their paper seems to mostly be focused on actions - or, in their language, production rules - which manipulate the contents of consciousness.

There seems to me to be a conceptual difference between the kinds of actions that change the contents of consciousness, and the kinds of actions which accumulate evidence over many items in consciousness (such as successively retrieved memories of snacks). Zylberberg et al. talk about a “winner-take-all race” to trigger a production rule, which to me implies that the evidence accumulated in favor of each production rule is cleared each time the contents of consciousness change. This is seemingly incompatible with accumulating evidence over many consciousness-moments, so postulating a two-level distinction between accumulators seems like a straightforward way of resolving the issue.
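Here is a minimal sketch of that proposed resolution, under my own framing: the production-rule accumulators race and are reset within each cognitive cycle, while a persistent action-level accumulator integrates whatever the fired rules broadcast across many successive conscious moments.

```python
# Persistent action-level accumulators: these survive across cognitive cycles.
action_evidence = {"trust": 0.0, "don't trust": 0.0}
ACTION_THRESHOLD = 2.0

# Successive conscious contents, each tagged with the option it supports and
# the weight of evidence it carries (numbers invented for illustration).
conscious_moments = [
    ("trust", 0.9),        # memory of the friend's past loyalty
    ("don't trust", 0.6),  # memory of a broken promise
    ("trust", 0.8),
    ("trust", 0.7),
]

for option, weight in conscious_moments:
    # Within each cycle, production-rule accumulators would race here and be
    # reset once a rule fires; only the fired rule's broadcast survives the
    # cycle and feeds the persistent action-level accumulator.
    action_evidence[option] += weight
    if action_evidence[option] >= ACTION_THRESHOLD:
        print("decision:", option)
        break
```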

[EDIT: Hazard suggests that the two-level split is implemented by the basal ganglia carrying out evidence accumulation across changes in conscious content.]

As an aside, I am, like Dehaene, treating consciousness and working memory as basically synonymous for the purposes of this discussion. This is not strictly correct; e.g. there may be items in working memory which are not currently conscious. However, since it’s generally thought that items in working memory need to be actively rehearsed through consciousness in order to be maintained, I think that this equivocation is okay for these purposes.

Here’s a conceptual overview of the stages in the “biological Turing machine’s” operation (as Zylberberg et al. note, a production firing “is essentially equivalent to the action performed by a Turing machine in a single step”):

1. The production selection stage

At the beginning of a cognitive cycle, a person’s working memory contains a number of different items, some internally generated (e.g. memories, thoughts) and some external (e.g. the sight or sound of something in the environment). Each item in memory may activate (contribute evidence to) neurons which accumulate evidence towards triggering a particular kind of production rule. When some accumulator neurons reach their decision threshold, they apply their associated production rule.

In the above image, the blue circles at the bottom represent active items in working memory. Two items are activating the same group of accumulator neurons (shown red) and one is activating an unrelated one (shown brown).

2. Production rule ignition

Once a group of accumulator neurons reaches its decision threshold and fires a production rule, the model suggests that there are a number of things that the rule can do. In the above image, an active rule is modifying the contents of working memory: taking one of the blue circles, deleting it, and creating a new blue circle nearby. Hypothetically, this might be something like taking the mental objects holding “120” and “36”, adding them together, and storing the result “156” in memory.

Obviously, since we are talking about brains, expressions like "writing into memory" or "deleting from memory" need to be understood in somewhat different terms than in computers; something being “deleted from working memory” mostly just means that a neuronal group which was storing the item in its firing pattern stops doing so.

The authors suggest that among other things, production rules can:

  • trigger motor actions (e.g. saying or doing something)
  • change the contents of working memory to trigger a new processing step (e.g. saving the intermediate stage of an arithmetic operation, together with the intention to proceed with the next step)
  • activate and broadcast information that is in a “latent” state (e.g. retrieving a memory and sending it to consciousness)
  • activate peripheral processors capable of performing specific functions (e.g. changing the focus of attention)

3. New production selection

After the winning production rule has been applied, the production selection phase begins anew. At this stage or a future one, some kind of a credit assignment process likely modifies the decision weights involved in choosing production rules: if a particular rule was activated in particular circumstances and seemed to produce positive consequences, then the connections which caused those circumstances to be considered evidence for that rule are strengthened.
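Putting the three stages together, here is a toy cognitive cycle. The rule representation, the noisy winner-take-all scoring, and the weight update are all my own illustrative assumptions, not the model as specified by Zylberberg et al.:

```python
import random

def matches_number(item):
    return item.isdigit()

# Each production: a condition on working-memory items, a rewrite action,
# and a learned weight that biases the selection race.
productions = [
    {"name": "double", "cond": matches_number, "weight": 1.0,
     "act": lambda wm, item: (wm - {item}) | {str(int(item) * 2)}},
    {"name": "discard", "cond": matches_number, "weight": 0.5,
     "act": lambda wm, item: wm - {item}},
]

def cognitive_cycle(working_memory, noise_sd=0.1):
    # 1. Production selection: every matching (rule, item) pair accumulates
    #    evidence; the strongest noisy total wins the race.
    candidates = [(p, item) for p in productions
                  for item in working_memory if p["cond"](item)]
    if not candidates:
        return working_memory, None
    rule, item = max(candidates,
                     key=lambda c: c[0]["weight"] + random.gauss(0, noise_sd))
    # 2. Ignition: the winning rule rewrites the contents of working memory.
    return rule["act"](working_memory, item), rule

wm = {"7", "a memory of breakfast"}
wm, fired = cognitive_cycle(wm)
# 3. Credit assignment: if the outcome seemed beneficial, strengthen the
#    connections that made this rule win in these circumstances.
if fired is not None and "14" in wm:
    fired["weight"] += 0.1
print(wm, fired["name"] if fired else None)
```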

Practical relevance

Okay, so why do we care? What is the practical relevance of this model?

First, this helps make some of my previous posts more concrete. In Building up to an Internal Family Systems model, I proposed some sort of a process where different subagents were competing to change the contents of consciousness. For instance, “manager” subagents might be trying to manipulate the contents of consciousness so as to avoid unpleasant thoughts and to keep the person out of dangerous circumstances.

People who do IFS, or other kinds of “parts work”, will notice that different subagents are associated with different kinds of bodily sensations and flavors of consciousness. A priori, there shouldn’t be any particular reason for this… except, perhaps, if the strength of such sensations correlated with the activation of a particular subagent, with those sensations then being internally used for credit assignment to identify and reward subagents which had been active in a given cognitive cycle. (This is mostly pure speculation, but supported by some observations to which I hope to return in a future post.)

In my original post, I mostly talked about exiles - neural patterns blocked from consciousness by other subagents - as being subagents related to a painful memory. But while it is not emphasized as much, IFS holds that other subagents can in principle be exiled too. For example, a subagent which tends to react with anger may frequently lead to harmful consequences, and then be blocked by other subagents. This can easily be modeled using the biological Turing machine framework: over time, the system learns decisions which modify consciousness so as to prevent the activation of a production rule that gives power to the angry subagent. Since this helps avoid harmful consequences, it begins to happen more and more often.
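As a sketch of how such exiling could fall out of simple credit assignment (again with invented rules, outcomes, and learning rates): a production whose firing is reliably punished loses weight, while a competing production that preempts it is rewarded, until the first one effectively never wins the race.

```python
import random

weights = {"anger": 1.0, "suppress anger": 0.2}

def one_cycle():
    # Noisy winner-take-all race between the two competing productions.
    scores = {name: w + random.gauss(0, 0.1) for name, w in weights.items()}
    winner = max(scores, key=scores.get)
    # Invented outcome values: letting anger fire leads to harm, while
    # suppressing it avoids harm and is mildly rewarding.
    outcome = -1.0 if winner == "anger" else +0.5
    weights[winner] += 0.1 * outcome  # simple credit assignment
    return winner

for _ in range(200):
    one_cycle()
print(weights)  # "suppress anger" now reliably outcompetes "anger"
```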

Hazard has a nice recent post about this kind of thing happening with emotions in general:

So young me is upset that the grub master for our camping trip forgot half the food on the menu, and all we have for breakfast is milk. I couldn't "fix it" given that we were in the woods, so my next option was "stop feeling upset about it." So I reached around in the dark of my mind, and Oops, the "healthily process feelings" lever is right next to the "stop listening to my emotions" lever.
The end result? "Wow, I decided to stop feeling upset, and then I stopped feeling upset. I'm so fucking good at emotional regulation!!!!!"
My model now is that I substituted "is there a monologue of upsetness in my conscious mental loop?" for "am I feeling upset?". So from my perspective, it just felt like I was very in control of my feelings. Whenever I wanted to stop feeling something, I could. When I thought of ignoring/repressing emotions, I imagined trying to cover up something that was there, maybe with a story. Or I thought if you poked around ignored emotions there would be a response of anger or annoyance. I at least expected that if I was ignoring my emotions, that if I got very calm and then asked myself, "Is there anything that you're feeling?" I would get an answer.
Again, the assumption was, "If it's in my mind, I should be able to notice if I look." This ignored what was actually happening, which was that I was cutting the phone lines so my emotions couldn't talk to me in the first place.

Feeling upset feels bad, ceasing to feel upset feels good. Brain notices that there is some operation which causes the feeling of upset to disappear from consciousness: carrying out this operation also produces a feeling of satisfaction in the form of “yay, I’m good at emotional regulation!”. As a result of being rewarded, it eventually becomes so automatic as to block even hints of undesired emotions, making the block in question impossible to notice.

Another observation is that in IFS as well as in Internal Double Crux, an important mental move seems to be “giving subagents a chance to finish talking”. For instance, subagent A might hold a consideration pointing in a particular direction, while subagent B holds a consideration in the opposite direction. When A starts presenting its points, B interrupts with its own point; in response, A interrupts with its point. It seems to be possible to commit to not taking a decision before having heard both subagents, and having done that, ask them to take turns presenting their points and not interrupt each other. What exactly is going on here?

Suppose that a person is contemplating the decision, “should I trust my friend to have my back in a particular risky venture”. Subagent A holds the consideration “allies are important, and we don’t have any, we should really trust our friend so that we would have more allies”. Subagent B holds the consideration “being betrayed would be really bad, and our friend seems untrustworthy, it’s important that we don’t sign up for this”. Subagent A considers it really important to go on this venture together; subagent B considers it really important not to.

Recall that human decision-making happens by accumulating evidence towards different choices, until a decision threshold is met. If A were allowed to present all its evidence in favor of signing up for the venture, that might sway the decision over the threshold before B was able to present the evidence against. Thus, there is a mechanism which allows B to “interrupt” A, in order to present its own evidence. But now the roles are reversed: unless B is interrupted in turn, its evidence risks meeting the opposite decision threshold prematurely, so A must interrupt.
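A toy illustration of why the commitment helps, using entirely invented numbers: if subagent A presents all its considerations uninterrupted, the shared accumulator can cross the threshold before B is heard at all, whereas strict turn-taking keeps the running total near zero until both sides have finished.

```python
import random

def decide(evidence_stream, threshold=3.0):
    """Accumulate signed evidence; positive favors 'trust', negative 'don't'."""
    total = 0.0
    for piece in evidence_stream:
        total += piece + random.gauss(0, 0.2)
        if abs(total) >= threshold:
            return ("trust" if total > 0 else "don't trust"), "threshold hit early"
    return ("trust" if total > 0 else "don't trust"), "all evidence heard"

a_points = [+1.0] * 5  # subagent A's considerations in favor of trusting
b_points = [-1.0] * 5  # subagent B's considerations against

# A presents everything uninterrupted: the decision often fires before B speaks.
print(decide(a_points + b_points))
# Turn-taking keeps the total near zero until both sides have been heard out.
print(decide([p for pair in zip(a_points, b_points) for p in pair]))
```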

Subjectively, this is experienced as intense internal conflict, with two extreme considerations pushing in opposite directions, allowing no decision to be made - unless there is a plausible commitment to not making a decision until both have been heard out. (To me, this feels like my attention being caught in a tug-of-war between one set of considerations versus another. Roberts & Hutcherson (2019) note that “A large body of work suggests that negative information draws focus through rapid detection [64–68] and attentional capture [69–71]. [...] Several studies now show that attending to a choice alternative or attribute increases its weighting in the evidence accumulation process [72–75]. To the extent that negative affect draws attention to a choice-relevant attribute or object, it should thus increase the weight it receives.”)

There’s one more important consideration. Eliezer has written about cached thoughts - beliefs which we have acquired once, never re-evaluated, and just acted on from then onwards. But this model suggests that things may be worse: it’s not just that we are running on cached thoughts. Instead, even the pre-conscious mechanisms deciding which thoughts are worth re-evaluating are running on cached values.

Sometimes external evidence may be sufficient to force an update, but there can also be self-fulfilling blind spots. For instance, you may note that negative emotions never even surface into your consciousness. This observation then triggers a sense of satisfaction about being good at emotional regulation, so that thoughts about alternative - and less pleasant - hypotheses are never selected for consideration. In fact, evidence to the contrary may feel actively unpleasant to consider, triggering subagents which use feelings such as annoyance - or if annoyance would be too suspicious, just plain indifference - to push that evidence out of consciousness, before it can contribute to a decision.

And the older those flawed assumptions are, the more time there is for additional structures to build on top of them.

This post is part of research funded by the Foundational Research Institute. Thanks to Maija Haavisto for line editing an initial draft.

References

Dehaene, S. (2014). Consciousness and the Brain: Deciphering How the Brain Codes Our Thoughts. New York, New York: Viking.

Dehaene, S., & Sigman, M. (2012). From a single decision to a multi-step algorithm. Current Opinion in Neurobiology, 22(6), 937–945.

Dennett, D. C. (1991). Consciousness Explained (1st edition). Boston: Little Brown & Co.

Forstmann, B. U., Ratcliff, R., & Wagenmakers, E.-J. (2016). Sequential Sampling Models in Cognitive Neuroscience: Advantages, Applications, and Extensions. Annual Review of Psychology, 67, 641–666.

Ratcliff, R., Smith, P. L., Brown, S. D., & McKoon, G. (2016). Diffusion Decision Model: Current Issues and History. Trends in Cognitive Sciences, 20(4), 260–281.

Roberts, I. D., & Hutcherson, C. A. (2019). Affect and Decision Making: Insights and Predictions from Computational Models. Trends in Cognitive Sciences, 23(7), 602–614.

Shadlen, M. N., & Shohamy, D. (2016). Decision Making and Sequential Sampling from Memory. Neuron, 90(5), 927–939.

Zylberberg, A., Dehaene, S., Roelfsema, P. R., & Sigman, M. (2011). The human Turing machine: a neural framework for mental programs. Trends in Cognitive Sciences, 15(7), 293–300.

Comments

Responding to your Dehaene book review and IFS thoughts as well as this:

On Dehaene: I read the 2018 version of Dehaene's Consciousness and the Brain a while ago and would recommend it as a good intro to cognitive neurosci, your summary looks correct.

On meditation: it's been said before, but >90% of people reading this are going to be high on "having models of how their brain works", and low on "having actually sat down and processed their emotions through meditation or IFS or whatevs". Double especially true for all the depressed Berkeley rationalists.

Oh, and fun thing: surely you've heard the idea that "pretty much all effective therapy and meditation and shit is just helping people sit down until they process their emotions instead of running from them like usual". Well, here's IFS being used that way, see from 4:51-5:32.

The production rule model is interesting to me in that it fits well with Michael Commons’ notion of how developmental psychology works. Specifically, Commons has a formal version of his theory in which developmental psychology is about how humans learn new ways to perform more “complex” production rules - the same sort of rules operating on more complex types.

Promoted to curated: I really like this sequence, and I think this post might actually be one of the best in the sequence, due to the degree to which it tries really hard to create concrete technical models of how the human mind works (which I think also makes it a bit less accessible than some of the other posts in the sequence).