(previous title: Very low cognitive load) 

 

Trusting choices made by the same brain that turns my hot 9th grade teacher into a knife-bearing possum at the last second every damn night.

Sean Thomason

 

We can't trust brains when taken as a whole. Why should we trust their subareas?

 

Cognitive load is the load placed on the executive control of working memory. For most tasks, the more parallel or extraneous cognitive load you carry, the worse you will perform. (The process may be the same as what the literature calls "ego depletion" or "System 2 depletion"; the jury is still out on that.)

If you go here, enter 0 as the lower limit and 1,000,000 as the upper limit, and try to keep the resulting number in mind until you are done reading the post and comments, you will carry a bit of extra load while you read this post.
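If the linked generator is unavailable, here is a minimal stand-in sketch (assuming Python and its standard random module; the variable name is purely illustrative) that draws the same kind of number for you to hold in mind:

```python
# Draw one integer in the inclusive range [0, 1,000,000] and try to
# hold it in working memory while reading the rest of the post.
import random

number_to_hold = random.randint(0, 1_000_000)  # randint is inclusive on both ends
print(f"Keep this number in mind: {number_to_hold}")
```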

Now, you may process numbers verbally, visually, or both. More generally, anything you keep in mind is likely allocated to a part of the brain primarily concerned with some sensory modality, so it will have a "flavour", "shape", "location", "sound", or "proprioceptive location". It is harder to consciously memorize things using odours, since those signals take shortcuts within the brain.

 

Let us examine, in turn, two domains in which understanding cognitive load can help you win: Moral Dilemmas and Personal Policy.

 

Moral Games/Dilemmas

In the Dictator game (you are given $20 and can give any amount to a stranger, keeping the rest), the effect of load is negligible.

In the tested versions of the trolley problems (kill / indirectly kill / let die one person to save five), people tend to become less utilitarian when under non-visual load. The assumption is that the higher functions of the brain (in the ventromedial prefrontal cortex) - which integrate higher moral judgement with the emotional "taste buttons" - fail to do their integrating, leaving the "fast thinking", emotional mode as the only one reacting.

Visual information about the problem makes salient the gory aspect of killing someone, along with other low-level features that incline people toward non-utilitarian decisions. So when a visual load requires you to memorize something else, like a bird drawing, you become more utilitarian, since you fail to visualize the one person being killed (whom we visualize more than the five) in as much gory detail (Greene et al., 2011).

Bednar et al. (2012) show that when people play two games simultaneously, the strategy of one spills over into the other. Critically, heuristics useful for both games were used, increasing the likelihood that those heuristics would be suboptimal in each case.

In altruistic donation scenarios, with donations to suffering people at stake, more load increased scope insensitivity (Small et al., 2007); in other words, less load made the donation more proportional to how many people are suffering. Unlike load, priming increases the usable capacity of an area/module: it exercises the module without keeping information stored there, leaving free usable space. Dickert et al. (2010) show that priming for empathy increases the donation amount (but not the decision to donate), whereas priming for calculation decreases it.

Taken together, these studies indicate that, to make people donate more, it is most effective to first prime them to think about how they will feel about themselves and to feel empathy, then have them empathize, non-visually, with someone of their own race. After all that, you have them keep a number and a drawing in mind - and that is the optimal moment to ask for the donation.

Personal Policy

If given a choice between a high-carb food and a low-carb one, people on diets are substantially more likely to choose the high-carb one if they are keeping some information in mind.

Forgetful people, and those with ADHD, know that, for them, out of sight means out of mind. Through luck, intelligence, blind error, or psychological help, they learn to put things, literally, in front of them, to avoid 'losing them' in some corner of their minds. They have a lower storage capacity for executive memory tasks.

Positive psychologists advise us to put our daily tasks, especially the ones we are always reluctant to start, in very visible places. Alternatively, we can make the commitment to start them smaller, but this only works if we actually remember to do them.

Marketers appropriate cognitive load in a terrible way. They know that if we are overwhelmed with information, we are more likely to agree. They give us more information than we need, and we are not left with enough brain to decide well. One more reason to keep advertising out of sight and out of mind.

 

Effective Use of Cognitive Load

Once you understand how it works, it is simple to use cognitive load as a tool:

1) Even if your executive control of activities is fine, externalize as much as you can, by using a calendar and alarms to tell you everything you need to do (a minimal sketch of this follows the list).

2) Do apparently mean things to donors, like the suggestion above.

3) When in need of moral empathy - the fast, emotional, Type 1 "buttons" system - keep numerical and verbal items (like phone numbers) in mind while deciding.

4) When in need of moral utilitarianism, hijack the automatic, Type 1 "taste buttons" system by giving yourself an emotional experience more proportional to the numbers - for instance, when reasoning about the trolley problem, think about each of the five individually, or prick yourself with a needle five times before deciding.

5) When in need of more cognitive calculating capacity, besides having freed yourself from executive tasks, use natural sensory modalities to keep things in mind: not only the classic memory-palace mnemonics (spatial location), but also placing chunks of information in different parts of your body (proprioception) and associating them with textures (Feynman 1985), shapes, and actions.
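As a purely illustrative sketch of point 1, here is one way to externalize a task instead of holding it in working memory, assuming Python 3 and only its standard library (the task text and delay are made up for the example):

```python
# Externalize a to-do item: record it, stop rehearsing it mentally,
# and let the machine interrupt you when it is due.
import time
from datetime import datetime, timedelta

def set_reminder(task: str, delay_seconds: int) -> None:
    """Wait until the reminder is due, then announce the task."""
    due = datetime.now() + timedelta(seconds=delay_seconds)
    print(f"Reminder stored for {due:%H:%M:%S} - you can stop thinking about it now.")
    time.sleep(delay_seconds)  # a real tool would schedule this instead of blocking
    print(f"[{datetime.now():%H:%M:%S}] Do this now: {task}")

set_reminder("Reply to the grant email", delay_seconds=5)
```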


If practising this sometimes looks unnecessary, or immoral, we can remember Max Tegmark's gloomy assessment of science's pervasiveness (or lack thereof) in his answer to the Edge 2011 question. Discussing the dishonesty and marketing of the opponents and defenders of facts/science, he says:
Yet we scientists are often painfully naive, deluding ourselves that just because we think we have the moral high ground, we can somehow defeat this corporate-fundamentalist coalition by using obsolete unscientific strategies. Based on what scientific argument will it make a hoot of a difference if we grumble "we won't stoop that low" and "people need to change" in faculty lunch rooms and recite statistics to journalists?

We scientists have basically been saying "tanks are unethical, so let's fight tanks with swords".

 

To teach people what a scientific concept is and how a scientific lifestyle will improve their lives, we need to go about it scientifically:

We need new science advocacy organizations which use all the same scientific marketing and fundraising tools as the anti-scientific coalition.
We'll need to use many of the tools that make scientists cringe, from ads and lobbying to focus groups that identify the most effective sound bites.
We won't need to stoop all the way down to intellectual dishonesty, however. Because in this battle, we have the most powerful weapon of all on our side: the facts.

 

We'd better start pushing emotional buttons and twisting the mental knobs of people if we want to get something done. Starting with our own.

20 comments

We'd better start pushing emotional buttons and twisting the mental knobs of people if we want to get something done. Starting with our own.

This sounds awfully like endorsing the use of Dark Arts to counter the same. Not that I'd dismiss it out of hand, but wouldn't it be better to find a way to reduce the effectiveness of said arts to begin with? It seems to me that's the primary purpose of most of the Sequences, in fact.

I think Tegmark's claim is unequivocally that we should endorse Dark Artsy subsets of scientific knowledge to promote science and whatever needs promotion (rationality perhaps). So yes, the thing being claimed is the thing you are emotionally inclined to fear/dislike. By him and by me.

Though just to be 100% sure, I'd like to have a brief description of your meaning of "dark arts" to avoid the double transparency fallacy.

The post is endorsing the use of the Dark Arts. From a purely deontological perspective, that's objectionable. From a virtue ethics perspective, it could be seen as stooping (close) to the level of our enemies. From a consequentialist perspective, we need to compare the harm done by using them against the benefits.

To make that comparison, we need to determine what harm the Dark Arts, in and of themselves, cause. It seems to me (though I could certainly be convinced otherwise) that essentially all the harm they cause comes from their use in convincing people to believe falsehoods and to do stupid things. Does anyone have any significant examples of the Dark Arts being harmful independent of what they're being used to convince people of?

Does anyone have any significant examples of the Dark Arts being harmful independent of what they're being used to convince people of?

Dark Arts have externalities. Once you become known as a skilled manipulator fewer people are going to trust you and fewer people you can influence in the long run. Using Dark Arts is a Prisoner's Dilemma defection with all the associated problems - a world full of Dark Artists is worse than a world full of honest truth-sayers, ceteris paribus. Heavy use of Dark Arts may be risky for the performer himself and compromise his own rationality, as it is much easier to use a manipulative technique persuasively if one believes no deception is happening.

These aren't actually examples, but it's hard to come up with a specific example under the "independent of what they're being used to" clause.

Once you become known as a skilled manipulator fewer people are going to trust you and fewer people you can influence in the long run.

This is not what I have observed in practice.

Once you become known as a skilled manipulator fewer people are going to trust you and fewer people you can influence in the long run.

The very long run, perhaps.

In the shorter run of, say, 10-100 years, it isn't in the least clear to me that the advantage of being considered (accurately or not) a skilled manipulator, in terms of the willingness of powerful agents to ally with me, is fully offset (let alone overpowered) by the disadvantage of it, in terms of people being less influenceable by me. Add to that the advantages of actually being a skilled manipulator, and that's even less clear.

Admittedly, if I anticipate having a significantly longer effective lifespan than that, I may prefer not to risk it.

Once you become known as a skilled manipulator fewer people are going to trust you and fewer people you can influence in the long run.

But it seems that people who use the Dark Arts profit from it. If the Dark Arts were self-defeating as you suggest, we wouldn't be having this discussion.

Using Dart Arks is a Prisoner's dilemma defection with all associated problems - a world full of Dark Artists is worse than a world full of honest truth sayers, ceteris paribus.

Continuing to cooperate in a world where most players defect is a poor strategy. I also doubt that it strongly influences the defectors to stop defecting.

[anonymous]

We can't trust brains when taken as a whole.

We are made of brains. A nice swirl off to the side of the brain mistrusting the brain is mistrusting that mistrust.

True, but so what? It's still not trustworthy.

The "so what" is to beware the skepticism fallacy: The notion that, if you always set your credence to "very low", then you have attained the proper level of belief in everything, and so you have discharged your duty to be rational.

[anonymous]

Mistrust of mistrust means not occasional possible trustworthiness, but occasional actuated trustworthiness. My brain being trustworthy on occasion is not a so-what conclusion for me. Out of that comes attempts to identify when those occasions might be and when they are not happening but appear to be happening. I'm using the flaws to identify the strengths.

'My brain always trusts my brain to never be trustworthy' - I think that is what EY just said, but I could be mistaken.

[anonymous]

Is there anything, except brains, that is (non-metaphorically) trustworthy? Nothing else in the universe has any care for the truth.

If we have nothing more seaworthy than an old rotten canoe, the canoe doesn't thereby become a safe means of sailing.

[anonymous]

Point taken, but if the only oceangoing thing we've ever encountered or, in any real detail, imagined is this old rotten canoe, one might be excused for finding the notion that 'this canoe is not seaworthy' a little strange. At that point, we don't even have reason to think that seaworthiness admits of variation, nor do we have any way of disentangling the capacities of this canoe from properties of the ocean.

Though perhaps I've gotten this backwards. Maybe 'the brain is not trustworthy' is intended to be metaphorical language.

one might be excused

This doesn't seem like a relevant concern.

[anonymous]

I'm sorry, I was speaking elliptically. I meant that your canoe metaphor is misleading, because you're suggesting a world in which the only seagoing vessel I know of is this canoe, while at the same time trading on my actual knowledge of much more seaworthy vessels. This is a problem, given that my whole point is 'what meaning can a term like 'trustworthiness' have if we deny it generally to the only thing capable of being trustworthy?'

But I think I've decided to take Neotenic, and EY's comment as a metaphor, so I drop my objection.

By "trustworthiness" I understand something like probability of error, or accuracy or results, just as "seaworthiness" refers to capability of surviving trips of given difficulty. These properties don't depend on availability of better tools, and so absence of better tools is not a relevant consideration in deciding the state of these properties. The absence of better tools might mislead one to overestimate the quality of available tools, but now that we've noticed that, let's stop being misled.

[anonymous]

These properties don't depend on availability of better tools

The properties themselves do not, but that's not the problem. Our ability to identify errors in our reasoning hangs on our ability to get that very reasoning right at some point. And getting it right some of the time isn't enough; we have to know that we got it right in order to know that we previously made an error. Since all we have are brains, we can only say that brains are untrustworthy if some other brains, or the same brain at some other time, are trustworthy (not just correct).

What I mean is that the idea of 'trustworthiness' only has meaning in the sentence 'brains in general are untrustworthy' if that sentence is false. Some brains must be trustworthy some of the time, or else we'd never know the difference. EDIT: And in fact everything we know about trustworthiness, we learned from trustworthy brains.

We can of course wish that brains in general were more trustworthy than they are and that's what I take the original comment to mean.

Our ability to identify errors in our reasoning hangs on our ability to get that very reasoning right at some point.

Careful with "identify" there. If I come up with a proof that 1=2, I can recognize it's not right without thereby also knowing which step is wrong.

[anonymous]

Good point, though it remains that in order to identify 1=2 as an error, we have to be trustworthy in some respect. But you're right that we don't have to get that very bit of reasoning right just to know that we got it wrong.