Related to: Truth seeking is motivated cognition and Contingency is not arbitrary, Do Sufficiently Advanced Agents Use Logic?

I have a weird belief; it's both controversial and obscure:

  1. Motivated Cognition (MC) has a serious chance to be a good epistemology.
  2. Even if MC is wrong, it's very useful for goals which are not about predicting reality. And MC is a path to ideas which don't require MC, but which are hard to introduce without it.
  3. Properly knowing the idea of MC and knowing how to apply it makes any person ~2 times smarter, more perceptive and understanding.
  4. There is an unexpected duality between Bayesian reasoning and MC. I'm not confident in this part of my belief, but I think this should be true at least to some degree.

I want to have a rational and semi-rational discussion about MC. "Semi-rational" meaning you're ready to suspend some judgement to fully appreciate the idea. If this idea is true, it's as important as the Scientific method or Bayes' theorem, so the stakes are very high. The post contains all the cruxes of my belief, so I'm ready for a double-crux. Sorry if I write too arrogantly in a couple of places in the post; I just want to emphasize my point.

I don't want to "shock" people with my belief. I know it doesn't sound nice, but it's original and doesn't hurt to entertain. And I realize that it also sounds too ambitious: what's that part about "2 times smarter", how can I know that, and what does that even mean? I hope this post will be able to clarify that.

I believe that, at the very least, analyzing MC could help us understand people who criticize and disagree with the rationalist community. I think this by itself is important enough.


Intro: what is Motivated Cognition?

Two types of MC

Let's differentiate two types of motivated cognition:

  1. Local wishful thinking is about random wishes that don't take facts into account. "I want to fly, so I jump off a cliff towards injury and death."
  2. Global motivated cognition (MC) is about your most important wish + facts.

Logical thinking is about "logical inferences + facts"; MC is about "wishes + facts".

You can't use logic without all the facts and all the inferences. In the same way, you shouldn't use MC without all the facts and all the wishes.

How can MC possibly work?

Note that typical criticisms of MC may be unfair:

Criticism: "Reality doesn't care about your wishes."

Response: Reality doesn't care about any thinking method either.

Criticism: "You can't know about reality without investigating it."

Response: Wishes can be caused by investigating reality. Also, your "wishing mechanism" is an unexplored part of reality.

Reasons why MC can work without magic:

  1. Maybe it just happens to work in our world, by a weird accident. MC could still count as empiricism and have objective truth. It wouldn't be too crazy.
  2. Maybe MC is equivalent to logical reasoning. Maybe MC is a complicated consequence of logical rules + basic facts about our world.
  3. Maybe you can "exploit" human psychology, i.e. learn a bunch of true facts about the world by knowing some psychological facts and filtering people's opinions.

In this post I give informal arguments for why 1 or 2 or 3 may be the case.

Example: a MC-based belief

Here's an example of a MC-based belief:

I believe that in some important way (most) people have equal intelligence.

This belief is based on a fundamental enough wish: a world where certain people can't exist on certain levels of intelligence would be very weird and sad.

Components of the belief:

  • On one hand, it's simply a description of what I'm interested in: I'm interested in the core of intelligence which is shared among all people. It's a bit like a tautology, as if I'm defining general intelligence as something which everyone shares.
  • On the other hand, it's truly a belief, something that eventually leads to different expectations. And it can be refuted by reality (e.g. if certain individuals have a lot of "general intelligence" skills absent in other people).
  • I'm agnostic about whether my belief can be expressed in Bayesian epistemology: maybe I just have different priors and different reference classes than people who disagree with me. I don't really know if my belief is MC-based or not.
  • I can imagine a possible world where people are equally smart/their intelligence is equally effective. A world which rewards people more often and more diversely. Something like a "no free lunch theorem" world, where there are a lot of puzzles for different types of people. Such a world is deeply important to me, so it feels as if its logical possibility alone should matter.

Note that the same reasoning doesn't apply to the belief "God exists", for a couple of reasons: (1) it's a weird belief, because God is supposed to exist "beyond" our world; (2) we have too much evidence of absence for God; (3) you can argue that the existence of God is not a fundamental wish; wishes about people should come first.


Part 1: Motivated Cognition is interesting

This part of the post explains why Motivated Cognition is interesting to study. Even if it's wrong.

Ethics

Ethics is an example of solid MC.

For example, we don't kill each other because we don't want to.

But that doesn't mean it's an arbitrary wish and we can just start wanting otherwise.

Politics

Many people link politics to MC.

But people do this in caricatured ways, without actually analyzing MC in general.

Human biases

MC is considered to be one of the human biases, obviously.

And yet people don't actually think through what MC is, how it could work and what it implies.

Legal reasoning and a true crime example

The whole point of making laws and arguing in court is to reverse-engineer paths to the conclusions our society wants to reach.

Physics

MC plays a big role in Physics (of all fields) in the form of the pursuit of beauty.

The Truth About Beauty in Physics

Religion

MC gets linked to religion; this is obvious too. The link brings up four types of questions: Who would actually want to believe in [religion X]; is it convenient? What is a believer allowed to want as the best possible thing [e.g. "paradise"]? What can God want? What are the reasons to not want God to exist?

I don't know how much of this has been explored, given that people don't treat MC seriously.

Philosophy

Expected utility (Pascal's Wager)

The idea of expected utility can sometimes give justification to MC.

See: Pascal's wager, Pascal's mugging, the Buddhist wager argument for rebirth, and the Atheist's wager.
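
To make the expected-utility mechanism concrete, here is a minimal toy sketch in Python (the probability and the payoffs are invented placeholders, and it deliberately ignores the standard objections to the wager):

    # Toy expected-utility comparison in the spirit of Pascal's wager.
    # All numbers are made-up placeholders for illustration only.
    p_claim = 0.001                        # tiny credence that the wagered-on claim is true
    payoffs = {                            # (utility if true, utility if false) for each action
        "believe":      (1_000_000, -10),  # huge reward if true, small cost if false
        "dont_believe": (-1_000_000, 0),   # huge loss if true, nothing if false
    }

    def expected_utility(action):
        u_true, u_false = payoffs[action]
        return p_claim * u_true + (1 - p_claim) * u_false

    for action in payoffs:
        print(action, expected_utility(action))
    # A tiny probability times a huge enough payoff dominates the comparison.
    # That is how expected utility can end up "justifying" a motivated belief.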

I'm confused about why this topic has barely gone beyond religion.

Anthropics

Anthropic reasoning is sometimes related to MC in a roundabout and very weak way.

Example 1. If you want to live in a paradise, make sure that you can be born only in a paradise.

Example 2. If parallel worlds exist, then there are versions of you that are infinitely lucky. And versions that are infinitely unlucky (agonizing minds that are barely alive).

Induction

The universal prior may be affected by beings in different universes who pursue their own goals. This is a very strange example of "motivated" cognition.

Rationality

Logical decision theory

In Logical decision theory you decide which logical facts are true based on the payoff.

It is related to MC, even if it's not meant to be.

Modest epistemology

You could say that rejecting modest epistemology is an epistemological decision based on MC. You accept the risk of being wrong in order to use your brain to the fullest.

In general, you can argue that logical reasoning just can't handle all the (meta-)epistemological problems. Parallel universes, infinities, mugging, modesty... at some point you just have to say "I want to live a certain life. So I'm going to assume I can live it. I'm not going to keep entertaining the possibility that I'm a Boltzmann brain or that a random mugger is a Matrix Lord. Even if I have to violate Bayesian updating."

Wishes are not trivial pt. 1

The rationalist community knows that the desires for immortality or "infinite pleasure" are not trivial topics.

Stories about evil genies and monkey's paws teach us that wishes are not trivial.

Ethical paradoxes teach us that wishes are not trivial. The Orthogonality Thesis (in AI) is about wishes, and it contradicts our intuitions.

So, on one hand we laugh at MC, but on the other hand we are deeply confused about wishes because they are as complicated as rocket science. In my opinion it's almost doublethink.

Wishes are not trivial pt. 2

Reality teaches us that wishes are not trivial.

A lot of people die tragically because the bad guys simply fail to want good things for themselves.

There was a time when the bad guys were just kids who wanted good things... but at some point the "wishing mechanism" got broken. And now innocent people die because the bad guys pursue abstractions such as "infinite wealth", "infinite power" or "infinitely conservative values". People turn into paperclip maximizers.

I don't understand how people can laugh at MC while knowing this. Even simply wishing something actually good for yourself is not trivial.

Conclusion

So, if you study ethics, politics, legal systems, physics, religion, human biases or philosophy, it would be rational for you to be interested in Motivated Cognition in general.

MC is also related to skills. When you learn to move your body, you learn to balance wishes and facts. The same goes for learning to make and execute plans. When you play chess you often have to play expecting the opponent to make a mistake, i.e. expecting an illogical thing to happen: fake it till you make it.


Part 2: MC and communication

This part is about the way Motivated Cognition could help us in communication. Even if it's wrong. MC helps me immensely and I just can't imagine living without it.

MC helps me understand and remember opinions

My goal here is not to accuse people of wishful thinking. I'm just saying I use MC to remember and model opinions of others.

Roger Penrose

Roger Penrose has a pretty complicated theory (Orchestrated objective reduction). It's about consciousness, but also about purely mathematical topics and about two physical theories (quantum mechanics + general relativity). I would have a hard time remembering the theory. But I know an easy trick to remember it:

  • Imagine the best possibility (for humans) consistent with today's physics. Imagine the best (for humans) mathematical facts.

You automatically get Penrose's theory.

Eliezer Yudkowsky

Eliezer Yudkowsky has a pretty complicated opinion about ethics (Morality as Fixed Computation). It's about algorithms, but also about probability, math and epistemology. People who know much more about math and algorithms than me struggle to understand Eliezer's idea. However, I know an easy trick to make everything clear:

  • Imagine the best (for humans) version of ethics consistent with Eliezer's simpler beliefs. Related to Eliezer's interests.
  • Imagine the best version of ethics toned down by Eliezer's "pessimism".

You get something like "ethical statements are metaphysical (algorithms), they have the most interesting epistemological properties of everything (math, probability and counterfactual worlds)". Now I can recite Eliezer's opinion even in my sleep.

MC and philosophies

Immanuel Kant

Can you easily remember Immanuel Kant's philosophy in enough detail? I bet many of you can't, especially if you disagree with Kant and don't like philosophy in general. I mean, his philosophy is massive; we have A LOT of ground to cover. However, MC helps me to simplify things:

  • Our perception is more important than the absolute truth. This is a logical necessity. (That's interesting and good for us.)
  • Morality combines choice and obligation. (That's the most interesting possibility.)
  • Moral law is unconditional. (That's convenient.)
  • Morality = logic. Being a bad guy is simply inconsistent. (That's convenient.)
  • There is a mix between a priori and a posteriori knowledge. (That's the most interesting possibility.)
  • Moral law comes from not treating other people as tools. (That's very simple and beautiful.)

That's it, we covered most of it. Simply being smart and being an optimist gives you a very good approximation of Kant's philosophy.

Gottfried Leibniz

Check this out: Monadology. Simple substances, fractals, "causality doesn't exist" and God. Do you understand everything? Does everything click together? I may be mistaken, but this is obscure even within the field of philosophy. However, we have MC to simplify everything:

  • Monism ("everything is a single thing") is convenient. As is idealism.
  • Fractals are convenient (because they are the same on all levels).
  • Causality is not convenient, it's complicated to keep track of and boring. "Harmony" is a more interesting idea.
  • God is the sum of all important things. That's simple and beautiful.

That's it, we covered everything. Again, being smart and being an optimist (and not being afraid to study esoteric ideas) gives you a good approximation. MC allows you to crunch different philosophies in seconds.

Moreover, using MC you can easily merge Kant's and Leibniz's philosophies. Which sadly hasn't been done and it's a shame. If it had been done, I probably wouldn't even need to explain and defend MC.

Classification

So, MC helps to understand, simplify and classify opinions. Not bad.

MC also makes you curious: what is the point of convergence of different opinions? It definitely exists, since we can evaluate at least some opinions as more or less optimistic. What opinion do you get when you don't "tone down" MC with pessimism or specific interests?

But we haven't finished yet. It gets weirder.

MC "predicts" Science

There's a comic by SMBC about Quantum Computing:

https://www.smbc-comics.com/comic/the-talk-3

The moral of the story is supposed to be something like "popular sources are misleading, precise math is the best". (Mom: "For generations physicists had a custom when discussing these matters with outsiders they wanted to avoid being too... graphic. Too explicit. "Gulp" Mathematically precise.")

However, the son's mistaken interpretation... simply sounds less interesting than the truth. I wouldn't want it to be true. If you're given true bits and false bits, it's easy to filter out the false bits using MC. Not necessarily on the first try: you can deal with conflicting information using MC.

So, MC even helps me remember Science. Because misconceptions are boring. And if a misconception is truly interesting... then it's worth making.

Language cooperation and MC

Gricean maxims tell us that conversation is about cooperation. This fact lets us skip a lot of information in our speech and yet understand each other perfectly well. Because we have an agreement to tell each other relevant information and not hide anything important.

For me MC is a natural extension of Gricean maxims:

  1. If we discuss (X), we should discuss the best and the most interesting version of (X).
  2. If we think that the best version of (X) is impossible, we should at least mention it anyway.
  3. If you don't even mention the possibility of the best (X), it means you are not properly aware of it.

Living in a world where people are not familiar with MC sometimes does feel as crazy as living in a world without Gricean maxims. To the point that communication feels impossible.

Steelmanning, avoiding weak men and being charitable somewhat compensate for the absence of MC, but most of the time people don't apply those techniques to the fullest extent.


Part 3: MC and research

This part is about the way Motivated Cognition could help us in research. Even if it's wrong.

The best solution

Let's say you're solving a problem. It may be useful to imagine a perfect solution, a perfect outcome of solving the problem. Even if it's impossible.

However, to imagine the perfect solution to problem A you may need to be aware of the perfect solutions to problems B, C, D, E, F, G and H.

And if you never do MC in any way, shape or form, then you may simply be unable to do that.

Pervasive effects

I think that total neglect of Motivated Cognition has a very negative effect on society:

  1. People avoid MC. People forget what "desirability" means.
  2. Having forgotten about "desirability", people stop desiring to find original theories. People forget what "originality" means.
  3. Without MC, people's empathy towards each other's opinions suffers. People become less aware of each other's ideas. I mean truly, deeply aware.
  4. The absence of "desirability" and "originality" metrics and "empathy" negatively affects people's general comprehension of theories and ideas.

So, the pipeline goes like this: "no desirability -> no originality and empathy -> bad comprehension".

Originality is really dead at this point: "original ideas" is not a concept that exists in our society. People learn a couple of original ideas in physics, math, philosophy and fiction... and then kind of forget that they can expect to find more original ideas? Or that it's a thing you can look for? Instead people try to estimate "truthfulness" directly and fail horribly.

Logical decision theory is arguably the most original idea in Rationality and one of the most original ideas in philosophy in general, but even the LW community doesn't notice it.

Studying perfection

There's one rule I think about: "if you don't assume that X is perfect, you don't study properties of X, you study something else" or "only perfect things can be studied".

If you don't assume that human reasoning is perfect, you don't study human reasoning, you study math or evolutionary psychology. Something else that you do consider to be "perfect".

If you don't assume that humans are perfect, you don't study humans, you study evolution on the planet Earth.

If you don't assume that our Universe is perfect, you don't study our Universe, you study Turing Machines. (e.g. Conway's Game of Life)

If you can't imagine that Motivated Cognition can be a perfect epistemology, then you don't really get curious about Motivated Cognition.

So, if you really want to study X, your reasoning may seem like MC to other people. See also what I wrote about Gricean maxims. For other people MC is insanity, but for me MC is about basic rules of dealing with information.


Part 4: Epistemologies are broken

You may warm up towards Motivated Cognition if you notice that the intuition for "logical reasoning" may be coming from a completely wrong and confused place. A very big part of my belief in MC is based on my disbelief in what people label as "logical reasoning".

Folk Epistemology

Imagine for a second that Bayesian Epistemology doesn't exist. Do you notice something strange in the world?

I do: people don't have any epistemology (null, zero). And yet every single person is dead sure that some mythical "logical reasoning" exists. Not a single person asks any questions.

  • People don't understand the difference between formal and informal logic.
  • People don't realize the difference between logic and epistemology.
  • People don't study argumentation even though they think it's a key skill. In particular, people don't study legal reasoning (argumentation on the level of the law).
  • People don't acknowledge that the field of "correct reasoning"/informal logic doesn't exist.
  • Even in philosophy people barely question logic. Not in a way that would actually affect the field. So, we do have logical pluralism and logical nihilism, but even the article about them is filled with unquestioned argumentation...
  • Nobody asks "Wait, when did I learn logic? What is it? How can I expect to be any good at it?"

The folk belief in "logical reasoning" is far stranger than any religion. Religious people at least disclose the (supposed) sources of their beliefs and what those beliefs mean to them. I'm deeply confused by the folk belief in logic; I can't come up with even an uncharitable explanation. You need zero self-awareness and zero debate experience in order to not question epistemology.

Is Bayesian Epistemology better?

I think Bayesianism is a brilliant formal epistemology. At least it actually exists.

However, in informal areas of human reasoning Bayesianism has a lot of problems. Rationalists may repeat and even "justify" folk mistakes:

  • Instead of looking at the problems a bayesian may double down on the fact that "Bayesian axioms are inescapable, they are pure logic". Similar to how folks double down on "logic is logic, it can't be wrong" without noticing that logic isn't even an epistemology.
  • Just like formal logic, Bayesianism depends on the way you split reality into labels: the reference class problem (see the sketch after this list).
  • Bayesianism doesn't model your reasoning and informal argumentation.
  • A straw bayesian may reason like this: "Yes, I guess I don't know how informal argumentation works. But who needs it when I have the correct epistemology? And anyway, if somebody forced me with a gun to make bets on arguments, I would make those bets. So, I guess I already know how everything works!"
  • The Sequences (the foundational text) don't actually analyze argumentation.
  • Even when bayesians discuss saving the world, they argue like everyone does.
  • Folks and philosophers often question their knowledge and starting assumptions, but rarely question their inferences. But in informal reasoning the process of inference is more complicated than the rules of formal logic. So, paradoxically, actual argumentation never gets questioned. The most interesting parts of arguments never get questioned. And I'm afraid that rationalists inherit this blind spot. It seems rationalists rarely ask "Wait, can I really apply lessons from Bayes' theorem to informal reasoning?"
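
To make the reference class point concrete, here is a minimal sketch (a hypothetical example with invented numbers, not drawn from any real data):

    # The reference class problem: the "probability" you assign to the same event
    # depends on which label you file the case under. Numbers below are invented.
    reference_classes = {
        "30-year-old":                 {"population": 100_000, "events": 150},
        "30-year-old smoker":          {"population": 10_000,  "events": 90},
        "30-year-old smoker who runs": {"population": 1_000,   "events": 3},
    }

    for label, stats in reference_classes.items():
        frequency = stats["events"] / stats["population"]
        print(f"P(event | classified as '{label}') = {frequency:.4f}")
    # Same person, same evidence about them, three different base rates.
    # Probability theory itself doesn't tell you which labeling of reality to use.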

The main problem of epistemology

The main problem of informal epistemology is splitting reality into labels. Without that you can't apply formal logic or Bayes' theorem. But there are no rules for attaching labels to real things.

Motivated Cognition is the only idea I know of that has at least a chance of solving the problem of labels. Because pairs like (label, real thing) have different convenience factors. Which means we can differentiate between such pairs and choose among them.


Part 5: MC has roots in models of argumentation

If you try to model actual human argumentation, you may see how it leads to Motivated Cognition.

I discovered the models below because of MC, not the other way around. However, they retroactively made MC self-evident. Sorry that I don't go into details of each specific model right now (the post would be too big).

Model 0: arguments are relative

Imagine that arguments don't have any intrinsic power, even intrinsic meaning. It's not so hard to imagine. If this is true, you need some meta-thing to give arguments power and meaning.

Motivation can be this meta-thing.

Irony

The most ironic thing may be that "logical reasoning" itself is often best modeled using MC, not logic. Let's illustrate that and also give an example of an argument "without intrinsic meaning":

(a quote from "Trust in God, or, The Riddle of Kyon" by Eliezer Yudkowsky)

"But, the theologian shook his head sadly, and said that the atheist was naive about the emotional depth of the experience of 'faith', that it wasn't a concept invented by culture, but a feeling built into all human beings. In proof of this, the theologian offered the analogy of someone who's told that their lover has been unfaithful to them. If the evidence wasn't conclusive - and if you really loved that person - then you might think of everything they meant to you, and everything that you had done together, and go on putting trust in them. To trust someone because you love them more than anything - we would even call this believing in your lover. This, the theologian said, was the emotional experience at the root of faith, not just a trick of argument to win a debate. That's what an atheist wouldn't understand, because they were treating the whole thing as a logical question, and missing out on the emotional side of everything, like Spock. Someone who has faith is trusting God just like you would trust the one you loved most."

"I think he [the atheist] shook his head sadly, and commented on how wretched it was to invent an imaginary friend to have that relationship with, instead of a real human lover."

Maybe the atheist's argument makes a lot of sense. Maybe it doesn't make any sense. Why posit a (false) dilemma? It's probable that the atheist's argument is best modeled using MC, not logic:

"You should focus on your single most important wish. So, wish to love the very real people near you, not abstract otherworldly entities."

What do you do when MC emulates logical reasoning better than logic does? MC covers the hole in the argument, a hole which would otherwise take a lot of debating to cover. (Related: Warrant and Tortoise vs. Achilles.) And it would save a lot of time for the theologian, too.

A more radical claim: maybe "logical arguments" simply don't exist in informal argumentation, only MC-based ones do. Without MC you can't make sense of any informal debate.

Model 1: choosing concepts

Model 1. Any concept such as "intelligence" has an infinity of versions. In order to reason about the world you need to choose what versions are the most important. You choose based on your previous choices (interests).

This model easily leads to MC.

Model 2: looking for information

Model 2. Instead of theories about the world, you can analyze "statements" about the world. A statement can contain a lot of true information even if it's false. The informativity of a statement doesn't determine its truth, but it affects it. Informativity depends on context.

Context-dependent truth eventually leads to interest-dependent truth (when you choose which context is more important to you), so this model leads to MC too.

Bold conjecture

Maybe any high-level analysis of information leads to Motivated Cognition.

Because inevitably you need to introduce some additional parameter to "truth". Convenience, beauty, closeness to your interests, informativity. This additional parameter can be interpreted as MC.

Model 3: Bayesianism inside out

There's more: you can say that Motivated Cognition is "Bayesianism turned inside out".

Bayesianism assumes you can describe reality in terms of an infinity of micro atomic events with one parameter (probability).

I think you can describe reality in terms of a single macro fuzzy event with two parameters (probability and complexity). For example, this fuzzy event may sound like "everything is going to be alright" (optimism). You treat this macro event as a function and model all other events by iterating it. Iterations increase complexity and decrease probability. It's like doing Fourier analysis, modeling an infinity of functions with a single function (the sinusoid).
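I can only gesture at this with a toy sketch (entirely my own illustration, with arbitrary numbers; it is not a worked-out formalism, just the "iterate one macro event, raising complexity and lowering probability" picture):

    # Toy illustration of "one macro fuzzy event, iterated": each iteration of the
    # base event gets a higher complexity score and a lower probability weight,
    # loosely analogous to higher harmonics carrying less energy in a Fourier series.
    base_event = "everything is going to be alright"
    decay = 0.5  # arbitrary: how fast the weight falls with each iteration

    def iterate(base, depth):
        """Return (description, complexity, unnormalized weight) for each level."""
        levels = []
        for k in range(depth):
            description = base if k == 0 else f"{base}, except for {k} complication(s)"
            levels.append((description, k, decay ** k))
        return levels

    levels = iterate(base_event, depth=5)
    total = sum(weight for _, _, weight in levels)
    for description, complexity, weight in levels:
        print(f"complexity={complexity}  P={weight / total:.3f}  {description}")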

Logical arguments increase bias

Real-life arguments control how much evidence you need to start questioning something. If you think that arguments have intrinsic meaning, you may unconsciously increase your biases. Take a look at this dialog:

  • A: If we were consequentialists, we would do horrible things. Turn everyone into orgasming blobs.
  • B: No, you don't understand, a more complicated version of consequentialism fixes this.
  • A: I think there are still some problems...
  • B: Any concern is, by definition, a consequence, so it gets factored in.

In this situation B needs N times more evidence to start questioning consequentialism compared to A. Or even an infinite amount of evidence. Because B probably treats his choice as "logic" and therefore puts little to no uncertainty in the choice. Yes, you can keep fixing consequentialism, but is it what you should've chosen in the first place?
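
To put rough numbers on "N times more evidence" (the priors below are invented for illustration; the calculation itself is just Bayes' rule in log-odds form):

    import math

    # How much contrary evidence does it take to drag someone from their prior
    # down to 50% on "my framework is the right one"?
    def bits_to_reach_even_odds(prior):
        odds = prior / (1 - prior)
        return math.log2(odds)  # bits of evidence against, needed to reach 1:1 odds

    for name, prior in [("A (holds the view with some uncertainty)", 0.90),
                        ("B (treats the view as 'just logic')", 0.999)]:
        print(f"{name}: prior={prior}, needs about {bits_to_reach_even_odds(prior):.1f} bits against")
    # Roughly 3.2 bits vs 10 bits: in likelihood-ratio terms B needs evidence about
    # a hundred times stronger than A before even reaching the 50/50 point.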

Good or bad, MC gives you razor-sharp awareness of such situations. This topic is also one of the main topics in modern politics. (See "social privilege" and "social constructionism".)


Part 6: MC in basic beliefs

Even when I think about my most trivial opinions, I feel that Motivated Cognition is my core reason for believing (or not believing) something. For example, take a look at this opinion analyzed by Julia Galef:

How to spot a rationalization

A friend of mine had a conversation with a guy recently in which he said that he can't respect a girl if she goes home with him on the first date. My friend asked him why and he said "well, I mean it's so risky, didn't her parents ever teach her not to do something as stupid as going home with a guy she barely knows".

I wasn't there, but if I had been I would have asked him "Okay so you say the reason that you don't respect girls who hook up with you on the first date is that it's a risky thing for them to do, so let's say a friend of yours went out for a walk at night and in a dangerous part of town all by himself - that's risky - would you then lose respect for your friend?". And obviously I didn't ask him this because I wasn't there but I strongly suspect the answer is "no", you wouldn't lose respect for a friend who did that. You might say "dude, that was stupid, you shouldn't do that" but what this points to is the fact that his original reason that he gave for disrespecting girls who hook up with him early on was just a rationalization and that there's actually probably a lot of other things going on there about purity and attitudes about traditional sexual mores.

I would blame the guy for violating Motivated Cognition: judging someone needlessly. You shouldn't lose respect for anyone unless you're forced to (and even then there are ways to not lose 100% of respect).

I can't blame the guy for missing some obscure counterargument. Which doesn't even work unless you want it to work. I believe there are situations where a single argument can tilt you towards an opinion, but I don't think this is such a situation. If the guy answered "Yes", his opinion would still be nonsensical.

Argumentation often feels to me needlessly ad hoc, constraining what we really want to say. The guy's opinion has bad vibes, and we know where those vibes are coming from. Why are we trying to frame it as a miscalculation in some "logic game"?

Believing in Science

Even when I think about scientific theories (evolution, round Earth, General Relativity, etc.), "I believe because I want to believe" seems like the most natural explanation of my belief. MC encapsulates the knowledge, emotions and attitude associated with the belief.

But why? Because I can't quantitatively estimate how much I trust in different types of evidence. But I know how much I want to trust Science. And I know how much I need to believe in order to get anywhere. So I can't fully buy the Bayesian explanation of my trust in Science.

Understanding consensus

MC helps me understand consensus on questions which I can't fully check myself. For example, there's a consensus in AI Alignment research that simple solutions to Alignment don't work. One of the bad solutions is just encoding a bunch of good values into an AI.

Motivated cognition helps me to accept that, because MC is easy to align with the consensus:

  • It's not a solution I would want to work.
  • It doesn't solve the problem I would want to solve.
  • It's not the can of worms I want to think about. (Manually encoding values.)

MC also helps me to accept mathematical results, concepts and arguments, such as "actual infinity" and Cantor's diagonal argument. If I didn't know about MC, I could sympathize more with people who reject infinity. If you think that "logical reasoning" is all there is, then it's hard to go and make such metaphysical commitments.

Propaganda

People think propaganda proves that we need to lean on facts and logical reasoning.

I think propaganda proves the exact opposite: logical reasoning is useless for our species unless you force everyone into some "logical totalitarianism" with a single true source of facts and an obligatory "school of correct thought". If a person is affected by propaganda too much it's a symptom that the person suffers from a deeper thinking mistake. How can you become hateful by encountering a couple of wrong facts?

I think the absence of MC is the reason why some people feel "forced to become hateful by facts". When you forget what you "would want to be true", you forget both what's true and what you want.


Part 7: Emotional arguments

Here I just give two "emotional" arguments for Motivated Cognition. We're nearing the end of the post.

Childhood argument

When you first learn the concept of "logical reasoning" as a kid, it's nonsense. Because you can't possibly know enough to make sense of the concept.

And you can argue that the concept never starts making sense later.

When you're a kid, you use Motivated Cognition: you balance facts and desires. It makes sense because you have nothing else to do.

And you can argue that it never stops making sense.

Edge cases argument

In some edge cases I can physically feel how ethics and truth combine into a single concept, somewhat similar to a chicken-or-the-egg situation... in such cases I think:

Is this opinion too evil to be smart or too stupid to be ethical?

Such edge cases can give intuition for Motivated Cognition. Other edge cases (where it feels like MC would 100% help) happen when the misunderstanding between people is too great:

For example, imagine a person who thinks "humanity should become a mindless orgasming substance, because pleasure is good and a lot of pleasure is even better... you disagree because you haven't tried it" and doesn't see even the possibility of a problem with their opinion. I feel that Motivated Cognition could help bridge the misunderstanding in cases where nothing else can. "I want people to be something more complicated than orgasming blobs. I want to believe in fragile wishes that should be protected, because it's an interesting possibility. I want to believe that choice exists and that it matters."


Part 8: evidence & priors for MC

The last two cruxes of my belief in MC:

The world is ridiculous

Something feels off to me about the world:

  • Our society devalues human experience (and, as a result, human life). In our society knowledge is power, and a hundred years of suffering is "worth" less than some mathematical theorem.
  • There are too few original ideas.
  • Ideas die off too soon. Ideas barely get developed.
  • Society is too fragmented.
  • Intelligence is too fragmented. E.g. mathematical knowledge doesn't give you nearly as much general intelligence as it could have, assuming math is the limit of abstract thinking. And if it's not the limit, then what is?
  • Rationality is less effective than it could be. People find success without rationality. A lot of people are not eager to become full-on rationalists.

LW-rationality implicitly assumes that what you see today is all you can get. The only thing which remains is to optimize the hell out of it and run towards the Singularity.

But I think something fundamental is missing. I think we missed something fundamental about human intelligence. Or maybe I know it:

MC and perception

I experience different qualities of a person as parts of a single experience.

And I experience qualities that are usually thought to be universal (e.g. "kindness") as having different versions for different types of people. Like colors in a spectrum. Or vibes.

Now, what does it have to do with MC being true or false?

  • MC easily justifies such a model of perception/personality. Because it's the best and the most interesting possibility.
  • In MC truth is not an infinite list of facts, but something monistic. Which resonates with a monistic model of perception.
  • In MC you "normalize" everything, analyzing ideas in their own context, not in a universal epistemology. And you get my experiences when you "normalize" your perception.
  • One model of MC talks about choosing versions of concepts. And in my perception I have "versions of experiences". I think that perception and argumentation are aspects of the same thought process.
  • The generalized version of MC leads to an even stronger analogy between perception and argumentation.
  • My experience of people completely contradicts the way we model intelligence and the way we "treat" people.

I wouldn't expect to have such an experience in a world where MC is false.

And this is my most important experience. This experience "by itself" nudges me towards MC even more, which I can't explain without sharing the experience.

Note: probably I'm not updating purely on the possibility of my experience. If you want I can try to get into details.


Part 9: The summary

So, when I think about Motivated Cognition, I think that:

  • It's important to explore even if it's 100% wrong.
  • It's a good way to analyze ideas, opinions and even facts.
  • It's the limit of convergence of many philosophies and opinions. And a path to new ideas.
  • It's the only idea I know of that could solve the main problem of informal epistemology.
  • It's the only idea I know of that can model argumentation.
  • It explains my direct and most important experience.

MC is the only way I know of to reach any important conclusion.

And I don't know why it should be wrong. If it's wrong, we're probably dead.


Part 10: The Multiverse of Truth

So, I seriously think that properly knowing Motivated Cognition makes a person ~2 times smarter. 2 times more understanding and perceptive. And I mean any person, be it someone outside of the rationalist community or a master of Bayesian reasoning. I believe it's true even if MC is wrong.

If you can be convinced of it, what can I do to convince you?

If you are already convinced, what can we do with this knowledge?

...

And if I'm right we can go further than x2, to x4. Because there's a technique that encapsulates and generalizes Motivated Cognition.

What do those numbers mean, x2 and x4? How can I be sure about them? Here's an analogy: imagine you are a very smart person raised outside of civilization. And one day you discover a computer with all kinds of modern programs. You become "at least 2 times smarter". And later you discover the Internet with all the knowledge of humanity. You become "at least 2 times smarter" again. It's not that your IQ doubles; it's that you discover a whole new world of knowledge which is inevitably going to change your general reasoning. You go from walking speed to riding a bicycle to riding a car. From the 17th century to the 21st century.

Beyond Motivated Cognition

Remember I said that informal analysis of information inevitably requires introducing some additional parameter to "truth"? To go further than MC we need to realize: any such additional parameter is still truth itself.

  1. We need to stop thinking about "theories" about the world and start thinking about "statements" about the world.
  2. Each statement is metaphysical: it exists beyond reality and beyond any specific epistemology.
  3. Each statement defines its own epistemology, its own notion of "truth" and exists in its own universe.
  4. Any random statement is a logical fact. You decide if this fact is true or false based on context. Alternatively: you choose the universe where this fact is true or false.
  5. Any two statements are equivalent in a certain epistemology/universe. (Any property of a statement is potentially equivalent to "truth".) Note that all previous points follow from this one, because properties of statements are statements too.

This is the most natural/rich view on truth, because it's more general than all other theories. Which are... parts of the truth, ironically. The Multiverse of Truth.

One unusual consequence of this view: "epistemologies" can be viewed as properties of statements. And "Motivated Cognition" can be interpreted as a property of your statements, not a choice you make. And it's a pretty simple/natural property, so it pops up everywhere. That's why your opinion can be described by MC even if you really used "logical reasoning" in order to come up with it. That's why even physical theories can be approximated by MC in worlds where MC doesn't work. That's why logic can be emulated by MC.

So, Motivated Cognition is just a tiny part of the completely unexplored properties of truth, reasoning methods and ways to enumerate truths. Once you realize it, you can crunch ideas x4 more effectively (compared to normal reasoning). Can we go for x8? I think we absolutely can, if we discover enough new properties of truth. But we all need to start working together.

Shredding ideas

To quickly show you the difference between MC and the generalized method, take a look at this statement: "human ethics are made up, but there are still goals and a good/bad distinction if you pursue joy and self-mastery". It's based on Friedrich Nietzsche's philosophy.

With normal reasoning I would react like this:

  • I disagree.

With MC I would react like this:

  • I guess it's optimistic for radical individualists. But it's pessimistic in the sense that it destroys a lot of interesting concepts.
  • If I'm not a radical individualist, those ideas are not very useful to me.

With the generalized method I would react like this:

  • "There are some truths which follow from the individual's happiness and self-improvement themselves. This is a logical fact in some epistemology."
  • This idea is definitely useful for me even if I'm not a radical individualist and even if I don't buy this idea too much. Actually, to deny this idea would be very pessimistic.
  • Now I see the connection between Buddhism and Nietzsche's philosophy, even though one is about loss of ego and the other defends egoism. Because Buddhism focuses on the individual themselves.

The generalized method allows you to shred ideas into smaller pieces and extract more true bits much faster. (My post about the generalized method.)

P.S.

Without MC I wouldn't be literate. Without a predisposition to MC there wouldn't be any thoughts in my head in the first place.

I can't explain in mathematical terms why MC ends up being useful in this world, but I swear on my life that it is useful. I tried to show it from the first principles as much as I could, even though my first principles are not formal. I believe MC is one of the greatest ideas you can ever learn.

16 comments

Imagine the best possibility (for humans) consistent with today's physics. Imagine the best (for humans) mathematical facts.

No you don't. Penrose's theory is totally abstract computability theory. If it were true, then so what? The best-for-humans facts are something like "alignment is easy, FAI built next week". This only works if Penrose somehow got a total bee in his bonnet about uncomputability: it greatly offended his sensibilities that humans couldn't know everything. Even though we empirically don't. Even though pragmatic psychological bounds are a much tighter constraint than computability. In short, your theory of "motivated cognition" doesn't help predict much. Because you need to assume Penrose's motivations are just as wacky.

Also, you seem to have slid from "motivated cognition works to produce true beliefs/optimize the world" to the much weaker claim of "some people use motivated cognition, you need to understand it to predict their behavior". This is a big jump, and feels motte-and-bailey.

[anonymous]

Late reply, sorry if it's a bother.

No you don't. Penrose's theory is totally abstract computability theory. If it were true, then so what?

I have to say, I can definitely see it, and I even think it's obvious why. Penrose's theory assures that consciousness is the special phenomenon we feel it is; it assures the unity of the mind through a special unity of consciousness; it assures human exceptionality over any automata (we would be, in a relevant sense, NOT automata); and it rescues the idea of (libertarian) 'free will' in the most satisfactory way possible given the basic commitments coming from physics (and this distinguishes us from mere programmed automata).

I always thought that I would prefer his theory to be true. I don't feel bad about it, but it's true.

It is pretty obvious that Penrose has pre-commitments to human exceptionality and the specialness of consciousness (and he has shown them pretty explicitly, by appealing to incompleteness theorems against AI and saying that "it can't just be natural selection"), and that's what motivates the theory. It's not a coincidence.

Suppose human brains turned out to be hypercomputers. (They really, really aren't.) But imagine the world where they were.

We describe what is going on with equations. It's some complicated thing involving counterpolarized pseudoquarks or something. We can recreate this phenomenon in the lab. Recent experiments with sheets of graphene in liquid xenon produce better hypercomputers.

A halting oracle doesn't seem to contain magic "free will" stuff. Its behaviour is formally specified and mathematically determined. A quantum coin toss can already make random data, something no Turing machine can do. (Yet the universe is computable, because a reality goes both ways.) Is a quantum coin conscious? No, it's just a source of randomness.

As soon as the magic mysterious essence of consciousness that Penrose hopes for is actually discovered, it will stop being magical and mysterious, and will be demoted to the dull catalogue of common things. 

[anonymous]

I know all of that, believe me.

 

A halting oracle doesn't seem to contain magic "free will" stuff

 

I know, although that's really what libertarian FW entails in physicalist terms. In fact, funnily enough, yesterday I read Turing's paper Computing Machinery and Intelligence (mostly just to be amused at his theological and ESP counter-arguments) and he literally said that some people describe machines with randomness oracles as having 'free will'.

 

Regardless, a major point of Penrose's theory is that the mind works as a cohesive whole thanks to a quantum superposition of the brain; it's a much more remarkable feature than just having any form of LFW. I don't even get the appeal of LFW anymore.

[anonymous]

Well, there's one thing that I miss about the idea of having LFW (being an indeterministic system). Whenever I think about alternate history scenarios, no matter how plausible they look, the actual probability of their having happened is 0, and that's just boring.

Also, you seem to have slid from "motivated cognition works to produce true beliefs/optimize the world" to the much weaker claim of "some people use motivated cognition, you need to understand it to predict their behavior". This is a big jump, and feels motte-and-bailey.

Most parts of the post are explicitly described as "this is how motivated cognition helps us, even if it's wrong". Stronger claims return later. And your weaker claim (about predicting people) is still strong and interesting enough.

No you don't. Penrose's theory is totally abstract computability theory. If it were true, then so what? The best-for-humans facts are something like "alignment is easy, FAI built next week". This only works if Penrose somehow got a total bee in his bonnet about uncomputability: it greatly offended his sensibilities that humans couldn't know everything. Even though we empirically don't. Even though pragmatic psychological bounds are a much tighter constraint than computability. In short, your theory of "motivated cognition" doesn't help predict much. Because you need to assume Penrose's motivations are just as wacky.

There I talk about the most interesting possibility in the context of physics and math, not Alignment. And I don't fully endorse Penrose's "motivation"; even without Alignment his theory is not the most interesting/important thing to me. I treat Penrose's theory as a local maximum of optimism, not the global maximum. You're right. But this still helps to remember/highlight his opinions.

I'm not sure FAI is the global maximum of optimism either:

  • There may be things that are metaphysically more important. (Something about human intelligence and personality.)
  • We have to take facts into account too. And the facts tell us that MC doesn't help to avoid death and suffering by default. Maybe it could help if it were more widespread.

Those two factors make me think FAI wouldn't be guaranteed if we suddenly learned that "motivated cognition works (for the most part)".

Finally got around to reading this. My general reaction: I can interpret you as saying reasonable, interesting, and important things here, but your presentation is quite sloppy and makes it hard to be convinced by your lines of reasoning, since I'm often not sure you know you're saying what I interpret you to be saying.

Personally, I'd like to see you write a post that covers less ground and makes smaller, more specific claims and gives a more detailed account of them. For example, I left this post still unsure of what you think "motivated cognition" is. I think there's a more interesting discussion to be had, at least initially, by first addressing smaller, more targeted claims and definitions rather than exploring the consequences immediately, since right now there's not enough specificity to really agree or disagree with you without interpolating a lot of details.

Do you think an analysis of more specific arguments, opinions and ideas from the perspective of "motivated cognition" would help? For example, I could try analyzing the most avid critics of LW (SneerClub) through the lens of motivated cognition. Or the argumentation in the Sequences.

My general reaction: I can interpret you as saying reasonable, interesting, and important things here, but your presentation is quite sloppy and makes it hard to be convinced by your lines of reasoning, since I'm often not sure you know you're saying what I interpret you to be saying. (...) right now there's not enough specificity to really agree or disagree with you without interpolating a lot of details.

It may be useful to consider another frame besides "agree/disagree": feeling/not feeling motivated to analyze MC in such depth and in such contexts. Like "I see this fellow (Q Home) analyzes MC in such and such contexts. Would I analyze MC in such contexts, in such depth? If not, why would I stop my analysis at some point?". And if the post inspired any important thoughts, feel free to write about them, even if it turns out that I didn't mean them.

I finished this with a large dose of confusion about what you're trying to say, but with the vague feeling that it's sort of waving in an interesting direction. Sort of like when people are trying to describe Zen. It could be that you're trying to describe an anti-meme. Or that there are too many layers of inference between us. 

My current understanding is that you're trying to say something like "optimism and imagination are good because they help you push through in interesting directions, rather than just stopping at the cold wall of logic". I think I get what the examples are showing, I mainly just don't understand what you mean by MC.

Same. I think my biggest complaint with this post is that motivated cognition is not sufficiently unpacked such that I can see the gears of what you mean by this term.

I notice that my explanation of MC failed somewhere (you're the second person to tell me). So, could you expand on that?

Or that there are too many layers of inference between us.

Maybe we just have different interests or "commitments" to those interests. For example:

  • I'm "a priori" interested in anything that combines motivations and facts. (Explained here why.)
  • I'm interested in high-level argumentation. I notice that Bayesianism doesn't model it much (or any high-level reasoning).
  • Bayesianism often criticizes MC and if MC were true it would be a big hit to Bayesianism. So, the topic of MC is naturally interesting.

If you're "committed" to those interests, you don't react like "I haven't understood this post about MC", you react more like "I have my own thoughts about MC. This post differs from them. Why?" or "I tried to think about MC myself, but I hit the wall. This post claims to make progress. I don't understand - how?" - i.e. because of the commitment you already have thoughts about the topic or interpret the topic through the lens "I should've thought about this myself".

My current understanding is that you're trying to say something like "optimism and imagination are good because they help you push through in interesting directions, rather than just stopping at the cold wall of logic".

Yes, this is one of the usages of optimism (for imagination). But we need commitments to some "philosophical" or conflicting topics to make this interesting. If you're not "a priori" interested in the topic of optimism, then you can just say "optimism for imagination? sure, but I can use anything else for imagination too". Any idea requires an anchor to an already existing topic or a conflict in order to be interesting. Without such anchors a summary of any idea is going to sound empty. (On the meta level, this is one of my arguments for MC: it allows you to perceive information with more conflict, with more anchors.)

Also, maybe your description excludes the possibility that MC could actually work for predictions. I think it's important to not exclude this possibility (even if we think it's false) in order to study MC in the most principled way.

My understanding of MC is "I want this to be true, so I'll believe it is and act accordingly". There are many schools of thought that go this route (witchcraft, the law of attraction, etc.). This has obvious failure modes and tends to have suboptimal results, if your goal is to actually impose your will upon the world. Which is why I'm initially skeptical of MC.

You seem to have a more idiosyncratic usage of MC, where you've put a lot of thought into it and have gone into regions which are unknown to the common man. Sort of like you're waving from the other side of a canyon saying how nice it is there, but all I can see is the chasm under my feet. There is a similar problem with "Rationality" meaning different things to different people at different times in history, which can result in very frustrating conversations.

Could you try tabooing MC?

Classic understanding of MC: "motivation = truth". My understanding of MC: "motivations + facts = truth". (See.)

Yes, my understanding of MC is "unusual". But I think it's fair:

  • The classic understanding was always flawed. Because it always covered just a small part of MC. (See.) You don't need my ideas to see it.
  • You can say my understanding is natural: it's what you get when you treat MC seriously or try to steelman it. And if we never even tried to steelman MC - that's on us.
  • People get criticized for all kinds of MC or "MC-looking" styles of thinking. E.g. for politics. Not only for witchcraft. My definition may be unusual, but it describes an already existing phenomenon.

So it's not that I cooked up a thing which never existed before in any way and slapped a familiar name on it.

Could you try tabooing MC?

Some ideas from the post: (most of them from here)

  • MC is a way to fill the gaps of informal arguments.
  • MC is a way to choose definitions of concepts. Decide what definitions are more important.
  • MC is a way to tie abstract labels to real things. Which is needed in order to apply formal logic or calculate probabilities. (See.)
  • MC is a way to "turn Bayesianism inside-out". (I don't know math, so I can't check it precisely.) You get MC if you try to model reality as a single fuzzy event. (See.) This fuzzy event becomes your "motivation" and you update it and its usage based on facts.
  • MC is a way to add additional parameters to "truth" and "simplicity" (when you can't estimate those directly).

We can explore those. I can analyze an argument in some of those terms (definitions, labels vs. real things, etc.).

[anonymous]

Usually, when you notice that a conversation has turned into a power struggle instead of actually addressing the topic (both can happen at the same time), it mostly becomes a waste of time. The time you choose to spend on one thing is time lost for other things, just like money. Maybe the power struggle part can simply be ignored while you try to be as productive as you can. If all participants are aware of this pattern, then usually the power struggle doesn't happen at all. Generally it's when a conversation is guided by emotions that it deteriorates.

The most common problem is that what people themselves think is productive ends up misaligned with other participants' concept of productivity.

TAG

Are you sure the thing you are talking about is actually motivated cognition, not plain old instrumental rationality?

How does instrumental rationality model informal argumentation/informal reasoning?

In the very general sense, anything is instrumental rationality to me if I believe that it works.