Related to: Joy in the Merely Real, How An Algorithm Feels From Inside, "Science" As Curiosity-Stopper

Your friend tells you that a certain rock formation on Mars looks a lot like a pyramid, and that maybe it was built by aliens in the distant past. You scoff, and respond that a lot of geological processes can produce regular-looking rocks, and in all the other cases like this closer investigation has revealed the rocks to be completely natural. You think this whole conversation is silly and don't want to waste your time on such nonsense. Your friend scoffs and asks:

"Where's your sense of mystery?"


You respond, as you have been taught to do, that your sense of mystery is exactly where it should be, among all of the real non-flimflam mysteries of science. How exactly does photosynthesis happen, what is the relationship between gravity and quantum theory, what is the source of the perturbations in Neptune's orbit? These are the real mysteries, not some bunkum about aliens. And if we cannot learn to take joy in the merely real, our life will be empty indeed.

But do you really believe it?

I loved the Joy in the Merely Real sequence. But it spoke to me because it's one of the things I have the most trouble with. I am the kind of person who would have much more fun reading about the Martian pyramid than about photosynthesis.

And the one shortcoming of Joy in the Merely Real was that it was entirely normative, and not descriptive. It tells me I should reserve my sense of mystery for real science, but doesn't explain why it's so hard to do so, or why most people never even try.

So what is this sense of mystery thing anyway?

I think the sense of mystery (sense of wonder, curiosity, call it what you want) is how the mind's algorithm for determining what problems to work on feels from the inside. Compare this to lust, which is how the mind's algorithm for determining what potential mates to pursue feels from the inside. In both cases, the mind makes a decision based on criteria of its own, which is then presented to consciousness in the form of an emotion. And in both cases, the mind's decision is very often contrary to our best interest - as anyone who's ever fallen for a woman based entirely on her looks can tell you.

What sort of stuff makes us curious? I don't have anything better than introspection to go on, but here are some thoughts:

1. We feel more curious about things that could potentially alter many different beliefs.
2. We feel more curious about things that we feel like we can solve.
3. We feel more curious about things that might give us knowledge other people want but don't have.
4. We feel more curious about things that use the native architecture; that is, the sorts of human-level events and personal interactions our minds evolved to deal with.

So let's go back and consider how the original example - a pyramid on Mars versus photosynthesis - fits each of these criteria:

The pyramid on Mars could alter our worldview completely[1]. We'd have to rework all of our theories about ancient history, astronomy, the origin of civilization, maybe even religion. Learning exactly how photosynthesis works, on the other hand, probably won't make too big a difference. I assume it probably involves some sort of chemistry that sounds a lot like the other chemistry I know. I anticipate that learning more about photosynthesis wouldn't alter any of my beliefs except those directly involving photosynthesis and maybe some obscure biochemical reactions.

Pseudoscience and pseudohistory feel solvable. When you're reading a good pseudoscience book, it feels like you have all the clues and you just have to put them together. If you don't believe me, Google some pseudoscience. You'll find hundreds of webpages by people who think they've discovered the 'secret'. One person who says the pyramid on Mars was made by Atlanteans, another who says it was made by the Babylonian gods, another who says it was made by God to test our faith. On the other hand, I know I can't figure out photosynthesis without already being an expert in chemistry and biology. There's not that tantalizing sense of "I could be the one to figure this out!"

Knowing about a pyramid on Mars means you know more than other people. Most of humankind doesn't think there are any structures on Mars - the fools! And if you were to figure it out, you'd be...one of the greatest scientists ever. The one who proved the existence of intelligent life on other planets. It'd be great! In comparison, knowing about photosynthesis makes you one of a few thousand boring chemist types who also know about photosynthesis. Even if you're the first person to discover something new about it, the only people likely to care are...a few thousand boring chemist types.

And the pyramid deals in human-level problems: civilizations, monuments, collapse. Photosynthesis is a matter of equations and chemical reactions, which are much harder for most people to relate to.

Evolutionarily, all these criteria make sense. Of course you should spend more time on a problem if you're likely to solve it and the solution will be very important. And when you're a hunter-gatherer, all your problems are going to be on the human level, so you might as well direct your sense of mystery there. But the algorithm is unsuited to modern-day science, where interesting discoveries are usually several inferential distances away in highly specialized domains and don't directly relate to the human level at all.

Again, compare this to lust. In the evolutionary era, mating with a woman with wide hips was quite adaptive for a male. Nowadays, with the advent of the Caesarean section, not so much. Nowadays it's probably most important for him to choose a mate whom he can tolerate for more than a few years so he doesn't end up divorced. But the mental algorithms whose result outputs as lust don't know that, so they end up making him weak-kneed for some wide-hipped woman with a terrible personality. This isn't something to feel guilty about. It's just something he needs to be wary of and devote some of his willpower resources toward fighting.

The practical take-home advice, for me at least, is to treat curiosity in the same way. For a while, I felt genuinely guilty about my attraction to pseudohistory, as if it were some kind of moral flaw. It's not, any more than feeling lust towards someone you don't like is a moral flaw. They're both just misplaced drives, and all you can do is ignore, sublimate, or redirect them[2].

The great thing about lust is that satisfying your unconscious feelings and satisfying your conscious ones don't have to be mutually exclusive. Sometimes somebody comes around who's both beautiful and the sort of person you want to spend the rest of your life with. Problem solved. Other times, once your conscious mind commits to someone, your unconscious mind eventually starts coming around. These are the only two solutions I've found for the curiosity problem, too.

The other practical take-home advice here is for anyone whose job is educating others about science. Their job is going to be a lot easier if they can take advantage of this sense of mystery. The best science teachers I know do this. They emphasize the places where science produces counterintuitive, worldview-changing results. They present their information in the form of puzzles just difficult enough for their students to solve with a bit of effort. They try to pique their students' interest with tales of the unusual or impressive. And they use metaphors that engage the native architecture of human minds: talking about search algorithms in terms of water flowing downhill, for example.
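To make the water-flowing-downhill metaphor a little more concrete, here is a minimal sketch (mine, not anything from the teachers mentioned above) of the kind of search algorithm such a metaphor describes: a greedy local descent that keeps stepping to whichever neighboring point is lower, the way water settles into a basin. The landscape function, step size, and starting point are made up purely for illustration.

```python
# A rough sketch of the "water flowing downhill" metaphor for search:
# greedy local descent on a one-dimensional landscape. The landscape
# function, step size, and starting point are arbitrary illustrations.

def landscape(x):
    """Height of the terrain at position x; lower is 'downhill'."""
    return (x - 3) ** 2 + 2

def flow_downhill(x, step=0.1, max_steps=1000):
    """Repeatedly move to whichever neighboring point is lower, the way
    water trickles into the nearest basin (greedy local search)."""
    for _ in range(max_steps):
        here = landscape(x)
        left, right = landscape(x - step), landscape(x + step)
        if min(left, right) >= here:   # no lower neighbor: we're in a basin
            return x
        x = x - step if left < right else x + step
    return x

print(flow_downhill(0.0))  # settles near x = 3, the bottom of this basin
```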

I hope that any work that gets done on Less Wrong involving synchronizing conscious and unconscious feelings and fighting akrasia can be applied to this issue too.

 

Footnotes:

1: The brain seems generally bad at dealing with tiny probabilities of huge payoffs. It may be that the payoff, measured in size of paradigm shift, from any paranormal belief being true is just so high that people aren't very good at discounting for the very small chance of it being true.

2: One big question I'm still uncertain about: why do some people, despite it all, find science really interesting? How come this is sometimes true of one science and not others? I have a friend who loves physics and desperately wants to solve its open questions, but whose eyes glaze over every time she hears about biology - what's up with that?

Comments (55)

The martian pyramid theory doesn't hold any attraction for me. Maybe it's because I'm supremely rational. But maybe it's because, if someone really discovered a pyramid built on Mars, it would be extremely... irritating.

I've got decades of hard work invested in learning this whole big scheme of how the world works. The Martian pyramid would knock a lot of it down. And then I'd hardly be any better than anybody else.

That's a very good point, especially because most smart people who really understand science share your opinions.

I like your last point - that you feel better than everyone else now, because you do get real science, and so you'd hate to have everything change around. Maybe the people who do like the idea of a Martian pyramid are those who currently feel worse than everyone else, but who would suddenly become better than everyone else if the pyramid were proven real, because they're the only ones who have studied it.

That would also explain why I find some pseudohistory interesting but most pseudoscience just plain annoying.

Possible experiment to test this: give two randomly assigned groups a test on biology and medicine. Give one group a really easy test and tell them they're within the top 10% regarding medical knowledge; give the other group a really hard test and tell them they're within the bottom 10%. Then give both groups an article on some ancient natural alternative medicine treatment and see which group is more positive towards it. I predict the people convinced they know nothing about medicine will support the alternative treatment more, regardless of how much they actually know.
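For what it's worth, a minimal sketch of the assignment and comparison this experiment calls for might look like the following. Everything in it (the group names, the 1-7 positivity scale, the bare difference in means) is an assumption for illustration, not a worked-out design, and no data are included.

```python
# Sketch of the proposed experiment's skeleton: random assignment to the two
# conditions, then a comparison of mean positivity toward the alternative
# treatment. Group names, the rating scale, and the plain difference in means
# are illustrative assumptions only.
import random
from statistics import mean

def assign_groups(participants):
    """Randomly split participants into the 'easy test, told top 10%' and
    'hard test, told bottom 10%' conditions."""
    shuffled = list(participants)
    random.shuffle(shuffled)
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]

def positivity_gap(bottom_group_ratings, top_group_ratings):
    """Difference in mean positivity ratings (e.g. on a 1-7 scale) toward the
    alternative treatment. The prediction above is that this comes out
    positive: the group told they know nothing rates the treatment higher."""
    return mean(bottom_group_ratings) - mean(top_group_ratings)
```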

I like your last point - that you feel better than everyone else now, because you do get real science, and so you'd hate to have everything change around.

That was my first thought, because you know it's very sexy on LW and OB to attribute your thinking to status signalling.

But I don't think that's it. I'm going to reclaim the rational high ground. I've seen lots of examples of the kinds of theories of the world that lead to pyramids on Mars, lost civilizations in the Atlantic, hollow Earth, auras, divination, etc.

They're ugly theories. The pyramid on Mars is not enticing, because it would lend support to ugly theories and pull the rug out from under beautiful theories.

(If the pyramids on Mars were built by an ancient Martian civilization, then, fine. But if they were built by spacefaring aliens who visited the Egyptians - or, worse yet, by spacefaring Egyptians - not so fine. A human face on Mars would be even worse.)

(Gene transfer by bacterial conjugation is a little bit ugly, because it makes it a lot harder to predict things from evolutionary theory, and to make all sorts of inferences. I was going to give that as an example, but realized it isn't the same thing at all. It makes the empirical realization of your theory messier, but it doesn't force you to adopt a different, uglier theory.)

Gene transfer also resolves some very puzzling and ugly irregularities. Sometimes the beauty isn't just the theory, but its relationship to data. If a theory's very elegant, but the data too messy, it disturbs my sense of completion.

I disagree strongly with you about who is better, if there are artificial pyramids on Mars.

If you have received sufficient evidence to be pretty certain, you are acting rationally in rejecting the notion of pyramids on Mars up to the point (and only up to that point) where you receive more convincing evidence that there are pyramids on Mars. In that case you should (a) gather more evidence and decide which position is correct, or (b) switch pretty much immediately if there really are pyramids.

In particular I claim you have basically zero evidence against Martian pyramids except the general heuristic of Occam's razor.

Also, abstaining from making public your (uninformed) opinion on Martian pyramids would reduce your credibility loss in case there are any.

Finally, science will not all turn out "wrong" just because there are Martian pyramids; most of it still stands as it is.

I was probably reading too much into the example, assuming that "pyramids on Mars" was supposed to stand for more anti-scientific things like human faces on Mars, hieroglyphic inscriptions on Mars, etc.

Pyramids or canals on Mars would be OK, as long as they're built by Martians. That would even be exciting. My sense is that the Weekly World News wouldn't run a story on anything on Mars unless it connected with ancient Earth civilizations. Or Batboy.

BTW, there are some natural pyramids on Earth. Very small ones, inside caves, as crystals.

You may find some statistical effect, but why not give them all the same test, and rank them on their actual knowledge of biology and medicine?

I think the concept of feeling better or worse than the majority is important to the experiment. Being told that you suck is more convincing after a hard test regardless of how much you actually know.

Seems too potentially confounding. How would you distinguish between:

a) A person who knows a lot about biology wants to promote scientific values in order to become a more valuable person.

and b) A person who knows a lot about biology is smart enough not to believe pseudoscience involving biology?

pjeby:

But the mental algorithms whose result outputs as lust don't know that, so they end up making you weak-kneed for some big-breasted wide-hipped woman with a terrible personality.

You know, some of us are "face men" -- men who are turned on (or off) by faces. I literally can't tell if a woman's attractive to me or not unless I can see her face, which makes many popular jokes (e.g. "putting a bag over her head") and supposedly "hot" women incomprehensible to me. Women with mean or vapid faces turn me off no matter what their bodies look like.

(I actually used to be really confused by the notions of "leg man", "breast man", etc. before I found out this other category existed; I couldn't figure out what category I was in. Wish I could remember where I heard about it, though.)

(I actually used to be really confused by the notions of "leg man", "breast man", etc.

I find these categories make me hungry for fried chicken.

The categories can get stranger than that, even. Personally, I dig long hair. Some people like short people. Some people like tan skin. Some people like personalities and are genuinely attracted to successful people. (Like Rockstars.)

Weird, eh?

This applies to curiosity as well. I am curious about the answers to puzzles. I am curious about things that other people may have wrong. I am curious about why I do certain things. Not everyone has this same list of curiosities (which is pretty similar to Yvain's list). Some people are curious about trivia. Some are curious about what their significant other is doing. Some people are curious about how well their fantasy league is doing. Some people are curious if they have more email.

I do not know that all of those examples fit having a sense of mystery but I do not think they can be meaningfully abstracted into a list such as:

  1. Learning new things
  2. Learning obscure things
  3. Learning hard things
  4. Learning useful things

It seems that it would describe the list well enough, but it seems more of a description than a definition. To use the Mars pyramid as an example: I am also curious about the Mars pyramid. But my curiosity has nothing to do with aliens. I just think it is interesting. Of course, I think the pyramid shaped rock that I just ran over with the lawnmower was interesting, too. I am curious about both simply because they are there. My sense of mystery is fully activated and I want to learn.

To break apart the description further, another example that may (or may not) stretch the definition of curious is this: I like to sit in the shower and feel the water on my skin. I am curious about it, even though there is no question being answered and no information being learned. My sense of mystery is present, but there is nothing new or useful in the process. It is purely aesthetic. Is this curiosity? I claim it fits.

On the other hand, obsessing about a rock on a planet because you have an answer in your head that you have been trying to find evidence for is not curiosity. It is not asking questions about things. It is trying to find questions to match the answer you already know. I would argue there is no sense of mystery there at all.

A couple things, both somewhat peripheral to the main point:

1) Is there actually some adaptive benefit to large breasts? I seem to remember reading somewhere that there isn't and they're only common in humans because guys select for them.

2) This is an awfully straight-male-centric post. I realize that this is a community which probably consists mostly of straight (or at least bi) males, but I would have appreciated some explicit parenthetical remark or footnote noting that assumption.

With as much awareness as a footnote or parenthetical would take, the post could have been edited from

as anyone who's ever fallen for a woman based entirely on her looks can tell you.

to

as anyone who's ever fallen for someone based entirely on looks can tell you.

without changing the intended meaning one iota, and easily making the entire post more friendly to many who might have felt slighted (or just left out) by the original.

1) I've heard some people say that it involves ability to produce milk for the baby, and others say that it's a signal of health (ie evolution wouldn't concentrate resources there unless all the other organs were already healthy). I only mentioned the first possibility in this post, which on second thought was overly simplistic.

2) You mean in relation to the evo psych of lust? Yes, I suppose that's true; as I said, a lot of this post is based on introspection, and when I introspect about the sex drive I tend to think about the straight male sex drive for obvious reasons. I don't really know what kind of footnote I could include besides "And females also exist and also fall in love for stupid reasons sometimes", but if there's something specific which you think needs to be said I'll add it in.

(AFAIK, there's no good research or theorizing on the evolutionary psychology of how homosexuals select partners, and the evolution of homosexuality is still a confusing and controversial field. Would be interested if you know of anything.)

1) Women with small breasts have no problem breastfeeding, although they do have to do it more frequently - it's possible that in some environments infrequent nursing was an advantage, although I'm skeptical that this alone could be responsible for the male fixation on breasts over other indicators.

2) A footnote or parenthetical saying something like "Note: The word "you" actually refers to you only if you are a straight or bisexual male" would be fine.

I have no special knowledge on how same-sex partners are generally selected, just my data point from being bi myself.

Gah. I understand now. Second-person writing strikes again. Edited to third person, and all references to breasts removed and replaced with references to wide hips, which as far as I know everyone agrees have a clear evolutionary benefit.

The appearance of wide hips also signals hip fat deposits, which help with offspring development. Not everyone has a C-section - and the operation has risks. Also, with sexually-selected traits, there is little point in bucking the trend - if you choose to mate with narrow-hipped girls, they will tend to produce narrow-hipped offspring which then no-one else finds attractive. My assessment would be that apparent hip width is still quite a useful heuristic.

1) I've heard some people say that it involves ability to produce milk for the baby, and others say that it's a signal of health (ie evolution wouldn't concentrate resources there unless all the other organs were already healthy). I only mentioned the first possibility in this post, which on second thought was overly simplistic.

Here's a somewhat more plausible explanation for how the breast-selection cycle got started in humans.

Edit: fixed link

Also somewhat off-point but I'll post this anyways:

I'll agree that this post is "male-centric" in the sense that Alicorn is talking about. But I don't think it's fatally so, since any smart woman (or homosexual, for that matter) can of course recognize that the mate-choosing parallel is arbitrarily chosen, and can make the necessary changes to create a parallel example in mate-choosing that's more relevant to his/her own life.

However, as a heterosexual man myself, I'll respond directly to the example.

I must admit that I take a lot of pleasure in choosing women for "the wrong reasons" aka appearance (especially wide hips!). I'm wise enough now in my dotage to know that I'm getting myself into a world of trouble. But I tend to enjoy that trouble. It's part of the peculiar spice of life as far as I am concerned. I'd rather have knock-down drag-out shouting matches and a relationship doomed to failure with a crazy, beautiful woman, than be with a "perfect partner" that inspires less viscerality in me. And even if offered the lucky match of sexy AND smart/wise/kind, I might even go so far as to say I'd prefer beautiful and difficult to beautiful and well-matched.... maybe. It does lend a fun tang to life.

But perhaps this only reinforces Yvain's point. Those are my particular criteria, and as a rational person I am best served by seeking out women who fit those criteria, knowing they are what make me most happy.

What kind of (pseudo)scientist would I be then, to carry out this parallel? The kind that climbs Martian pyramids just because they are there, knowing full well it's exceedingly unlikely they were made by aliens? Hmmm....

\5. We feel more curious about things we feel other people are trying to keep from us.

Which suggests pseudoscience and conspiracy theories may share causal roots.

EDIT: Sorry about the backslash, but when I tried writing 5 without it, the stupid software kept changing it to a 1 when I posted.

If you put the backslash between the number and period, it will not produce a list.

5. testing

I got that from this markdown page. A page like that should be added to the help.

Funny, the most attractive pseudohistories to intellectuals, such as "The Origin of Consciousness in the Breakdown of the Bicameral Mind" and its novelization "Snow Crash", and the most attractive to non-intellectuals, such as "Holy Blood, Holy Grail" and its novelization "The Da Vinci Code", don't have terribly large implications. (Well, the real history part, about Christianity having a history, does, but not the rest.)

Aliens are generally very boring IMHO, as I don't even feel that they make predictions.

2: One big question I'm still uncertain about: why do some people, despite it all, find science really interesting? How come this is sometimes true of one science and not others? I have a friend who loves physics and desperately wants to solve its open questions, but whose eyes glaze over every time she hears about biology - what's up with that?

I am one of those people who find science really interesting, although this is not limited to one or two disciplines, so I can't shed any light for you there.

I find science to be interesting because I think that it is a process that discovers truth, meaning correct correspondence to reality. Understanding the objective world better allows us to interact with it more efficiently - look at what engineers have accomplished. Truth allows us to win.

Edit: Let me clarify: I find the results of science fascinating and revealing, but I think the actual work you have to do to further science can be quite tedious. I want to glean the insights, but I know I can direct my efforts elsewhere more effectively.

I find science to be interesting because I think that it is a process that discovers truth, meaning correct correspondence to reality.

I agree: this is the foundation of the motivation to study and learn, and the source of mystery. All scientists, mathematicians, philosophers have this same goal, they just differ in which topics and questions they think are (a) most important and (b) most effective for updating the map.

(a) Roughly, a biologist thinks "life" is the thing worth understanding, a physicist thinks that the basic laws of the universe are the keys to the city of knowledge, etc.

(b) You also have to factor in how much success can be had in a particular subject in your lifetime. I would have been more interested in psychology (I had a budding interest in high school), but I was uncomfortable with the shades of gray. I preferred the rigor of mathematics -- even if the conclusions were just about numbers rather than something that really matters, like people.

Regarding the tedium of science: while mystery "should" be the main motivation in studying the world and hopefully is at least responsible for choosing which field you go into, there are of course other things that motivate on a day-to-day basis: ambition, competition, the joy of mastering a subject.

All scientists, mathematicians, philosophers have this same goal, they just differ in which topics and questions they think are (a) most important and (b) most effective for updating the map.

I think comparative advantage plays a role in this. If you happen to be good at numbers for whatever reason, you go into a more quantitative field, and that is how you can best contribute to the expanding frontier of knowledge.

I would have been more interested in psychology (I had a budding interest in high school), but I was uncomfortable with the shades of gray. I preferred the rigor of mathematics -- even if the conclusions were just about numbers rather than something that really matters, like people.

I was very mathematically inclined from a young age, but after learning calculus I began to look elsewhere to further my knowledge. I turned towards the social sciences, which led me down a less quantitative path. I think the difference is between the study of physical systems of relatively few variables and that of complex biological systems with many interactions between many individual components. Simple mathematical precision is infeasible when dealing with that kind of complexity; it requires a different set of analytical skills and tools, which I found myself much more inclined towards.

And this completes another synthesis:

  • Motivation, curiosity, joy as experience of the attention-allocation algorithm, that is largely separate from
  • Goodness, sympathy, fairness, righteousness, rightness, shouldness, that come as experience of the preference algorithm, that computes which action one is to choose, which policy to encourage.

The first is influenced by the second, but isn't identified with it. And thus, one should learn to process the two without fear of mutual contamination, gaining proficiency in both modes of valuation to improve the efficiency of their interaction.

While the preference intrinsically wants to be self-reflective, to become the eternal objective goal, the motivation is in the moment, malleable, responsive to what's currently relevant, directed at what's currently possible. And while you don't want to start preferring the thing Y just because it's easy to do (even if it's a worse option than X), you still might want to motivate yourself to work on Y.

There are three phenomena at play: preference, motivation, and action.

Most of the time, preference directly influences motivation. Motivation naturally drives action. Where preference fights motivation over the control of action, we say that "willpower" is used. Where motivation habitually wins over preferences, we call it akrasia, in particular procrastination. Motivation completely taking over preference seems to correspond to the Affective death spirals.

A more efficient course of action, compared to the use of willpower, seems to be to work directly on motivation.

Another weak hypothesis: maybe the discount rates are a result of confusing motivation with preference, with motivation having the upper hand.

Most of the time, preference directly influences motivation. Motivation naturally drives action. Where preference fights motivation over the control of action, we say that "willpower" is used. Where motivation habitually wins over preferences, we call it akrasia, in particular procrastination. Motivation completely taking over preference seems to correspond to the Affective death spirals.

I agree with most of your comment, but I have one small nit to pick with this. I don't think preference directly influences motivation, so much as preference just represents the desires of our idealized self-image. That is, it's what we want to be motivated by.

Some motivations we're fine with identifying as part of our self-image, so they are also our preferences. But this is not a case of preference influencing motivation; in fact, it's the other way around!

In an area where we don't have any existing motivation, a preference can be used to help us build one. But if there's a conflicting motivation already in place, then it can be tough.

In general, I say that akrasia is really anosognosia of the will: i.e. believing that your preference should motivate you, and creating elaborate explanations for why it doesn't, when in fact it's barely related. Motivations exist in a different schema, and have to be created, modified, or deleted independently of our preferences.

(Anyway, an excellent comment overall; I just think this one small nit is pretty important to point out as well.)

(I'm going to break protocol in order to place a data point.)

PJEby, I can't improve my understanding of things by reading your writings. You assert much without connecting with other people's constructions, and performing translation is difficult. This creates additional annoyance for me, because it's difficult to respond as well, and more so to dispute. And the more you write, the harder it becomes.

I can't give feedback through rating either, because I don't find your writing rewarding enough to try to understand what you say in your own language, and not understanding what you wrote makes it hard to decide whether a particular comment of yours was valuable.

Cyan:

I offer another data point. Right now I'm struggling mightily with procrastination, and pjeby's post of 26 April 2009 04:38:27PM is perfectly comprehensible to me because it closely mirrors my internal experience of the difference between preference and motivation. I've also had the experience of believing that my preference should motivate me, and creating elaborate explanations for why it doesn't.

I wonder if you have to feel like your mind's in dysfunction before the things pjeby writes seem plausible -- that is, maybe the inferential distance is too far for a properly working mind.

Agree. And pjeby's comments are long which makes it a little tedious for me to scroll past them.

Agree. The post about backing your advice with concrete experimental results was directed at PJ Eby to some extent, as well as the part about finding true causal models by looking at what a majority of the scientists in a given field use. This is what makes PJ Eby's advice less useful than what we're accustomed to.

concrete experimental results ... finding true causal models by looking at what a majority of the scientists in a given field use

Then the widespread use of NLP-based approaches in marketing and pickup -- both fields that demand actual performance in motivating the behavior of strangers -- should be suggestive of what sort of model you should be looking at when you want to study applied motivational psychology. It is definitely "less wrong" than the hideously naive theories of mind in use in current cog-psych literature, or for that matter, most pop-psych literature -- they sadly reflect each other to a large extent.

Truly modern ideas in cog psych and neuro psych (like the "somatic marker hypothesis") are only now (in the 21st century) catching up to things that were in NLP Volume I almost 30 years ago. See e.g.: this recent keynote cog psych paper compared to what the earliest NLP books have to say about the physiology of emotional states. Mainstream psychology has barely started to catch up, while NLP hasn't stayed still.

Of course, depending on your definitions of "true" and "experimental", you may have to wait another 30 years or so for mainstream psych research to catch up with models that actually work.

the widespread use

Does not count for as much as a formal experiment, and if a formal experiment done correctly and with good statistics fails to confirm the claim, it overrides all evidence from widespread use.

I'm inclined to take practice within marketing, and probably also within pick-up, fairly seriously (though, yes, less seriously than well-done sequences of formal experiments). These are communities that track results, care about results, and make some efforts at rationality and experiment.

Are there formal experiments that indicate the claims are false? What particular claims are in dispute?

Most of the "evidence" presented in favor of NLP is presented by people who have very strong financial incentives to get you to continue to buy their NLP stuff.

Now, line-by-line:

I don't think preference directly influences motivation, so much as preference just represents the desires of our idealized self-image. That is, it's what we want to be motivated by.

What I mean by preference is a valuation of how I want the world to be. It's not about cognitive ritual, although cognitive ritual, as a part of the world, may also be mentioned there. Preference is not the sort of thing that does anything, it is a statement of what I think should be done. Through the activity of the mind, the way preference is may influence other things, and conversely, other things may influence preference, as in the case of wireheading, for example. (This is an invitation to synchronize the definitions.)

Some motivations we're fine with identifying as part of our self-image, so they are also our preferences. But this is not a case of preference influencing motivation; in fact, it's the other way around!

This is a statement in connotation opposite to one that triggered my comment in the first place, see here. How do you recognize which motivations you choose to identify with, and which you don't? I guess in this model, the criterion may be said to derive from that very preference stuff.

In general, I say that akrasia is really anosognosia of the will: i.e. believing that your preference should motivate you, and creating elaborate explanations for why it doesn't, when in fact it's barely related.

How is this a fact? From my perspective, we are groping in the dark at this point, so any statement should either be intuitive, as raw material to build upon, generated from being primed by a representative sample of data, to give any chance of showing true regularities, or study the few true regularities that can be supported.

I don't understand the relation between preference, motivation, shouldness, influence, and facts that you are making in the above quoted sentence.

Motivations exist in a different schema, and have to be created, modified, or deleted independently of our preferences.

What's a 'schema', what kind of object is this motivation thing that can be created or deleted? Are there many of them if one can be created and deleted? What role do they play in the cognitive algorithm? If there is no relation between preferences and these motivation instances, what is the role played respectively by preference and emotion in the overall algorithm?

What I mean by preference is a valuation of how I want the world to be. It's not about cognitive ritual, although cognitive ritual, as a part of the world, may also be mentioned there. Preference is not the sort of thing that does anything, it is a statement of what I think should be done. Through the activity of the mind, the way preference is may influence other things, and conversely, other things may influence preference, as in the case of wireheading, for example.

I don't understand what you mean by "cognitive ritual".

This is a statement in connotation opposite to one that triggered my comment in the first place, see here. How do you recognize which motivations you choose to identify with, and which you don't? I guess in this model, the criterion may be said to derive from that very preference stuff.

I couldn't make heads or tails of that comment, sorry. I'm not entirely sure I understand what you wrote here, either, except that it sounds like you think we "choose" to identify with things. My observation is that choice is not the default -- we have the ability to choose, but mostly, we don't use it, and when we think we are, we are mostly lying to ourselves.

This doesn't much connect to standard theories or intuition, for the same reason that relativity doesn't: it's correct over a wider range of conditions than our default intuitions. If you view minds through a mechanical lens, their behaviors don't require such complex explanations.

How is this a fact? From my perspective, we are groping in the dark at this point, so any statement should either be intuitive, as raw material to build upon, generated from being primed by a representative sample of data, to give any chance of showing true regularities, or study the few true regularities that can be supported.

I say that it's a fact our preferences are barely related to our motivations because it's trivial to show that they function independently -- you've pointed this out yourself. That most people fail to change their motivation by modifying their preferences is more than sufficient to demonstrate the lack of connection in practice between these two brain functions. (See also the near/far distinction.)

I don't understand the relation between preference, motivation, shouldness, influence, and facts that you are making in the above quoted sentence.

By "should" I mean expecting that merely having a preference will automatically mean you have corresponding motivation, or that the lack of ability to enforce your preference over your motivation equals a personal failure -- it merely reflects the "design" parameters of the systems involved. There is no evolutionary reason for us to have control over our motivations, since they exist to control us -- to shape us to the world we find ourselves in.

What's a 'schema', what kind of object is this motivation thing that can be created or deleted?

By schema here, I'm referring to "near" vs. "far" thinking. Action versus abstraction.

A motivation is simply an emotional response attached to an outcome or behavior, through conditioning or simple association.

Are there many of them if one can be created and deleted?

Yep.

What role do they play in the cognitive algorithm?

They drive the planning process, which we experience as "motivation". See, for example, the video I and others have linked here before, which demonstrates how to induce a (temporary) motivation state to clean your desk. There is a lot of deep theory behind that video, virtually none of which is present in the video.

If there is no relation between preferences and these motivation instances, what is the role played respectively by preference and emotion in the overall algorithm?

If you really care about deep understanding of the "cognitive algorithm", you would be well advised to read "NLP Volume I", which explains the model I use quite well. As its subtitle calls it, "the study of the structure of subjective experience" -- i.e., what algorithms feel like from the inside.

The motivation video I made demonstrates one simple algorithm ("strategy" in NLP lingo) that is conveyed in terms of sensory representation ("near" thinking) steps. This is because most of our actual cognitive processing consists of manipulating sensory data, both in and out of consciousness. Verbal processing gives us flexibility and suggestibility, but a huge part of our outward verbalization is devoted to making up plausible explanations and things that make us sound good. And it is driven by our motivations (including hard-wired and trained status motivations), rather than being a source of motivations.

The distinction can be easily seen in my video, as I demonstrate using verbal thinking merely to suggest and "lead" the near system to evoke certain sensory representations in visual and kinesthetic form, rather than by trying to "talk" oneself into doing something through logic or slogans.

Btw, a lot of the disconnect that you're experiencing from my writing is simply that if you care more about theory than practice, you need to read a hell of a lot more than what I write, to understand what I'm writing about.

I've been studying NLP in my spare time for around 20 years now, and there is absolutely no way I can teach that entire field of study in off-hand comments. Since most people are more interested in practice than theory, I focus my writing to have the least amount of theory that's needed to DO something, or at least come to an understanding of why what you're already doing doesn't work.

If you insist on implementation-quality theory, and you don't "get" representational systems and strategies as the primary model of all behavior (internal as well as external), you're not going to "get" what I write about, because I presuppose that that model is the closest thing we have to a functional theory of mind, from a practical-results perspective. There is nothing in mainstream cognitive psychology that remotely approaches the usefulness of NLP as a model of subjective experience and behavior, which likely means there's nothing approaching its accuracy as an operational model.

(Disclaimer: Popular depictions of NLP are ridiculously shallow, so anyone who hasn't read "NLP, Volume I" or "The Structure Of Magic I", stands a very strong chance of not even remotely knowing what NLP actually is. Even some supposedly-certified "practitioners" have no clue, treating the theory as something they just had to learn to get their certificate, alas. Having a bit more epistemic hygiene probably would be helpful to the discipline as a whole... but then, you can say that about most fields.)

I have copies of The Structure of Magic, Volumes I and II (Hardcover, 1975) to give away. If you want them, please contact me privately. Preference given to those who will either travel to my home in San Rafael, CA, to pick them up or who will attend the next OB/LW meetup in the Bay Area (because then I do not have to pay shipping costs).

The fact that I own the volumes should not be taken as an endorsement of them. In fact, I tend to suspect that Eliezer and those about as smart, knowledgeable and committed to understanding intelligence are better off not wasting their time on NLP and that they should stick to ev psych and hard cognitive science and neuroscience instead.

What I mean by preference is a valuation of how I want the world to be. It's not about cognitive ritual, although cognitive ritual, as a part of the world, may also be mentioned there. Preference is not the sort of thing that does anything, it is a statement of what I think should be done. Through the activity of the mind, the way preference is may influence other things, and conversely, other things may influence preference, as in the case of wireheading, for example.

I don't understand what you mean by "cognitive ritual".

A particular algorithm, or a property thereof, that your mind currently runs. For example, a cognitive ritual of following causal decision theory to determine your actions may result in two-boxing in Newcomb's problem.

(I'm going to break the response down, to make shorter and more focused comments.)

Eby:
Some motivations we're fine with identifying as part of our self-image, so they are also our preferences. But this is not a case of preference influencing motivation; in fact, it's the other way around!

Nesov:
How do you recognize which motivations you choose to identify with, and which you don't? I guess in this model, the criterion may be said to derive from that very preference stuff.

Eby:
I'm not entirely sure I understand what you wrote here, either, except that it sounds like you think we "choose" to identify with things. My observation is that choice is not the default -- we have the ability to choose, but mostly, we don't use it, and when we think we are, we are mostly lying to ourselves.

I'm not talking about deliberative choice; I'm talking about the determination of semantics: how are the motivations, to which you refer as the ones you identify with, different from the rest? What property makes some motivations belong to one class, and others to another?

how are the motivations, to which you refer as the ones you identify with, different from the rest? What property makes some motivations belong to one class, and others to another?

In short: whether they're congruent with your professed values or ideals. More specifically: do they reflect (or at least not conflict with) the image that you wish to present to others?

Of course, "from the inside", it doesn't feel like the image you wish to present to others, it just feels like something that is "good", or at least not "bad".

That is, if you learn that "good people are honest" (honesty = social value) then you are motivated to appear honest, and you will identify with any motivations you have towards actual honesty. But you may also have motivations that are dishonest... and you will reject those and attribute them to some failure of will, the flesh being weak, etc. etc. in the (evolutionary) hope of persuading others that your error was a temporary failure rather than an accurate portrayal of your behavior.

IOW, motivation is primary, while the identification or disidentification is a secondary function that had to evolve later, after the primary "motivation" machinery already existed.

OK. So, your "preference" is actually Hanson's "image". I see your words in the comment as mostly confirming this. Do you hold additional distinctions?

My preference is about your own self-interest (which however need not be about you), and I suspect that, unified with my usage of the term "motivation", you bundle them both in your usage of "motivation". Does it sound correct to you?

OK. So, your "preference" is actually Hanson's "image". I see your words in the comment as mostly confirming this. Do you hold additional distinctions?

None that are directly relevant to the present context, no. (I do hold more near/far distinctions in general.)

My preference is about your own self-interest (which however need not be about you), and I suspect that, unified with my usage of the term "motivation", you bundle them both in your usage of "motivation". Does it sound correct to you?

Not particularly. I'm using motivation largely to refer to the somatic markers (physio-emotional responses) keyed to actions or goal subjects, irrespective of any verbal explanations associated with those markers.

To put it another way, I understand a motivation to be a feeling about a concrete behavioral goal, regardless of how you came to have that feeling. A preference is a feeling about an abstract goal, as opposed to a concrete one.

So, "I prefer excitement over boredom" is distinct from "I am motivated to go rock-climbing today". The former is an abstraction, and can be developed either top-down (e.g. through learning that excitement is a socially-valued quality) or bottom-up (summarization of prior experience or motivations).

However, even if it is derived by summarizing motivations, the preference is merely descriptive, not prescriptive. It can lead me to consciously try a new "exciting" behavior, but if I turn out not to like it, I will not still be motivated to carry out that behavior.

So, our preferences can lead us to situations that cause us to develop motivations, and we can even have the motivation to try things, because of a preference. We can even develop a motivation based on the feelings a preference may give us -- e.g. a person who believes it is "good" to do a certain thing may be able to develop an inherent motivation for the thing, by feeling that "goodness" in association with the thing. Some people do this naturally for some things, others do not.

(Motivations can also form for reasons opaque to us: I'm still trying to track down what's led me to such marathons of posting on LW this weekend, or at least figure out how to redirect it into finishing writing my book. I've probably written a book in my comments by now!)

I'm using motivation largely to refer to the somatic markers (physio-emotional responses) keyed to actions or goal subjects, irrespective of any verbal explanations associated with those markers.

Empirical, one-step-towards-subjective-from-behavioral regularities, stimulus-response pairs. Not "should", but "is". Is this correct? (I'm going to stop asking this question, but you should assume that I meticulously add it to every sentence in which I declare something about your statements, as a way of moving towards mutual understanding of terms.)

To put it another way, I understand a motivation to be a feeling about a concrete behavioral goal, regardless of how you came to have that feeling.

This confuses me, since you start using the word "feeling", which has too many connotations, many of them deeper than the no-strings-attached regularity you seem to have just defined "motivations" to be.

A preference is a feeling about an abstract goal, as opposed to a concrete one.

So, there are two kinds of feelings: motivations (empirical stimulus-response pairs), and preferences, whatever that is.

So, "I prefer excitement over boredom" is distinct from "I am motivated to go rock-climbing today". The former is an abstraction, and can be developed either top-down (e.g. through learning that excitement is a socially-valued quality) or bottom-up (summarization of prior experience or motivations).

So, preferences and motivations are not fundamentally different in your model, but merely the north and south of abstraction in "feelings", in what the "stimulus-response" pairs are about.

(I drifted off after this point, I need more detailed understanding of the questions above in order to go further.)

Empirical, one-step-towards-subjective-from-behavioral regularities, stimulus-response pairs. Not "should", but "is". Is this correct?

Yes! Precisely. In NLP this would be referred to as one "step" in a "strategy".

This confuses me, since you start using the word "feeling", which has too many connotations, many of them deeper than the no-strings-attached regularity you seem to have just defined "motivations" to be.

See "Emotions and Feelings: A Neurobiological Perspective" for what I mean by "feelings". This is also the NLP meaning of the term; i.e., Damasio's paper supports the NLP model of subjective experience to this extent.

So, preferences and motivations are not fundamentally different in your model, but merely the north and south of abstraction in "feelings", in what the "effect-response" pairs are about.

Yes, and as such, they lead to different practical effects, causing us to applaud and speak in favor of our abstract preferences, while only acting on concrete motivations.

Preferences only influence our behavior when they become concrete: for example, all those experiments Robin Hanson keeps mentioning about people's donation behavior depending on whether they've been primed about in-group or out-group behaviors. That's basically a situation where "preference" becomes "motivation" by becoming linked to a specific "near" behavior.

In general, preference becomes motivation by being made concrete, grounded in some sensory-specific context. If we have conflicting motivation, but try to proceed anyway, we experience "mixed feelings" -- i.e., dueling somatic responses.

Now, the "stimulus-response" pairs are really predictions. The brain generates these responses that create feelings as a preparation to take action, and/or a marker of "good" or "bad". So if you change your expectation of a situation, your feeling response changes as well.

For example, if I at first enjoy an activity, and then it becomes tedious, I may at some point "go over threshold" (in NLP parlance) and thus conclude that my tedious experiences constitute a better prediction of what will happen the next time I do it. At that point, my mental model is now updated, so I will no longer be motivated to do that activity.

That's how it works, assuming that I "stack" (i.e. mentally combine) each tedious experience together as part of a trend, versus continuing to consider them as isolated experiences.

(It's not the only way to change a motivation, but it's a common one that people naturally use in practice. One NLP intervention for dealing with an abusive relationship, btw, is to teach a person how to mentally stack their representations of the relationship to "change their mind" about staying in it, by stacking up enough representations of past abuse and expected future abuse to create a strong enough feeling response to induce a model change. In general, NLP is not about anything spooky or magical so much as being able to deliberately replicate cognitive processes that other people use or that the same person uses, but in a different context.)

One place where akrasia comes into the picture, however, is when we don't identify with the motivation to be changed, i.e., it's not a preference. If you don't know what you're getting out of something, you can't readily decide that you don't want that thing any more!

This is why most of my interventions for procrastination involve finding out what prediction underlies the feeling response to a specific behavior being sought or avoided. Our feelings almost always occur in response to predictions made by our brains about the likely "near" outcome of a behavior.

These predictions are almost exclusively nonverbal, and brief. They normally "flash" by at subliminal speeds, faster than you can think or speed-read a single word. (Which makes sense, given that the same machinery is likely used to turn words into "meaning"!) You can learn to observe them, but only if you can quiet your verbal mind from "talking over them". But once you see or hear them, you can hold them in consciousness for examination and modification.

It is these unconscious predictions that produce feeling responses, and thereby direct our actions. (They are probably also the basis of "priming".)

When I or my students successfully change these unconscious representations, the corresponding behavioral motivation also changes. If a technique (whether it be one of mine or one of someone else's) does NOT successfully change the representation, the motivation does not change, either. If one technique doesn't work, we try another, until the desired result is achieved.

This is why I don't care much about theory -- if my hypothesis is that technique #1 will change representation X, and I'm mistaken, it only takes another few minutes to try technique #2 or #3. It's catching the representations in the first place that's much harder to do on your own, not the actual application of the techniques. I've gotten pretty good at guessing what techniques work better for what when I do them on other people, but oddly not as much on myself... which suggests that the certainty I appear to have may have more impact than the specific choice of technique.

Is that a placebo effect? Don't know. Don't care. As long as my student can also produce successes through self-application, and get whatever other results they're after, what difference does it make?

I should note, however, that I personally never got any good results from any self-help techniques until I learned to 1) act "as if" a technique would work, regardless of my intellectual opinions about its probability of working, and 2) observe these sorts of unconscious thoughts. So even if it doesn't matter how you change them, the ability to observe them appears to be a prerequisite.

(Also, these sorts of thoughts sometimes come out randomly in talk therapies, journalling, etc. Some particularly good therapists ask questions that tend to bring them out, and the NLP "Structure Of Magic" books were an attempt to explain how those therapists knew what questions to ask, given that the therapists each belonged to completely different schools of therapy. I use an extremely restricted subset of similar questions in my work, since I focus mainly on certain classes of chronic procrastination, and its patterns are very regular, at least in my experience.)

I say that it's a fact our preferences are barely related to our motivations because it's trivial to show that they function independently -- you've pointed this out yourself. That most people fail to change their motivation by modifying their preferences is more than sufficient to demonstrate the lack of connection in practice between these two brain functions.

Being separate is far from the same thing as being independent, or having no connection with each other. It is only grounds for introducing a concept, for making a distinction.

Also, at this point we are entering a territory where our definitions are still at odds, for example I expect that the sense in which I talk about preferences being modified is significantly different from what you mean by that. The place of that discussion is in this linked thread.

As a total sidenote, your choice of examples is bad. If someone solved photosynthesis in a way that yielded useful, engineerable technologies, it would change your life, and the lives of almost everybody else.

Solar power cheap and powerful enough to run most of our technology would be a massive sea change.

No, I don't think the choice of examples is bad - I had another draft where I used understanding the pathogenesis of some common disease as an example, which is even more clearly beneficial.

My point is that even when rational analysis tells us that something will be very useful, the "sense of curiosity" can disagree. Otherwise, we'd all be fascinated by immunology because of its high probability of giving us a cure for cancer and AIDS. Likewise, discovering that Stonehenge was built by aliens would be practically useless unless it provided some way of contacting the aliens or using their technology, but it would still be considered "interesting".

That's why I didn't include "gives a practical benefit" as a criterion. Instead I said "changes a lot of beliefs", which a better understanding of photosynthesis wouldn't, and "teaches you something that other people want to know", which photosynthesis again wouldn't (lots of people would want the improved solar technology, but not many people would care how it worked).

I'm not sure that's true. Lots of people would want to know how to make the improved solar technology, because it would be immensely commercially valuable.

Also, I tend to think people's beliefs about technology, science, and the way to solve problems would change, given a large change in energy infrastructure.

People use pervasive technology or social structures as a metaphor for many things, especially new ideas. Witness how early 20th-century theorists used mechanical and hydraulic metaphors in their theories of the body and brain, whereas late 20th-century biologists use network, electrical, and systems metaphors that simply didn't exist before.

I agree with Yvain - the pyramid on Mars would radically change our beliefs, make us re-evaluate all of history and archaeology and geology, and reprioritize national science funding.

Yes, that's true. I think I was fighting a rearguard action here, trying to defend my hypothesis. I've changed my votes accordingly. Cheers to you and Yvain.

I know akrasia is more precise, but why don't we call it motivation problems like most people who work on such things do? I can more quickly find information on research to fix 'lack of motivation' than 'akrasia.'

'akrasia' is a behavior and 'lack of motivation' is a hypothesized cause.

[anonymous]:

I expect any Weirdtopia to run mostly on crackpot laws of nature, because they're more fun, and in cases where these laws gave no clear answer due to being mathematically ill-specified, complex arbiters could be constructed to judge.

(Or is fun theory off-topic?)