
Cultivate the desire to X

3 Elo 07 March 2016 03:40AM

Recently I have found myself encouraging people to cultivate the desire to X.

Examples of things you might want to cultivate the desire (or interest) to do include:

  • Diet
  • Organise yourself
  • Plan for the future
  • Be a goal-oriented thinker
  • Build the tools
  • Anything else in the list of common human goals
  • Get healthy sleep
  • Be less wrong
  • Trust people more
  • Trust people less
  • Exercise
  • Take an interest in a topic (cars, fashion, psychology, etc.)

Why do we need to cultivate?

We don't.  But sometimes we can't just "do".  There are plenty of reasonable reasons for not being able to just "do" the thing:

  • Some things are scary
  • Some things need planning
  • Some things need research
  • Some things are hard
  • Some things are a leap of faith
  • Some things can be frustrating to accept
  • Some things seem stupid (well, if exercising is so great, why don't I automatically want to do it?)
  • Other excuses exist.

On some level you have decided you want to do X; on some other level you have not yet committed to doing it.  Easy tasks can get done quickly.  More complicated tasks are not so easy to do right away.

Well, if it were easy enough to just successfully do the thing, you could go ahead and do the thing (TTYL, flying to the moon tomorrow - yeah, nope).  When it's not that easy, usually one of two things is going on:

  1. Your system 1 wants to do the thing, and your system 2 is not sure how.
  2. Your system 2 wants to do the thing, and your system 1 is not sure it wants to do the thing.
  • For example: the healthy part of you wants to diet; the social part of you is worried about the impact on your social life.

(now borrowing from Common human goals)

  • Your desire to live forever wants you to take a medication every morning to increase your longevity; your desire for freedom does not want to be tied down to a bottle of pills every morning.
  • Your desire for a legacy wants you to stay late at work; your desire for quality family time wants you to leave the office early.

The solution:

The solution is to cultivate the interest, or the desire, to do the thing.  From that initial point of interest or desire you can move forward: do some research to either convince your system 1 of the benefits, or work out how to do the thing to convince your system 2 that it is possible/viable/easy enough.  Or maybe, after some research, the thing turns out to seem impossible.  I offer cultivating the desire as a step along the way to working that out.

Short post for today: cultivate the desire to do X.


Meta: time to write 1.5 hours.

My table of contents contains my other writing.

Feedback welcome.

On desiring subjective states (post 3 of 3)

7 torekp 05 May 2015 02:16AM

Carol puts her left hand in a bucket of hot water, and lets it acclimate for a few minutes.  Meanwhile her right hand is acclimating to a bucket of ice water.  Then she plunges both hands into a bucket of lukewarm water.  The lukewarm water feels very different to her two hands.  To the left hand, it feels very chilly.  To the right hand, it feels very hot.  When asked to tell the temperature of the lukewarm water without looking at the thermocouple readout, she doesn't know.  Asked to guess, she's off by a considerable margin.

[Image: water-hot-cold]

Next Carol flips the thermocouple readout to face her (as shown), and practices.  Using different lukewarm water temperatures of 10-35 C, she gets a feel for how hot-adapted and cold-adapted hands respond to the various middling temperatures.  Now she makes a guess - starting with a random hand, then moving the other one and revising the guess if necessary - each time before looking at the thermocouple.  What will happen?  I haven't done the experiment, but human performance on similar perceptual learning tasks suggests that she will get quite good at it.

We bring Carol a bucket of 20 C water (without telling her the temperature) and let her adapt her hands first as usual.  "What do you think the temperature is?" we ask.  She moves her cold hand first.  "Feels like about 20," she says.  Hot hand follows.  "Yup, feels like 20."

"Wait," we ask. "You said feels-like-20 for both hands.  Does this mean the bucket no longer feels different to your two different hands, like it did when you started?"

"No!" she replies.  "Are you crazy?  It still feels very different subjectively; I've just learned to see past that to identify the actual temperature."

In addition to reports on the external world, we perceive some internal states that typically (but not invariably) can serve as signals about our environment.  Let's tentatively call these states Subjectively Identified Aspects of Perception (SIAPs).  Even though these states aren't strictly necessary to know what's going on in the environment - Carol's example shows that the sensation felt by one hand isn't necessary to know that the water is 20 C, because the other hand knows this via a different sensation - they still matter to us.  As Eliezer notes:


If I claim to value art for its own sake, then would I value art that no one ever saw?  A screensaver running in a closed room, producing beautiful pictures that no one ever saw?  I'd have to say no.  I can't think of any completely lifeless object that I would value as an end, not just a means.  That would be like valuing ice cream as an end in itself, apart from anyone eating it.  Everything I value, that I can think of, involves people and their experiences somewhere along the line.

The best way I can put it, is that my moral intuition appears to require both the objective and subjective component to grant full value.


Subjectivity matters.  (I am not implying that Eliezer would agree with anything else I say about subjectivity.)

Why would evolution build beings that sense their internal states?  Why not just have the organism know the objective facts of survival and reproduction, and be done with it?  One thought is that it is just easier to build a brain that does both, rather than one that focuses relentlessly on objective facts.  But another is that this separation of sense-data into "subjective" and "objective" might help us learn to overcome certain sorts of perceptual illusion - as Carol does, above.  And yet another is that some internal states might be extremely good indicators and promoters of survival or reproduction - like pain, or feelings of erotic love.  This last hypothesis could explain why we value some subjective aspects so much, too.

Different SIAPs can lead to the same intelligent behavioral performance, such as identifying 20 degree C water.  But that doesn't mean Carol has to value the two routes to successful temperature-telling equally.  And, if someone proposed to give her radically different, previously unknown, subjectively identifiable aspects of experience, as new routes to the kinds of knowledge she gets from perception, she might reasonably balk.  Especially if this were to apply to all the senses.  And if the subjectively identifiable aspects of desire and emotion (SIADs, SIAEs) were also to be replaced, she might reasonably balk much harder.  She might reasonably doubt that the survivor of this process would be her, or even human, in any sense meaningful to her.

Would it be possible to have an intelligent being whose cognition of the world is mediated by no SIAPs?  I suspect not, if that being is well-designed.  See above on "why would evolution build beings that sense internal states."

If you've read all 3 posts, you've probably gotten the point of the Gasoline Gal story by now.  But let me go through some of the mappings from source to target in that analogy.  A car that, when you take it on a tour, accelerates well, handles nicely, makes the right amount of noise, and so on - one that passes the touring test (groan) - is like a being that can identify objective facts in its environment.  An internal combustion engine is like Carol's subjective cold-sensation in her left hand - one way among others to bring about the externally-observable behavior.  (By "externally observable" I mean "without looking under the hood".)  In Carol's case, that behavior is identifying 20 C water.  In the engine's case, it's the acceleration of the car.  Note that in neither case is this internal factor causally inert.  If you take it away and don't replace it with anything, or even if you replace it with something that doesn't fit, the useful external behavior will be severely impaired.  The mere fact that you can, with a lot of other re-working, replace an internal combustion engine with a fuel cell, does not even begin to show that the engine does nothing.

And Gasoline Gal's passion for internal combustion engines is like my - and I dare say most people's - attachment to the subjective internal aspects of perception and emotion that we know and love.  The words and concepts we use for these things - pain, passion, elation, for some easier examples - refer to the actual processes in human beings that drive the related behavior.  (Regarding which, neurology has more to learn.)  As I mentioned in my last post, a desire can form with a particular referent based on early experience, and remain focused on that event-type permanently.  If one constructs radically different processes that achieve similar external results, analogous to the fuel cell driven car, one gets radically different subjectivity - which we can only denote by pointing simultaneously to both the "under the hood" construction of these new beings, and the behavior associated with their SIAPs, together.

Needless to say, this complicates uploading.

One more thing: are SIAPs qualia?  A substantial minority of philosophers, or maybe a plurality, uses "qualia" in a sufficiently similar way that I could probably use that word here.  But another substantial minority loads it with additional baggage.  And that leads to pointless misunderstandings, pigeonholing, and straw men.  Hence, "SIAPs".  But feel free to use "qualia" in the comments if you're more comfortable with that term, bearing my caveats in mind.

The language of desire (post 2 of 3)

1 torekp 03 May 2015 09:57PM

To the extent that desires explain behavior, it is primarily by meshing with beliefs to favor particular actions.  For example, if I desire to lose 5 lbs, and I believe that exercising a half hour per day will cause me to lose 5 lbs, then this belief-desire pair makes it more likely that I will exercise.  Beliefs have semantic content.  In order to explain an action as part of a belief-desire pair, a desire must also have semantic content: one that in some sense "matches" a relevant part of the belief.  In the example, the matching semantic content is "to lose 5 lbs".

Of course, desires can also explain some behaviors without functioning as part of a belief-desire pair.  I might want something so badly, I start trembling.  No beliefs are required to explain the trembling.  Also, notably, desires usually (but not always) feel like something.  We gesture in the vague direction of these feelings by talking about "a burning desire", or "thirst" (for something that is not a drink), etc.  In these ways, "desire" is a richer concept than what I am really after, here.  That's OK, though; I'm not trying to define desire.  Alternatively, we can talk about "values", "goals", or "utility functions" - anything that interacts with beliefs, via semantics, to favor particular actions.  I will mostly stick to the word "desire", but nothing hangs on it.

So how does this "semantics" stuff work?

Let me start by pointing to the Sequences.  EY explains it pretty well, with some help from pragmatist in the comments.  Like EY, I subscribe to the broad class of causal theories of mental content.  For our purposes here, we need not choose among them.

The reference of the concepts involved in desires (or goals) is determined by their causal history.  To take a simple example, suppose Allie has a secret admirer.  It's Bob, but she doesn't know this.  Bob leaves her thoughtful gifts and love letters, which Allie appreciates so much that she falls in love with "my secret admirer".  She tells all her friends, "I can't wait to meet my secret admirer and have a torrid affair!"  Allie's desire refers to Bob, because Bob is the causal source of all the gifts and love letters, which in turn caused Allie's relevant thoughts and desires.

We could have told the story differently, in a way that made the reference of "my secret admirer" doubtful, or even hopeless.  We could have had many secret admirers, or maybe some pranksters, leaving different gifts and notes at different times, with Allie mistakenly attributing all to one source.  But that would be mean, and in this context pointless.  Let's not tell that story.

Kaj_Sotala brings up another point about desire, which I'd like to quote at length:


In most artificial RL [reinforcement learning] agents, reward and value are kept strictly separate. In humans (and mammals in general), this doesn't seem to work quite the same way. Rather, if there are things or behaviors which have once given us rewards, we tend to eventually start valuing them for their own sake. If you teach a child to be generous by praising them when they share their toys with others, you don't have to keep doing it all the way to your grave. Eventually they'll internalize the behavior, and start wanting to do it. One might say that the positive feedback actually modifies their reward function, so that they will start getting some amount of pleasure from generous behavior without needing to get external praise for it. In general, behaviors which are learned strongly enough don't need to be reinforced anymore (Pryor 2006).
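
To make the reward/value separation concrete, here is a minimal sketch (my own toy Python example under illustrative assumptions - the scenario, names, and numbers are not from the quoted source): a standard RL agent keeps the external reward signal and its learned value estimates as separate quantities, whereas the internalization Kaj_Sotala describes would amount to folding the learned values back into the reward function itself.

    import random

    ACTIONS = ["share_toy", "keep_toy"]

    def external_reward(action):
        # Standard RL: reward arrives from outside the agent (e.g. praise).
        return 1.0 if action == "share_toy" else 0.0

    # Value estimates are learned from reward but kept strictly separate.
    value = {a: 0.0 for a in ACTIONS}
    ALPHA = 0.1  # learning rate

    for _ in range(1000):
        action = random.choice(ACTIONS)               # explore uniformly
        r = external_reward(action)                   # external reward signal
        value[action] += ALPHA * (r - value[action])  # nudge estimate toward r

    print(value)  # converges to roughly {'share_toy': 1.0, 'keep_toy': 0.0}

    def internalized_reward(action):
        # Hypothetical "reward function modification": the learned value now
        # acts as reward in its own right, so the behaviour stays rewarding
        # even after the external praise stops.
        return value[action]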


A desire can form with a particular referent, based on early experience, and remain focused on that object or event-type permanently.  That's a point I will be making much hay of in my third and final post in this mini-sequence.  I think it explains why Gasoline Gal's desire is not irrational - and neither are some desires many people have on my real target subject.

Innovation's low-hanging fruits: on the demand or supply sides?

3 Stuart_Armstrong 25 February 2014 02:58PM

Cross-posted at Practical Ethics.

This is an addendum to a previous post, which argued that we may be underestimating the impact of innovation because we have so much of it.  I noted that we underestimated the innovative aspect of the CD because many other technologies partially overlapped with it, such as television, radio, cinema, the iPod, the Walkman, the landline phone, the mobile phone, the laptop, the VCR and TiVo.  Without these overlapping technologies, we would see the CD's true potential and rate it more highly as an innovation.  Many different technologies can substitute for each other.

But this argument brings out a salient point: if so many innovations overlap or potentially overlap, then there must be many more innovations than purposes for innovations.  Tyler Cowen made the interesting point that the internet isn't as innovative as the flushing toilet (or indeed the television).  He certainly has a point: imagine society without toilets, or without YouTube - which would be more tolerable (or more survivable)?


No Value

18 Raiden 05 May 2012 10:38PM

I am still quite new to LW, so I apologize if this is something that has been discussed before (I did try to search).

I wouldn't normally post such a thing, as I try not to make a habit of complaining about my problems to others, but a solution to this would likely benefit other rationalists (at least, that's the excuse I made to myself).

Essentially, I am currently in a psychological state in which I simply have no strong values.  There is no state I can imagine the world being in that generates a strong emotional reaction in me.  Ever.  In fact, I rarely experience strong emotions at all.  When I do, I savor them, whether they're positive or negative.  I do have some preferences; I would somewhat prefer the world to be some ways rather than others, but never strongly.  I prefer to feel pleasure rather than pain; I prefer the world to be a good place rather than a bad one, but not by much.  Even my desire to have values seems to be a mere preference in much the same way.  I have nothing to protect.

Is there any good solution to this?