Reply to: A "Failure to Evaluate Return-on-Time" Fallacy
Lionhearted writes:
[A] large majority of otherwise smart people spend time doing semi-productive things, when there are massively productive opportunities untapped.
A somewhat silly example: Let's say someone aspires to be a comedian, the best comedian ever, and to make a living doing comedy. He wants nothing else, it is his purpose. And he decides that in order to become a better comedian, he will watch re-runs of the old television cartoon 'Garfield and Friends' that was on TV from 1988 to 1995....
I’m curious as to why.
Why will a randomly chosen eight-year-old fail a calculus test? Because most possible answers are wrong, and there is no force to guide him to the correct answers. (There is no need to postulate a “fear of success”; most ways writing or not writing on a calculus test constitute failure, and so people, and rocks, fail calculus tests by default.)
Why do most of us, most of the time, choose to "pursue our goals" through routes that are far less effective than the routes we could find if we tried?[1] My guess is that here, as with the calculus test, the main problem is that most courses of action are extremely ineffective, and that there has been no strong evolutionary or cultural force sufficient to focus us on the very narrow behavior patterns that would actually be effective.
To be more specific: there are clearly at least some limited senses in which we have goals. We: (1) tell ourselves and others stories of how we’re aiming for various “goals”; (2) search out modes of activity that are consistent with the role, and goal-seeking, that we see ourselves as doing (“learning math”; “becoming a comedian”; “being a good parent”); and sometimes even (3) feel glad or disappointed when we do/don’t achieve our “goals”.
But there are clearly also heuristics that would be useful to goal-achievement (or that would be part of what it means to “have goals” at all) that we do not automatically carry out. We do not automatically:
- (a) Ask ourselves what we’re trying to achieve;
- (b) Ask ourselves how we could tell if we achieved it (“what does it look like to be a good comedian?”) and how we can track progress;
- (c) Find ourselves strongly, intrinsically curious about information that would help us achieve our goal;
- (d) Gather that information (e.g., by asking how folks commonly achieve our goal, or similar goals, or by tallying which strategies have and haven’t worked for us in the past);
- (e) Systematically test many different conjectures for how to achieve the goals, including methods that aren’t habitual for us, while tracking which ones do and don’t work;
- (f) Focus most of the energy that *isn’t* going into systematic exploration, on the methods that work best;
- (g) Make sure that our "goal" is really our goal, that we coherently want it and are not constrained by fears or by uncertainty as to whether it is worth the effort, and that we have thought through any questions and decisions in advance so they won't continually sap our energies;
- (h) Use environmental cues and social contexts to bolster our motivation, so we can keep working effectively in the face of intermittent frustrations, or temptations based in hyperbolic discounting;
.... or carry out any number of other useful techniques. Instead, we mostly just do things. We act from habit; we act from impulse or convenience when primed by the activities in front of us; we remember our goal and choose an action that feels associated with our goal. We do any number of things. But we do not systematically choose the narrow sets of actions that would effectively optimize for our claimed goals, or for any other goals.
Why? Most basically, because humans are only just on the cusp of general intelligence. Perhaps 5% of the population has enough abstract reasoning skill to verbally understand that the above heuristics would be useful once these heuristics are pointed out. That is not at all the same as the ability to automatically implement these heuristics. Our verbal, conversational systems are much better at abstract reasoning than are the motivational systems that pull our behavior. I have enough abstract reasoning ability to understand that I’m safe on the glass floor of a tall building, or that ice cream is not healthy, or that exercise furthers my goals... but this doesn’t lead to an automatic updating of the reward gradients that, absent rare and costly conscious overrides, pull my behavior. I can train my automatic systems, for example by visualizing ice cream as disgusting and artery-clogging and yucky, or by walking across the glass floor often enough to persuade my brain that I can’t fall through the floor... but systematically training one’s motivational systems in this way is also not automatic for us. And so it seems far from surprising that most of us have not trained ourselves in this way, and that most of our “goal-seeking” actions are far less effective than they could be.
Still, I’m keen to train. I know people who are far more strategic than I am, and there seem to be clear avenues for becoming far more strategic than they are. It also seems that having goals, in a much more pervasive sense than (1)-(3), is part of what “rational” should mean, will help us achieve what we care about, and hasn't been taught in much detail on LW.
So, to second Lionhearted's questions: does this analysis seem right? Have some of you trained yourselves to be substantially more strategic, or goal-achieving, than you started out? How did you do it? Do you agree with (a)-(h) above? Do you have some good heuristics to add? Do you have some good ideas for how to train yourself in such heuristics?
[1] For example, why do many people go through long training programs “to make money” without spending a few hours doing salary comparisons ahead of time? Why do many who type for hours a day remain two-finger typists, without bothering with a typing tutor program? Why do people spend their Saturdays “enjoying themselves” without bothering to track which of their habitual leisure activities are *actually* enjoyable? Why do even unusually numerate people fear illness, car accidents, and bogeymen, and take safety measures, but not bother to look up statistics on the relative risks? Why do most of us settle into a single, stereotyped mode of studying, writing, social interaction, or the like, without trying alternatives to see if they work better -- even when such experiments as we have tried have sometimes given great boosts?
Not quite - I'm also saying that people's choice of words is rarely random or superficial, and tends to reflect the deeper processes by which they are reasoning... and vice versa. (i.e., the choice of words tends to have non-random, non-superficial effects on the thinking process).
Note that how a question is phrased makes a big difference to survey results, so if you think this somehow doesn't apply to you, then you are mistaken.
It only feels like such things don't apply to ourselves, like the people in the "Mindless Eating" popcorn experiments who insist that the size of the popcorn container had nothing to do with how much they ate. They (and you) only think this because of the limited point of view from which the observation is made.
Of course - for the same reason that people don't think the size of the container makes any difference to how much they eat. It's easy to write off unconscious influences.
That being said, choice of questions makes a big difference to answers, but it's not solely a matter of priming. After all, if you use the words "What do I want?" and go on internally translating that in the same way as you asked, "What will make me happy?", then of course nothing will change!
So, it's not merely the surface linguistics that matter, but the deep structure of how you ask yourself, and the kind of thinking you intend to apply. Based on the challenge you described, my guess is that the surface structure of your questions is in fact a reflection of how you're doing the questioning... because for most people, most of the time, it is.
The reason I quoted "I'm mostly limited" is because I wanted to highlight that the thought process you appeared to be using was one in which you already assume you're limited, before you even know what it is that you want! (It sounded to me as though you were implying that it doesn't matter if you know what you want, because you're not really going to get it anyway -- and that wasn't just from that one phrase; that was just the easiest one to highlight.)
This sort of assumption is not a trivial matter; it is inherent to how we limit ourselves. When we make an assumption, our brains do not challenge it; instead, they filter out disconfirming evidence. That applies even to things like thinking you're not good at knowing what you want!
Social constraints aren't that important, since people with the appropriate programming can work around them. And choosing effective questions to ask yourself falls under the heading of "programming", in the verb sense of the word.
I have tons of "programming" tricks, especially ones for removing social programming. Teaching them, however, is a non-trivial task, for reasons I've explained here before.
One of the key problems is that people confabulate things and then deny having done so. Alicorn's notion of "luminosity" is closely akin to the required skill, but it is very easy for people to convince themselves they are doing it when they are actually not even close. What's more, unless somebody is seriously motivated to learn, they won't be able to pick it up from a few text comments.
(Contra MoR!Harry's statement that admitting you're wrong is the hardest thing to learn, IMO the hardest thing to learn is to take seriously the idea that you don't already know the answers to what's going on in your head... on an emotional and experiential basis, rather than as merely an intellectual abstraction that you don't really believe. Or, to put it another way, most people claim to "believe" the idea, while still anticipating as if they already know how things in their head work.)
Anyway, for that reason, I mostly don't bother discussing such things on LW in the abstract, as it quickly leads to attempts to have an intellectual discussion about experiential phenomena: dancing about architecture, so to speak.
Instead, I usually try to limit myself to throwing out cryptic hints so that people with the necessary motivation and/or skill can reconstruct the bigger picture for themselves, a bit like Harry and the "42" envelope. ;-)
While it's true that I can't rule out things that I can't detect, I can't really believe in them, either.
I understand where you're coming from. You've tried much harder than most people do to understand your own emotions and motivations, and you're pretty sure you've actually done so. I agree that there are many people who think they have, but haven't. Similarly, sometimes people think they're really trying, but aren't.
I'm impressed with how much you know about my thoughts :)
I won't suggest that we're fundamentally different in any way, but I do so...