
Bad reasons for a rationalist to lose

30 Post author: matt 18 May 2009 10:57PM

Reply to: Practical Advice Backed By Deep Theories

This post was inspired by what looks like a very damaging reticence to embrace and share brain hacks that might work for only some of us and are not backed by Deep Theories. It is written in support of tinkering with brain hacks and self-experimentation where deep science and large trials are not available.

Eliezer has suggested that, before he will try a new anti-akrasia brain hack:

[…] the advice I need is from someone who reads up on a whole lot of experimental psychology dealing with willpower, mental conflicts, ego depletion, preference reversals, hyperbolic discounting, the breakdown of the self, picoeconomics, etcetera, and who, in the process of overcoming their own akrasia, manages to understand what they did in truly general terms - thanks to experiments that give them a vocabulary of cognitive phenomena that actually exist, as opposed to phenomena they just made up.  And moreover, someone who can explain what they did to someone else, thanks again to the experimental and theoretical vocabulary that lets them point to replicable experiments that ground the ideas in very concrete results, or mathematically clear ideas.

This doesn't look to me like an expected utility calculation, and I think it should be. It looks like an attempt to justify why he can't be expected to win yet, and that may be deeply wrongheaded.

I submit that we don't "need" (emphasis in original) this stuff, it'd just be super cool if we could get it. We don't need to know that the next brain hack we try will work, and we don't need to know that it's general enough that it'll work for anyone who tries it; we just need the expected utility of a trial to be higher than that of the other things we could be spending that time on.

So… this isn't other-optimizing, it's a discussion of how to make decisions under uncertainty. What do all of us need to make a rational decision about which brain hacks to try?

  • We need a goal: Eliezer has suggested "I want to hear how I can overcome akrasia - how I can have more willpower, or get more done with less mental pain". I'd push cost in with something like "to reduce the personal costs of akrasia by more than the investment in trying and implementing brain hacks against it, plus the expected profit on other activities I could undertake with that time".
  • We need some likelihood estimates:
    • Chance of a random brain hack working on first trial: ?, second trial: ?, third: ?
    • Chance of a random brain hack working on subsequent trials (after the third - the noise of mood, wakefulness, etc. is large, so subsequent trials surely have non-zero chance of working, but that chance will probably diminish): →0
    • Chance of a popular brain hack working on first (second, third) trial: ? (GTD is lauded by many, many people; your brother-in-law's homebrew brain hack is less well tried)
    • Chance that a brain hack that would work in the first three trials would seem deeply compelling on first being exposed to it: ?
      (can these books be judged by their covers? how does this chance vary with the type of exposure? what would you need to do to understand enough about a hack that would work to increase its chance of seeming deeply compelling on first exposure?)
    • Chance that a brain hack that would not work in the first three trials would seem deeply compelling on first being exposed to it: ? (false positives)
    • Chance of a brain hack recommended by someone in your circle working on first (second, third) trial: ?
    • Chance that someone else will read up "on a whole lot of experimental psychology dealing with willpower, mental conflicts, ego depletion, preference reversals, hyperbolic discounting, the breakdown of the self, picoeconomics, etcetera, and who, in the process of overcoming their own akrasia, manages to understand what they did in truly general terms - thanks to experiments that give them a vocabulary of cognitive phenomena that actually exist, as opposed to phenomena they just made up.  And moreover, someone who can explain what they did to someone else, thanks again to the experimental and theoretical vocabulary that lets them point to replicable experiments that ground the ideas in very concrete results, or mathematically clear ideas", all soon: ? (pretty small?)
    • What else do we need to know?
  • We need some time/cost estimates (these will vary greatly by proposed brain hack):
    • Time required to stage a personal experiment on the hack: ?
    • Time to review and understand the hack in sufficient detail to estimate the time required to stage a personal experiment: ?
    • What else do we need?

… and, what don't we need?

  • A way to reject the placebo effect - if it wins, use it. If it wins for you but wouldn't win for someone else, then they have a problem. We may choose to spend some effort helping others benefit from this hack, but that seems to be a different task - it's irrelevant to our goal.


How should we decide how much time to spend gathering data and generating estimates on matters such as this? How much is Eliezer setting himself up to lose, and how much am I missing the point?
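The goal, likelihood estimates, and cost estimates above can be combined into a back-of-the-envelope expected-value calculation. Here is a minimal sketch; every number in it is a hypothetical placeholder, not an estimate I'm defending:

```python
# Back-of-the-envelope expected value of trialing a brain hack.
# All inputs are hypothetical placeholders; plug in your own estimates.

def hack_trial_ev(p_success,        # chance the hack works for you on this trial
                  benefit,          # value to you if the hack sticks (e.g. in $)
                  trial_cost,       # time/effort cost of one trial (same units)
                  alternative_ev):  # expected value of spending that time elsewhere
    """EV of running one trial, net of the opportunity cost."""
    return p_success * benefit - trial_cost - alternative_ev

# Hypothetical numbers: a popular hack with a 15% chance of working on
# first trial, worth $2000 if it sticks, costing $50 of time to trial,
# versus $30 of expected value from the best alternative use of that time.
ev = hack_trial_ev(p_success=0.15, benefit=2000, trial_cost=50, alternative_ev=30)
print(ev)  # 220.0 -> positive, so the trial is worth running
```

The point of the sketch is only that the trial doesn't need a high chance of working: a cheap trial with a large payoff clears the bar even at modest probabilities, which is the whole argument against waiting for Deep Theories.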

Comments (73)

Comment author: AnnaSalamon 19 May 2009 05:16:25AM *  13 points [-]

It might be worth separating the claim "Eliezer is wrong about what changes he, personally, should try" from the claim

"It is generally good to try many plausible changes, because:

  1. Some portion will work;
  2. Trying the number of approaches it takes to find an improvement is often less expensive than being stuck in the wrong local optimum;
  3. Many of us humans tend to keep on doing the same old thing because it's easy, comfortable, safe-feeling, or automatic, even when sticking with our routines is not the high-expected-value thing to do. We can benefit from adopting heuristics of action and experimentation to check such tendencies."

The second claim seems fairly clearly right, at least for some of us. (People may vary in how easily they can try on new approaches, and on what portion of handed-down approaches work for them. OTOH, the ability to easily try new approaches is itself learnable, at least for many of us.) The first claim is considerably less clear, particularly since Eliezer has much data on himself that we do not, and since after trying many hacks for a given not-lightcone-destroying problem without any of the hacks working, expected value calculations can in fact point to directing one’s efforts elsewhere.

Maybe we could abandon Eliezer’s specific case, and try to get into the details of: (a) how to benefit from trying new approaches; and (b) what rules of thumb for what to try, and what to leave alone, yield high expected life-success?

Comment author: tut 25 May 2009 02:57:03PM 1 point [-]

One more reason for the list is that doing new stuff (or doing stuff in new ways, but I repeat myself) promotes neurogenesis.

Comment author: zaph 19 May 2009 12:53:52PM 3 points [-]

I think that if there were a straightforward hack like the one EY is looking for, he would know about it already. I just don't really believe that such a hack exists, based on my admittedly meager readings in experimental psychology. Further, while the idea of a "mind hack" is a cute metaphor, it can be misguided. Computer hackers literally create code that directs processes. We can at best manipulate our outside environment in ways that we hope will affect what is still a very mysterious brain. What EY's looking for would be the result of a well-funded, decades-long research project. Unless there truly is a Dharma Initiative looking into these things while staying behind the scenes, I don't think there's going to be a journal article that provides the profound insight he's looking to find.

I do want to mention something about Seth Roberts, which he sort of casually mentions in The Shangri-La Diet. He wrote something along the lines that he was eating much less frequently, probably one full meal a day. That's something referred to as intermittent fasting (IF). What the Shangri-La Diet book misses, I would postulate, is how Seth used the flavorless calories to transition to that kind of diet. IF is being suggested as a way to control calories because people's bodies cue hunger to when they're accustomed to eating. If you aren't accustomed to eating, you eat a bit less (since you're only filling your stomach the once, or so goes the idea). I certainly don't think that noticing this gives me the complete picture of how diets should now be constructed. But I do feel that Seth Roberts, attentive as he is, did not fully consider all the changes he had made, and treated his reduced meal frequency solely as an aftereffect. In writing his popular book, he did not consider all the hacks that he had put into place for himself.

Akrasia-conquerors will need to find the ways to win against their lesser but still powerful drives. Teachers of akrasia-conquering will need to be able to honestly detail everything that they did, which will probably require very keen observers as peers and students. The need for a perfect system to be in place before one attempts to overcome akrasia is an example of akrasia.

Comment author: pjeby 19 May 2009 02:38:40AM 14 points [-]

Awesomely summarized, so much so that I don't know what else to say, except to perhaps offer this complementary anecdote.

Yesterday, I was giving a workshop on what I jokingly call "The Jedi Mind Trick" -- really the set of principles that makes monoidealism techniques (such as "count to 10 and do it") either work or not work. Towards the end, a woman in the group was having some difficulty applying it, and I offered to walk through an example with her.

She picked the task of organizing some files, and I explained to her what to say and picture in her mind, and asked, "What comes up in your mind right now?"

And she said, "well, I'm on a phone call, I can't organize them right now." And I said "Right, that's standard objection #1 - "I'm doing something else". So now do it again..." [I repeated the instructions]. "What comes to mind?"

She says, "Well, it's that it'll be time to do it later".

"Standard objection #2: it's not time right now, or I don't have enough time. Great. We're moving right along. Do it again. What comes to mind?"

"Well, now I'm starting to see more of what I'd actually be doing if I were doing it, the visualization is getting a lot clearer."

"Terrific, do it again. Now, don't try to actually do the task, just pay attention to what you're seeing and feeling, and you may begin to notice some of your muscles beginning to respond, like they're trying to actually do some of the things you're picturing, like starting to twitch..."

And she burst out laughing, because, she said, her legs had already started twitching and she was feeling like, "well, the files are right over there we could just go and get started..."

Had she given up at standard objection #1 or #2, she wouldn't have learned the technique or gotten the result. But it's not the content of the objection that matters, it's that ANY objection that stops you from actually trying something useful, means you fail. You lose. You are not being a smart, rational skeptic, you're being a dumbass loser.

In the workshop, I explained how our own objections and doubts are also doing the Jedi Mind Trick... but on US. "It's not time now..." they say, and like a hypnotized stormtrooper we nod and agree, "It's not time now." And it doesn't matter if those doubts are saying, "It's not time now" or "It's not peer-reviewed" -- because you still lose, either way.

However, if you simply ignore those doubts and objections, and continue what you're doing, they cannot stop you. If the objection you think is real, is in fact real, well, then you've only lost a little time by trying. But if you believe an objection that isn't real, then you've lost much, much more than that.

Much of the time, the primary function of a (good) personal coach or teacher -- whether in pickup, personal development, or even business and marketing! -- is simply to drag someone (kicking and screaming, if necessary) past their objections into actually doing something the teacher or coach already knows will work.

And when that happens, what the student usually finds is that it isn't really as hard as they thought it would be, or that, yes, that crazy mumbo-jumbo actually works, no matter how irrational it might have sounded before they had any personal point of reference.

The woman on the call only needed about two minutes, to try a technique four times in a row and get a result. If she'd been doing it on her own, she might have given up after only one try. And a lot of folks on LW would likely not have tried even that once!

On LW, I mostly abide with polite patience those people who talk about the stuff I teach as if it's a matter of variation from person to person whether stuff works, or that things sometimes work and sometimes don't, or whatever blah-blah fudge-factor nonsense they individually prefer. That's all well and good here, because those people are not my clients.

But if I were to accept that sort of bullshit from one of my clients, then I would have failed them. It's all very well and good for the client to come to me believing that his or her problems are special and unique and that, in all the world, they are the worst person ever at doing something. But if they leave me still thinking that, then I have not done my job.

My job is to say, fuck that bullshit. Do this. No, not that, this. Good. Do it again. Again. That's better. Now do this.

Dunno about rationality, but ISTM that's how a dojo is actually supposed to work. If the master sat there listening to people's inane theories about how they need to punch differently than everybody else, or their insistence that they really need to understand a complete theory of combat, complete with statistical validation against a control group, before they can even raise a single fist in practice, that master would have failed their students AND their Art.

Just as EY fails his students and his art by the public positions he has taken on his weight and akrasia. To fail at solving those problems is fine. To excuse his failure to even try is not, even by the rules of his own art.

(And remember, "I don't have time" is just standard objection #2.)

Comment author: Annoyance 19 May 2009 02:18:37PM *  7 points [-]

Excellent comment. I have only two objections. First, this statement:

But it's not the content of the objection that matters, it's that ANY objection that stops you from actually trying something useful, means you fail. You lose.

is good on its merits, but I caution everyone to be careful about asserting that some technique or other is "something useful". There are plenty of reasons not to try any random thing that enters our heads, and even when we're engaged in a blind search, we shouldn't suspend our evaluative functions completely, even though they may be assuming things that blind us to the solution we need. They also keep us from chopping our legs off when we want to deal with a stubbed toe.

My second objection deals with the following:

If the master sat there listening to people's inane theories about how they need to punch differently than everybody else, or their insistence that they really need to understand a complete theory of combat, complete with statistical validation against a control group, before they can even raise a single fist in practice, that master would have failed their students AND their Art. Just as EY fails his students and his art by the public positions he has taken on his weight and akrasia.

What grounds are there for assigning EY the status of 'master'? Hopefully in a martial arts dojo there are stringent requirements for the demonstration of skill before someone is put in a teaching position, so that even when students aren't personally capable of verifying that the 'master' has actually mastered techniques that are useful, they can productively hold that expectation.

When did EY demonstrate that he's a master, and how did he supposedly do so?

Comment author: thomblake 19 May 2009 02:27:31PM 2 points [-]

Hopefully in a martial arts dojo there are stringent requirements for the demonstration of skill before someone is put in a teaching position

There really aren't, though one does need to jump through some hoops. That's part of what I like about this analogy.

Comment author: Annoyance 19 May 2009 02:48:28PM 1 point [-]

A lot of martial arts schools are more about "following the rules" and going through the motions of ritual forms than learning useful stuff.

As has been mentioned here before multiple times, many martial artists do very poorly in actual fights, because they've mastered techniques that just aren't very good. Those techniques were never designed in light of the goals and strategies that people who really want to win physical combat will use. Against brutally effective and direct techniques, they lose.

Humans like to make rituals and rules for things that have none. This is a profound weakness and vulnerability, because they also tend to lose sight of the distinction between reality and the rules they cause themselves to follow.

Comment author: MichaelVassar 19 May 2009 06:08:00PM 1 point [-]

There are no "things that have no rules". If there were, you couldn't perceive them in the first place in order to make up rules about them.

Comment author: Annoyance 19 May 2009 06:26:06PM 0 points [-]

Read that as "socially-recognized principles as to how something is to be done for things that physics permits in many different ways".

Spill the salt, you must throw some over your shoulder. Step on a crack, break your mother's back. Games and rituals. When people forget they're just games, problems arise.

Comment author: jscn 19 May 2009 07:50:39PM 0 points [-]

This tendency can be used for good, though. As long as you're aware of the weakness, why not take advantage of it? Intentional self-priming, anchoring, rituals of all kinds can be repurposed.

Comment author: Annoyance 20 May 2009 02:48:05PM -1 points [-]

Because repetition tends to reinforce things, both positive and negative.

You might be able to take advantage of a security weakness in your computer network, but if you leave it open other things will be able to take advantage of it too.

It's far better to close the hole and reduce vulnerability, even if it means losing access to short-term convenience.

Comment author: pjeby 19 May 2009 05:26:38PM -1 points [-]

There are plenty of reasons not to try any random thing that enters into our heads

...and most of those reasons are fallacious.

The opposite of every Great Truth is another great truth: yes, you need to look before you leap. But he who hesitates is lost. (Or in Richard Bandler's version, which I kind of like better, "He who hesitates... waits... and waits... and waits... and waits...")

When did EY demonstrate that he's a master, and how did he supposedly do so?

I never said he did.

Comment author: PhilGoetz 19 May 2009 03:36:59AM *  10 points [-]

He's tried, or he wouldn't have had the material to make those posts.

I appreciate your comments, and they're a good counterpoint to EY's point of view. But the fact that you need to make an assumption in order to be an effective teacher, because it's true most of the time, doesn't mean it's always true. You are making an expected-value calculation as a teacher, perhaps subconsciously:

  • If I accept that my approach doesn't work well with some people, and work with those people to try to find an approach that works for them, I will be able to effectively coach 50 people per year (or whatever).
  • If I dismiss the people whom my approach doesn't work well for as losers, and focus on the people whom my approach works well for, I'll be able to effectively coach 500 people per year.

You are also taking EY's claim that not every technique works well for every person, and caricaturing it as the claim that there is a 1-1 correspondence between people and techniques that work for them. He never said that.

The specific comments Eliezer has made, about people erroneously assuming that what worked for them should work for other people, were taken from real life and were, I think, also true and correct. In order to convince me that those specific examples were wrong, you would have to address those specific examples in detail and make a strong case why they were not really as he described them. I would rather see you narrow your claims to something reasonable than make these erroneous blanket denunciations, because they distract from the valuable things you have to say.

You don't need to duke it out with EY over who's the alpha teacher. :)

Comment author: pjeby 19 May 2009 04:19:19AM 8 points [-]

You are making an expected-value calculation as a teacher, perhaps subconsciously

No. I'm making the assumption that, until someone has actually tried something, they aren't in a position to say whether or not it works. Once someone has actually tried something, and it doesn't work, then I find something else for them to do. I don't give up and say, "oh, well I guess that doesn't work for you, then."

When I do a one-on-one consult, I don't charge someone until and unless they get the result we agree on as a "success" for that consultation. If I can't get the result, I don't get paid, and I'm out the time.

Do I make sure that the definition of "success" is reasonably in scope for what I can accomplish in one session? Sure. But I don't perform any sort of filtering (other than that which may occur by selection or availability bias, e.g. having both motivation and funds) to determine who I work with.

You are also taking EY's claim that not every technique works well for every person, and caricaturing it as the claim that there is a 1-1 correspondence between people and techniques that work for them. He never said that.

I didn't say he did, or that anybody did. What I said is that people assume they are unique and special and nothing will work for them. A LOT of people believe this, because they're under the mistaken impression that they tried 50 different things, when in fact they've been making the same mistakes, 50 different times, without ever being aware of the mistake.

The specific comments Eliezer has made, about people erroneously assuming that what worked for them should work for other people, were taken from real life and were, I think, also true and correct.

No argument there. However, when people assume that what worked for them will work for other people, they are actually mostly right.

What they are mistaken about is that 1) they're actually fully communicating what they did, and that 2) other people will be able to accurately reproduce the internal steps as well as the external and easy-to-describe ones.

So I agree at the level of the result, but I disagree about the cause. At the brain hardware level, human beings are just not that different from one another. We differ more at the software, filtering, and meta-cognitive levels, which is where the details of communication and teaching trip up the transfer of effective techniques.

In order to convince me that those specific examples were wrong,

Why would I want to? My point is only that Eliezer whining about things not working and demanding proof is counterproductive to his own goals and counter to his professed values and art. This is independent of whether he gives up or not, or whose advice or example he seeks.

I would rather see you narrow your claims to something reasonable

What claims do you mean?

Comment author: Vladimir_Nesov 19 May 2009 09:33:12AM *  8 points [-]

No. I'm making the assumption that, until someone has actually tried something, they aren't in a position to say whether or not it works.

This is a wrong assumption. The correctness of a decision to even try something directly depends on how certain you are it'll work. Don't play lotteries, don't hunt bigfoot, but commute to work risking death in a traffic accident.

Comment author: pjeby 19 May 2009 05:35:19PM 1 point [-]

The correctness of a decision to even try something directly depends on how certain you are it'll work.

...weighed against the expected cost. And for the kind of things we're talking about here, a vast number of things can be tried at relatively small cost compared to one's ultimate desired outcome, since the end result of a search is something you can then go on to use for the rest of your life.

Comment author: Vladimir_Golovin 20 May 2009 06:14:24AM *  4 points [-]

Precisely. There are self-help techniques that can be tried in minutes, even in seconds. I don't see a single reason for not allocating a fraction of one's procrastination time to trying mind hacks or anything else that might help against akrasia.

Say, if my procrastination time is 3 hours per day, I could allocate 10% of that -- 18 minutes. How long does it take to speak a sentence "I will become a syndicated cartoonist"? 10 seconds at maximum -- given 18 minutes, that's 108 repetitions!

But what if it doesn't work? Oh noes, I could kill 108 orcs during that time and perhaps get some green drops!

Comment author: pjeby 20 May 2009 06:20:53AM 0 points [-]

Say, if my procrastination time is 3 hours per day, I could allocate 10% of that -- 18 minutes. How long does it take to speak a sentence "I will become a syndicated cartoonist"? 10 seconds at maximum -- which means 108 repetitions into 18 minutes!

IAWYC, but if you want to learn to do it correctly, you'd be better off using fewer repetitions and suggesting something aimed at provoking an immediate response, such as "I'm now drawing a cartoon"... and carefully paying attention to your inner imagery and physical responses, which are the real meat of this family of techniques.

Comment author: Vladimir_Golovin 20 May 2009 06:34:34AM *  2 points [-]

PJ, I think that discussing details of particular mindhacks is off-topic for this thread -- let's discuss them here. That was just an example. (As for myself, I use an "I want" format, I don't repeat it anywhere near 108 times, and I do aim at immediate things.)

Comment author: Vladimir_Nesov 20 May 2009 01:06:05PM *  0 points [-]

Vladimir, it doesn't matter that a lottery ticket costs only 1 cent. Doesn't matter at all. It only matters that you don't expect to win by buying it.

Or maybe you do expect to win from a deal by investing 1 cent, or $10000, in which case by all means do so.

Comment author: Vladimir_Golovin 20 May 2009 01:19:50PM *  0 points [-]

If I were to choose between throwing one cent away and buying a lottery ticket on it, I'd buy the ticket. (I don't consider here additional expenses such as the calories I need to spend on contracting my muscles to reach the ticket stand etc. I assume that both acts -- throwing away and buying the ticket -- have zero additional costs, and the lottery has a non-zero chance of winning.)

Comment author: Vladimir_Nesov 20 May 2009 01:47:08PM *  1 point [-]

The activity of trying the procrastination tricks must be shown to be at least as good as the procrastination activity, which would be a tremendous achievement, placing these tricks far above their current standing.

You are not doing the procrastination-time activity because it's the best thing you could do, that's the whole problem with akrasia. If you find any way of replacing procrastination activity with a better procrastination activity, you are making a step away from procrastination, towards productivity.

So, you consider trying anti-procrastination tricks instead of procrastinating an improvement. But the truth of this statement is far from obvious, and it's outright false for at least my kind of procrastination. (I often procrastinate by educating myself, instead of getting things done.)

Comment author: Vladimir_Golovin 20 May 2009 02:07:48PM *  0 points [-]

Yep, my example with orcs vs. tricks was a degenerate case -- it breaks down if the procrastination activity has at least some usefulness, which is certainly the case with self-education as a procrastination activity.

But this whole area is a fertile ground for self-rationalization. In my own case, it seems more productive to simply deem certain procrastination activities as having zero benefit than to actually try to assess their potential benefits compared to other activities.

(BTW, my primary procrastination activity, PC games, is responsible for my knowledge of the English language, which I consider an enormous benefit. Who knew.)

Comment deleted 19 May 2009 07:23:06AM [-]
Comment author: pjeby 19 May 2009 05:14:49PM 0 points [-]

Hypnotic responsiveness as can be measured by the stanford test

If you mean the Hilgard scale, ask a few professional hypnotists how useful it actually is. Properly-trained hypnotists don't use a tape-recorded monotone with identical words for every person; they adjust their pace, tone, and verbiage based on observing a person's response in progress, to maximize the response. So unless the Stanford test is something like timing how long a master hypnotist takes to produce some specified hypnotic phenomena, it's probably not very useful.

Professional hypnotists also know that responsiveness is a learned process (see also the concept of "fractionation"), which means it's probably a mistake to treat it as an intrinsic variable for measuring purposes, unless you have a way to control for the amount of learning someone has done.

So, as far as this particular variable is concerned, you're observing the wrong evidence.

Personal development is an area where science routinely barks up the wrong tree, because there's a difference between "objective" measurement and maximizing utility. Even if it's a fact that people differ, operating as if that fact were true leads to less utility for everyone who doesn't already believe they're great at something.

Comment deleted 20 May 2009 03:05:33AM *  [-]
Comment author: pjeby 20 May 2009 03:49:52AM -1 points [-]

Professional scientists studying hypnosis observe that specific training can alter the hypnotic responsiveness from low to high in as much as 50% of cases.

Indeed. What's particularly important if you're after results, rather than theories, is that just because those other 50% didn't go from low to high, doesn't mean that there wasn't some different form, approach, environment, or method of training that wouldn't have produced the same result!

IOW, if the training they tested was 100% identical for each person, then the odds that the other 50% were still trainable is extremely high.

(And since most generative (as opposed to therapeutic) self-help techniques implicitly rely on the same brain functions that are used in hypnosis (monoidealistic imagination and ideomotor or ideosensory responses), this means that the same things can be made to work for everyone, provided you can train the basic skill.)

I have become convinced over time that there is a far greater heritability component than I would have liked.

Robert Fritz once wrote something about how if you're 5'3" you're not going to be able to win the NBA dunking contest... and then somebody did just that. It ain't what you've got, it's what you do with what you have got.

(Disclaimer: I don't remember the winner's name or even if 5'3" was the actual height.)

It's also rare that any quality we're born with is all bad or all good; what gives with one hand takes away with the other, and vice versa. The catch is to find the way that works for you.

Some of my students work better with images, some with sounds, others still with feelings. Some have to write things down, I like to talk things out. These are all really superficial differences, because the steps in the processes are still basically the same. Also, even though my wife is more "auditory" than I am, and doesn't visualize as well consciously... that doesn't mean she can't. (Over the last few years, she's gradually gotten better at doing processes that involve more visual elements.)

(Also, we've actually tried swapping around our usual modes of cognition for a day or two, which was interesting. When she took on my processing stack, we got along better, but when I took on hers, I was really stressed and depressed... but I had a lot more sympathy for some of her moods after that!)

On the positive side, the importance of 'natural talent' in acquiring expert skills is one area where the genetic component tends to be overestimated most of the time. When it comes to acquiring specialised skills, consistent effortful practice makes all the difference and natural talent is almost irrelevant.

Absolutely! Dweck's fixed and growth mindsets are absolutely central to my work. I used to call them "naturally struggling" and "naturally successful" -- well, I still do for marketing reasons. But Dweck showed with brilliant clarity where the mindsets come from: struggle results from believing that your ability in any area is a fixed quantity, rather than a variable one under your personal control.

If somebody wants a scientifically validated reason to believe what I'm saying in this thread, they need look no further than Dweck's mindsets research. It offers compelling scientific verification of the idea that thinking your ability is fixed really IS "dumbass loser" thinking!

Comment author: Eliezer_Yudkowsky 20 May 2009 08:41:00AM 9 points [-]

Indeed. What's particularly important if you're after results, rather than theories, is that just because those other 50% didn't go from low to high, doesn't mean that there wasn't some different form, approach, environment, or method of training that wouldn't have produced the same result!

Um... PJ, this is just what psychoanalysts said... and kept on saying after around a thousand studies showed that psychoanalysis had no effect statistically distinguishable from just talking to a random intelligent caring listener.

You need to read more basic rationality material, along the lines of Robyn Dawes's "Rational Choice in an Uncertain World". There you will find the records of many who engaged in this classic error mode and embarrassed themselves accordingly. You do not get to just flush controlled experiments down the toilet by hoping, without actually pointing to any countering studies, that someone could have done something differently that would have produced the effect you want the study to produce but that it didn't produce.

You know how there are a lot of self-indulgent bad habits you train your clients to get rid of? This is the sort of thing that master rationalists like Robyn Dawes train people to stop doing. And you are missing a lot of the basic training here, which is why, as I keep saying, it is such a tragedy that you only began to study rationality after already forming your theories of akrasia. So either you'll read more books on rationality and learn those basics and rethink those theories, or you'll stay stuck.

Comment author: pjeby 20 May 2009 04:58:04PM 1 point [-]

Um... PJ, this is just what psychoanalysts said... and kept on saying after around a thousand studies showed that psychoanalysis had no effect statistically distinguishable from just talking to a random intelligent caring listener.

Rounding to the nearest cliche. I didn't say my methods would help those other people, or that some ONE method would. I said that given a person Y there would be SOME method X. This is not at all the same thing as what you're talking about.

You do not get to just flush controlled experiments down the toilet by hoping, without actually pointing to any countering studies, that someone could have done something differently that would have produced the effect you want the study to produce but that it didn't produce.

What I've said is that if you have a standard training method that moves 50% of people from low to high on some criterion, there is an extremely high probability that the other 50% needed something different in their training. I'm puzzled how that is even remotely a controversial statement.

Comment deleted 21 May 2009 01:03:06AM [-]
Comment deleted 20 May 2009 05:23:04AM *  [-]
Comment author: pjeby 21 May 2009 03:14:59AM 2 points [-]

But I wonder, have you observed that there are some people who naturally tend to be more interested in getting involved actively in personal development efforts of the kind you support?

Yes and no. What I've observed is that most everybody wants something out of life, and if they're not getting it, then sooner or later their path leads to them trying to develop themselves, or causing themselves to accidentally get some personal development as a side effect of whatever their real goal is.

The people who set out for personal development for its own sake -- whether because they think being better is awesome or because they hate who they currently are -- are indeed a minority.

A not-insignificant-subset of my clientele are entrepreneurs and creative types who come to me because they're putting off starting their business, writing their book, or doing some other important-to-them project. And a significant number of them cease to be my customers the moment they've got the immediate problem taken care of.

So, it's not that people aren't generally motivated to improve themselves, so much as they're not motivated to make general improvements; they are after specific improvements that are often highly context-specific.

Comment deleted 20 May 2009 04:48:43AM *  [-]
Comment author: pjeby 21 May 2009 03:22:22AM 0 points [-]

I reject the previous assertion that differences between individuals are predominantly software rather than hardware.

I think we may agree more than you think. I agree that individuals are different in terms of whatever dial settings they may have when they show up at my door. I disagree that those initial dial settings are welded in place and not changeable.

"Hardware" and "software" are squishy terms when it comes to brains that can not only learn, but literally grow. And ISTM that most homeostatic systems in the body can be trained to have a different "setting" than they come from the factory with.

Comment author: PhilGoetz 19 May 2009 10:35:17PM 1 point [-]

I would rather see you narrow your claims to something reasonable

What claims do you mean?

The gist of your top-level comment here is that your techniques work for everyone; and if they don't work for someone, it's that person's fault.

Comment author: pjeby 20 May 2009 12:20:21AM *  3 points [-]

The gist of your top-level comment here is that your techniques work for everyone; and if they don't work for someone, it's that person's fault.

Here's the problem: when someone argues that some techniques might not work for some people, their objective is not merely to achieve epistemic accuracy.

Instead, the real point of arguing such a thing is a form of self-handicapping. "Bruce" is saying, "not everything works for everyone... therefore, what you have might not work for me... therefore, I don't have to risk trying and failing."

In other words, the point of saying that not every technique works for everyone is to apply the Fallacy of Grey: not everything works for everybody, therefore all techniques are alike, therefore you cannot compare my performance to anyone else, because maybe your technique just won't work for me. Therefore, I am safe from your judgment.

This is a fully general argument against trying ANY technique, for ANY purpose. It has ZERO to do with who came up with the technique or who's suggesting it; it's just a Litany Against Fear... of failure.

As a rationalist and empiricist, I want to admit the possibility that I could be wrong. However, as an instrumentalist, instructor, and helper-of-people, I'm going to say that, if you allow your logic to excuse your losing, you fail logic, you fail rationality, and you fail life.

So no, I won't be "reasonable", because that would be a failure of rationality. I do not claim that any technique X will always work for all persons; I merely claim that, given a person Y, there is always some technique X that will produce a behavior change.

The point is not to argue that a particular value of X may not work with a particular value of Y, the point is to find X.

(And the search space for X, seen from the "inside view", is about two orders of magnitude smaller than it appears to be from the "outside view".)

Comment author: loqi 20 May 2009 04:03:58AM 4 points [-]

Instead, the real point of arguing such a thing is a form of self-handicapping. "Bruce" is saying, "not everything works for everyone... therefore, what you have might not work for me... therefore, I don't have to risk trying and failing."

I'm pretty surprised to see you make this type of argument. Are you really so sure that you have that precise of an understanding of the motives behind everyone who has brought this up? You seem oblivious to the predictable consequences of acting so unreasonably confident in your own theories. Your style alone provokes skepticism, however unwarranted or irrational it may be. Seeing you write this entire line of criticism off as "they're just Brucing" makes me wonder just how much your brand of "instrumental" rationality interferes with your perception of reality.

Comment author: Eliezer_Yudkowsky 20 May 2009 08:46:58AM 10 points [-]

Seconded.

Here's the problem: when someone argues that some techniques might not work for some people, their objective is not merely to achieve epistemic accuracy. Instead, the real point of arguing such a thing is a form of self-handicapping.

Because of course it is impossible a priori that any technique works for one person but not another. Furthermore, it is impossible for anyone to arrive at this conclusion by an honest mistake. They all have impure motives; furthermore they all have the same particular impure motive; furthermore P. J. Eby knows this by virtue of his vast case experience, in which he has encountered many people making this assertion, and deduced the same impure motive every time.

To quote Karl Popper:

The Freudian analysts emphasized that their theories were constantly verified by their "clinical observations." As for Adler, I was much impressed by a personal experience. Once, in 1919, I reported to him a case which to me did not seem particularly Adlerian, but which he found no difficulty in analyzing in terms of his theory of inferiority feelings, although he had not even seen the child. Slightly shocked, I asked him how he could be so sure. "Because of my thousandfold experience," he replied; whereupon I could not help saying: "And with this new case, I suppose, your experience has become thousand-and-one-fold."

I'll say it again. PJ, you need to learn the basics of rationality - in this you are an apprentice and you are making apprentice mistakes. You will either accept this or learn the basics, or not. That's what you would tell a client, I expect, if they were making mistakes this basic according to your understanding of akrasia.

Comment author: Emile 21 May 2009 07:15:43PM 1 point [-]

Heh, that Adler anecdote reminds me of a guy I know who tends to believe in conspiracy theories, and who was backing up his belief that the US government is behind 9-11 by saying how evil the US government tends to be. Of course, 9-11 will most likely serve as future evidence of how evil the US government is.

(Not that I can tell whether that's what's going on here)

Comment author: pjeby 20 May 2009 04:29:28AM -1 points [-]

Are you really so sure that you have that precise of an understanding of the motives behind everyone who has brought this up?

What makes you think I'm writing to the motives of specific people? If I were, I'd have named names (as I named Eliezer).

In the post you were quoting, I was speaking in the abstract, about a particular fallacy, not attributing that fallacy to any particular persons.

So if you don't think what I said applies to you, why are you inquiring about it?

(Note: reviewing the comment in question, I see that I might not have adequately qualified "someone ... who argues" -- I meant, someone who argues insistently, not someone who merely "argues" in the sense of, "puts forth reasoning". I can see how that might have been confusing.)

You seem oblivious to the predictable consequences of acting so unreasonably confident in your own theories.

No, I'm well aware of those consequences. The natural consequence of confidently stating ANY opinion is to have some people agree and some disagree, with increased emotional response by both groups, compared to a less-confident statement. Happens here all the time. Doesn't have anything to do with the content, just the confidence.

Seeing you write this entire line of criticism off as "they're just Brucing" makes me wonder just how much your brand of "instrumental" rationality interferes with your perception of reality.

I wrote what I wrote because some of the people here who are Brucing via "epistemic" arguments will see themselves in my words, and maybe learn something.

But if I water down my words to avoid offense to those who are not Brucing (or who are, but don't want to think about it) I lessen the clarity of my communication to precisely the group of people I can help by saying something in the first place.

Comment deleted 21 May 2009 01:05:26AM [-]
Comment author: pjeby 21 May 2009 02:41:44AM 1 point [-]

You can be assured that 'Bruce' will take blatant fallacies or false claims as an excuse to ignore you

And if there aren't any, he'll be sure to invent them. ;-)

Perhaps they may respond better to a more consistently rational approach.

Hehehehe. Sure, because subconscious minds are so very rational. Right.

Conscious minds are reasonable, and occasionally rational... but they aren't, as a general rule, in charge of anything important in a person's behavior. (Although they do love to take credit for everything, anyway.)

Comment author: Nick_Tarleton 21 May 2009 04:25:06AM 0 points [-]

And if there aren't any, he'll be sure to invent them. ;-)

No reason to make his job easier.

Hehehehe. Sure, because subconscious minds are so very rational. Right.

No, but personally, mine is definitely sufficiently capable of noticing minor logical flaws to use them to irrationally dismiss uncomfortable arguments. This may be rare, but it happens.

Comment author: matt 19 May 2009 08:09:18AM *  3 points [-]

it's that ANY objection that stops you from actually trying something useful, means you fail. You lose. You are not being a smart, rational skeptic, you're being a dumbass loser.

So, you still need to know what's likely to be useful. You can waste a lot of time trying stuff that just isn't going to work.

(And, just in case it wasn't clear - I am a long (long long) way from the belief that Eliezer is "a dumbass loser" (which you don't quite say, but it's a confusion I'd like to avoid).)

Comment author: JamesCole 19 May 2009 08:29:50AM 2 points [-]

I'd also add:

  • there's heaps of stuff that's 'useful'. what matters is how useful it is - especially in relation to things that might be more useful. we all have limited time and (other) resources. it's a cost/benefit ratio. the good is the enemy of the great, and all that.

  • often it's unclear how useful something really is, you have to take this into account when you judge whether it's worth your while. and you also have to make a judgement about whether it's even worth your while to try evaluating it... coz there's always heaps and heaps of options and you can't spend your time evaluating them all.

Comment author: pjeby 19 May 2009 05:23:11PM 1 point [-]

You can waste a lot of time trying stuff that just isn't going to work.

Either you have something better to do with your time or you don't.

If you don't have something better, then it's not a waste of time.

If you do have something better to do, but you're spending your time bitching about it instead of working on it, then trying even ludicrous things is still a better use of your time.

IMO, the real waste of time is when people spend all their time making up explanations to excuse their self-created limitations.

Comment author: hrishimittal 19 May 2009 12:35:18PM 3 points [-]

If the master sat there listening to people's inane theories about how they need to punch differently than everybody else, or their insistence that they really need to understand a complete theory of combat, complete with statistical validation against a control group, before they can even raise a single fist in practice, that master would have failed their students AND their Art.

Even so, as a student, I do want the master to understand a complete theory of combat, complete with statistical validation against a control group.

What is your theory o Master?

Comment author: pjeby 19 May 2009 05:41:20PM 0 points [-]

Even so, as a student, I do want the master to understand a complete theory of combat, complete with statistical validation against a control group.

Understanding something doesn't necessarily mean you can explain it. And explaining something doesn't necessarily mean anyone can understand it.

Can you explain how to ride a bicycle? Can you learn to ride a bicycle using only an explanation?

The theory of bicycle riding is not the practice of how to ride a bicycle.

What is your theory o Master?

Someone else's understanding is not a substitute for your experience. That's my only "theory", and I find it works pretty well in "practice". ;-)

Comment deleted 20 May 2009 09:05:42AM [-]
Comment author: pjeby 20 May 2009 05:15:49PM *  0 points [-]

Can you explain how to ride a bicycle? Yes. Can you learn to ride a bicycle using only an explanation? Yes.

By only an explanation, I mean without practice, and without ever having seen someone ride one.

And by "explain how to ride a bicycle", I mean, "provide an explanation that would allow someone to learn to ride, without any other information or practice."

Oh, and by the way, you only get to communicate one way in your explanation or being the explainee. No questions, no feedback, no correcting mistakes.

I thought these things would've been clear in context, since we were contrasting the teaching of martial arts (live feedback and practice) with the teaching of self-help (in one-way textual form).

People expect to be able to learn to do a self-help technique in a single trial from a one-way explanation, perhaps because our brains are biased to assume they can already do anything a brain "ought to" be able to do "naturally".

Comment author: AdeleneDawner 22 May 2009 01:45:59PM 2 points [-]

Wow, I came late to this party.

One takeaway here is, don't reduce your search space to zero if you can help it. If that means that you have to try things without substantial evidence that they'll work, well, it's that or lose, and we're not supposed to lose.

I can think of a few situations where it'd make sense to reduce your search space to zero pending more data, though. The general rule for that seems to be that if you do allow that to happen, whatever reason you have for allowing that to happen is more important to you than the goal you're giving up by not looking for solutions. In situations where you're choosing not to look for solutions to avoid danger, as an example, that makes sense, or if trying the solutions would mean taking resources away from other projects that were also important.

Comment author: michaelsullivan 19 May 2009 10:36:56PM 2 points [-]

On your reaction to "a way to reject the placebo effect", it's important to distinguish what we are trying to do. If all I care about is fixing a given problem for myself, I don't care whether I solve it by placebo effect or by a repeatable hack.

If I care about figuring out how my brain works, then I will need a way to reject or identify the placebo effect.

Comment author: billswift 20 May 2009 12:37:39AM 2 points [-]

You also need to avoid placebo effects if you want the hack to be repeatable (if you run into a similar problem again), generalizable (to work on a wider class of problems), or reliable.

Comment author: pjeby 20 May 2009 12:39:33AM 0 points [-]

If all I care about is fixing a given problem for myself, I don't care whether I solve it by placebo effect or by a repeatable hack.

Actually, it is important to separate certain kinds of placebo effects. The reason I use somatic marker testing in my work is to replace vague "I think I feel better"'s with "Ah! I'm responding differently to that stimulus now"'s.

Technically, "I think I feel better" isn't really a placebo effect; it's just vagueness and confusion. The "real" placebo effect is just acting as if a certain premise were true (e.g. "this pill will make me better").

In that sense, affirmations, LoA, and hypnosis are explicit applications of the same principle, in that they attempt to set up the relevant expectation(s) directly.

Similarly, Eliezer's "count to 10 and get up" trick is also a "placebo effect", in that it operates by setting up the expectation that, "after I count to 10, I'm going to get up".

Comment deleted 21 May 2009 01:52:43AM [-]
Comment author: pjeby 21 May 2009 02:17:44AM 0 points [-]

An fMRI will tell you something different.

Really? There's a study where they compared those three things? And they controlled for whether the participants were actually any good at producing results with affirmations or LoA? If so, I'd love to read it.

No it isn't.

How do you figure that?

Comment author: SoullessAutomaton 19 May 2009 10:40:48PM 1 point [-]

There's also the question of to what extent the placebo effect is actually meaningful when "causing effects in the mind" is the goal.

Comment author: stcredzero 19 May 2009 05:29:08PM 2 points [-]

I am wondering, what are the good reasons for a rationalist to lose?

Comment author: steven0461 19 May 2009 05:48:56PM *  4 points [-]
  • bad luck
  • if it's impossible to win (in that case, just lose less; a semantic difference)
  • if "winning" is defined as something other than achieving what you truly value

That's all of them, I think.

ETA: more in the context of this post, a good reason to lose at some subgoal is if winning at the subgoal can be done only at the cost of losing too much elsewhere.

Comment author: billswift 20 May 2009 12:47:23AM 1 point [-]

Another is failure of knowledge. It's possible simply not to know something you need to succeed, at the time you need it. No one can know everything they might possibly need to. It is not irrational, if you did not know that you would need to know beforehand.

Comment author: Vladimir_Nesov 19 May 2009 05:58:10PM *  -1 points [-]

I exclude bad luck from this list, since winning might as well be defined over counterfactual worlds. If you lose in your real world, you can still figure out how well you'd do in the counterfactuals.

Comment author: Alicorn 19 May 2009 05:34:57PM 2 points [-]

Well-chosen risks turning out badly?

Comment author: bentarm 20 May 2009 01:39:11AM 0 points [-]

I'll give you odds of 2:1 against that this coin will come up heads...

Comment author: MendelSchmiedekamp 19 May 2009 05:10:09PM *  2 points [-]

The approach laid out in this post is likely to be effective if your predominant goal is to find a collection of better-performing akrasia and willpower hacks.

If, however, finding such hacks is only a possible intermediate goal, then different conclusions can be reached. This is even more telling if improved willpower and akrasia resistance is your intermediate goal - regardless of whether you choose hacks or some other method for realizing it.

Another bad reason for rationalists to lose is to try to win every contest placed in front of them. Choosing your battles is the same as choosing your strategies, just at a higher scale.

Comment author: Vladimir_Nesov 19 May 2009 09:12:12AM *  5 points [-]

When you spend time trying out 1000 popular hacks that do you no good, you lose. You lose all the time and energy invested in the enterprise, for which you could have found a better use.

How do you know anything works, before even thinking about what in particular to try out? How much thought, and how much work is it reasonable to use for investigating a possibility? Intuition, and evidence. Self-help folk notoriously don't give evidence for efficacy of their procedures, which in itself looks like evidence of absence of this efficacy, a reason to believe that you'll only waste time going through the motions. My intuition agrees.

A deep theory is both a tool for constructing unusually powerful techniques, and a way to signal a nontrivial probability of viability of the techniques even prior to experimental testing.

Comment author: pjeby 19 May 2009 05:55:27PM 4 points [-]

Self-help folk notoriously don't give evidence for efficacy of their procedures

Anecdotal evidence is still evidence.

Note that one of EY's rationality principles is that if you apply arguments selectively, then the smarter you get, the stupider you become.

So, the reason I am referring to this cross-pollination of epistemic standards to an instrumental field as being "dumbass loser" thinking, is because as Richard Bach once put it, "if you argue for your limitations, then sure enough, you get to keep them."

If you require that the "useful" first be "true", then you will never be the one who actually changes anything. At best, you can only be the person who does an experiment to find the "true" in the already-useful... which will already have been adopted by those who were looking for "useful" first.

Comment author: Annoyance 19 May 2009 02:02:49PM 0 points [-]

It IS important to note individual variation. If someone has a fever that's easily cured by a specific drug, but they tell you that they have a rare, fatal allergy to that medication, you don't give the drug to them anyway on the grounds that it's "unlikely" it'll kill them.

Similarly, if a particular drug is known not to have the 'normal' effect in a patient, you don't keep giving it to them in hopes that their bodies will suddenly begin acting differently.

The key is to distinguish between genuine feedback of failure, and rationalization. THIS POINT IS NOT ADDRESSED ENOUGH HERE. There are simple and effective means of identifying the difference between rationality and rationalization, but they are not discussed, they are not applied, and frankly they don't even seem to be known here at LW.

Comment author: zaph 19 May 2009 02:12:33PM 4 points [-]

Perhaps you could write an article discussing the ways the differences between rationality and rationalization can be identified? I for one would find it useful. I find myself using rationalizations that mask themselves as rationality (often too late), and it would help me to do that less.

Comment author: conchis 19 May 2009 02:12:12PM *  3 points [-]

There are simple and effective means of identifying the difference between rationality and rationalization, but they are not discussed, they are not applied, and frankly they don't even seem to be known here at LW.

So enlighten us (please).

EDIT: For the avoidance of doubt, this is not intended as sarcasm.

Comment author: PhilGoetz 19 May 2009 03:47:50AM 1 point [-]

The upvotes / comment ratio here is remarkably high. What does that mean?

Comment author: SilasBarta 19 May 2009 04:56:57PM 4 points [-]

Well, it looks like I'm an extreme outlier on this one, because I actually voted it down because I thought it got a lot wrong, and for bad reasons.

First of all, despite criticizing EY for "needing" things that would merely be supercool, matt lists a large number of things that would also be merely supercool: it just doesn't seem like you need all of those chance values either.

Second, matt seemed to miss why EY was asking for all of that information: that presenting a "neato trick" that happens to work, provides very little information as to why it works, and when it should be used, etc. EY had explained that he personally went through such an experience and described what is lacking when you don't provide the information he asked for.

In short, EY provided very good reasons why he should be skeptical of just trying every neato trick, matt said very little that was responsive to his points.

Comment author: matt 19 May 2009 10:15:36PM 2 points [-]

despite criticizing EY for "needing" things that would merely be supercool, matt lists a large number of things that would also be merely supercool

Yah, good point - those are meant to be discussion points, but that's not really very clear as written. I don't mean to imply that we need everything in the lists, but to characterize the sort of thing we should be looking for.

Second, matt seemed to miss why EY was asking for all of that information

No, I don't think that's right. Eliezer is presenting as needful lots of stuff that he's just not going to get. That seems to be leading him not to try anything until he finds something that passes through his very tight filter. I'm claiming that the relevant filter should be built on expected utility, and that there is pretty good information available (most of the stuff in the lists can at least be estimated with little time invested) that would lead him to try more hacks than the none likely to pass his filter.

EY provided very good reasons why he should be skeptical of just trying every neato trick

I'm very much not suggesting that you should try "every neato trick". I am suggesting that high expected utility is a better filter than robust scientific research. If you have robust research available you should use it. When you don't, have a look through my lists and see whether it's worth trying something anyway. You might manage a win.
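The expected-utility filter matt describes can be sketched in a few lines. All the numbers below are made-up placeholders — the point is the shape of the comparison, not the values:

```python
# Hedged sketch of an expected-utility filter for trying a brain hack.
# Every parameter value here is an illustrative assumption.

def expected_utility_of_trial(p_works: float, benefit: float,
                              hours_cost: float, value_per_hour: float) -> float:
    """Net expected utility of trying a hack, in arbitrary 'value' units."""
    return p_works * benefit - hours_cost * value_per_hour

# Even a hack with only a 10% chance of working can be worth trying
# when the payoff is large relative to the time invested.
ev = expected_utility_of_trial(p_works=0.1, benefit=100.0,
                               hours_cost=2.0, value_per_hour=1.0)
print(ev)  # 8.0 — positive, so the trial beats not trying
```

No robust research is required for this calculation — only rough estimates of the probability, the payoff, and the cost of a trial.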

Comment author: Alicorn 19 May 2009 04:19:10AM 4 points [-]

Maybe it means the post was upvoted for agreement, and people don't have much to add, and don't want to just say "yay! good post!"?

Comment author: MichaelBishop 19 May 2009 05:07:09PM 1 point [-]

Could there be a connection to the recent slowing of the rate of new posts to LW?

Comment author: haig 20 May 2009 07:01:35AM 0 points [-]

Shouldn't this be in the domain of psychological research? The positive psychology movement seems to have a large momentum and many young researchers are pursuing a lot of lines of questioning in these areas. If you really want rigorous, empirically verified, general purpose theory, that seems to be the best bet.