Reply to: Practical Advice Backed By Deep Theories

Inspired by what looks like a very damaging reticence to embrace and share brain hacks that might work for only some of us and are not backed by Deep Theories. This is in support of tinkering with brain hacks and self-experimentation where deep science and large trials are not available.

Eliezer has suggested that, before he will try a new anti-akrasia brain hack:

[…] the advice I need is from someone who reads up on a whole lot of experimental psychology dealing with willpower, mental conflicts, ego depletion, preference reversals, hyperbolic discounting, the breakdown of the self, picoeconomics, etcetera, and who, in the process of overcoming their own akrasia, manages to understand what they did in truly general terms - thanks to experiments that give them a vocabulary of cognitive phenomena that actually exist, as opposed to phenomena they just made up.  And moreover, someone who can explain what they did to someone else, thanks again to the experimental and theoretical vocabulary that lets them point to replicable experiments that ground the ideas in very concrete results, or mathematically clear ideas.

This doesn't look to me like an expected utility calculation, and I think it should. It looks like an attempt to justify why he can't be expected to win yet. It just may be deeply wrongheaded.

I submit that we don't "need" (emphasis in original) this stuff, it'd just be super cool if we could get it. We don't need to know that the next brain hack we try will work, and we don't need to know that it's general enough that it'll work for anyone who tries it; we just need the expected utility of a trial to be higher than that of the other things we could be spending that time on.

So… this isn't other-optimizing, it's a discussion of how to make decisions under uncertainty. What do all of us need to make a rational decision about which brain hacks to try?

  • We need a goal: Eliezer has suggested "I want to hear how I can overcome akrasia - how I can have more willpower, or get more done with less mental pain". I'd fold cost in with something like "to reduce the personal costs of akrasia by more than the investment in trying and implementing brain hacks against it plus the expected profit on other activities I could undertake with that time".
  • We need some likelihood estimates:
    • Chance of a random brain hack working on first trial: ?, second trial: ?, third: ?
    • Chance of a random brain hack working on subsequent trials (after the third - the noise of mood, wakefulness, etc. is large, so subsequent trials surely have non-zero chance of working, but that chance will probably diminish): →0
    • Chance of a popular brain hack working on first (second, third) trial: ? (GTD is lauded by many many people; your brother-in-law's homebrew brain hack is less well tried)
    • Chance that a brain hack that would work in the first three trials would seem deeply compelling on first being exposed to it: ?
      (can these books be judged by their covers? how does this chance vary with the type of exposure? what would you need to do to understand enough about a hack that would work to increase its chance of seeming deeply compelling on first exposure?)
    • Chance that a brain hack that would not work in the first three trials would seem deeply compelling on first being exposed to it: ? (false positives)
    • Chance of a brain hack recommended by someone in your circle working on first (second, third) trial: ?
    • Chance that someone else will read up "on a whole lot of experimental psychology dealing with willpower, mental conflicts, ego depletion, preference reversals, hyperbolic discounting, the breakdown of the self, picoeconomics, etcetera, and who, in the process of overcoming their own akrasia, manages to understand what they did in truly general terms - thanks to experiments that give them a vocabulary of cognitive phenomena that actually exist, as opposed to phenomena they just made up.  And moreover, someone who can explain what they did to someone else, thanks again to the experimental and theoretical vocabulary that lets them point to replicable experiments that ground the ideas in very concrete results, or mathematically clear ideas", all soon: ? (pretty small?)
    • What else do we need to know?
  • We need some time/cost estimates (these will vary greatly by proposed brain hack):
    • Time required to stage a personal experiment on the hack: ?
    • Time to review and understand the hack in sufficient detail to estimate the time required to stage a personal experiment?
    • What else do we need?
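
To make the shape of that calculation concrete, here is a minimal sketch in Python. Every number in it is an invented placeholder standing in for the "?" entries above, not a measured value; the point is only that once estimates are pencilled in, the decision reduces to a sign check on expected gain.

```python
# Toy expected-utility check for "should I trial this brain hack?"
# All numbers are made-up placeholders for the "?" entries above.

p_work_by_trial = [0.10, 0.05, 0.03]  # chance the hack works on trial 1, 2, 3
review_cost = 1.0         # hours to understand the hack well enough to test it
trial_cost = 2.0          # hours to stage one personal experiment
value_if_works = 200.0    # hours-equivalent saved if the hack sticks
alt_value_per_hour = 1.0  # value of the best alternative use of an hour

def expected_gain(p_by_trial, review, per_trial, win, alt_rate):
    """EV of trialling a hack, stopping early on success, net of the
    alternative use of the time spent."""
    ev = -review * alt_rate
    p_not_yet = 1.0  # probability we are still trying at this point
    for p in p_by_trial:
        ev -= p_not_yet * per_trial * alt_rate  # pay for this trial only if reached
        ev += p_not_yet * p * win               # collect the win if it works now
        p_not_yet *= 1.0 - p
    return ev

print(expected_gain(p_work_by_trial, review_cost, trial_cost,
                    value_if_works, alt_value_per_hour))
# ~27.6 with these placeholders: positive, so the trial beats the alternative.
```

Nothing in this requires knowing that the hack will work; at these (made-up) rates, even a 10% first-trial chance comfortably pays for a cheap trial.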

… and, what don't we need?

  • A way to reject the placebo effect - if it wins, use it. If it wins for you but wouldn't win for someone else, then they have a problem. We may choose to spend some effort helping others benefit from this hack, but that seems to be a different task - it's irrelevant to our goal.


How should we decide how much time to spend gathering data and generating estimates on matters such as this? How much is Eliezer setting himself up to lose, and how much am I missing the point?

83 comments

It might be worth separating the claim "Eliezer is wrong about what changes he, personally, should try" from the claim

"It is generally good to try many plausible changes, because:

  1. Some portion will work;
  2. Trying the number of approaches it takes to find an improvement is often less expensive than being stuck in the wrong local optimum;
  3. Many of us humans tend to keep on doing the same old thing because it's easy, comfortable, safe-feeling, or automatic, even when sticking with our routines is not the high-expected-value thing to do. We can benefit from adopting heuristics of action and experimentation to check such tendencies."

The second claim seems fairly clearly right, at least for some of us. (People may vary in how easily they can try on new approaches, and on what portion of handed-down approaches work for them. OTOH, the ability to easily try new approaches is itself learnable, at least for many of us.) The first claim is considerably less clear, particularly since Eliezer has much data on himself that we do not, and since after trying many hacks for a given not-lightcone-destroying problem without any of the hacks working, expected value calculations can in fact point to directing one’s efforts elsewhere.

Maybe we could abandon Eliezer’s specific case, and try to get into the details of: (a) how to benefit from trying new approaches; and (b) what rules of thumb for what to try, and what to leave alone, yield high expected life-success?
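
As a toy illustration of point 2 in the quoted claim: under an assumed geometric model, where each independently tried approach succeeds with probability p, the expected search cost is 1/p trials, which can be compared directly against the cost of staying stuck. The numbers below are illustrative assumptions, not data.

```python
# Toy model for "trying until something works" vs. staying in a local optimum.
# Assumes independent tries, each succeeding with probability p (geometric model).

def expected_trials(p):
    """Expected number of tries until the first success."""
    return 1.0 / p

def expected_search_hours(p, hours_per_trial):
    return expected_trials(p) * hours_per_trial

search = expected_search_hours(0.10, 2.0)  # 10% per-hack success, 2h/trial -> 20h
stuck_per_year = 100.0                     # assumed hours/year lost to the bad routine

print(search, stuck_per_year)
# Under these assumptions, ~20 hours of experimenting buys back ~100 hours/year,
# which is the sense in which the search is "less expensive than being stuck".
```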

tut · 1 point · 15y
One more reason for the list is that doing new stuff (or doing stuff in new ways, but I repeat myself) promotes neurogenesis.

Awesomely summarized, so much so that I don't know what else to say, except to perhaps offer this complementary anecdote.

Yesterday, I was giving a workshop on what I jokingly call "The Jedi Mind Trick" -- really the set of principles that makes monoidealism techniques (such as "count to 10 and do it") either work or not work. Towards the end, a woman in the group was having some difficulty applying it, and I offered to walk through an example with her.

She picked the task of organizing some files, and I explained to her what to say and picture in her mind, and asked, "What comes up in your mind right now?"

And she said, "well, I'm on a phone call, I can't organize them right now." And I said "Right, that's standard objection #1 - "I'm doing something else". So now do it again..." [I repeated the instructions]. "What comes to mind?"

She says, "Well, it's that it'll be time to do it later".

"Standard objection #2: it's not time right now, or I don't have enough time. Great. We're moving right along. Do it again. What comes to mind?"

"Well, now I'm starting to see more of what I'd actuall... (read more)

He's tried, or he wouldn't have had the material to make those posts.

I appreciate your comments, and they're a good counterpoint to EY's point of view. But the fact that you need to make an assumption in order to be an effective teacher, because it's true most of the time, doesn't mean it's always true. You are making an expected-value calculation as a teacher, perhaps subconsciously:

  • If I accept that my approach doesn't work well with some people, and work with those people to try to find an approach that works for them, I will be able to effectively coach 50 people per year (or whatever).
  • If I dismiss the people whom my approach doesn't work well for as losers, and focus on the people whom my approach works well for, I'll be able to effectively coach 500 people per year.

You are also taking EY's claim that not every technique works well for every person, and caricaturing it as the claim that there is a 1-1 correspondence between people and techniques that work for them. He never said that.

The specific comments Eliezer has made, about people erroneously assuming that what worked for them should work for other people, were taken from real life and were, I think, also true and ...

You are making an expected-value calculation as a teacher, perhaps subconsciously

No. I'm making the assumption that, until someone has actually tried something, they aren't in a position to say whether or not it works. Once someone has actually tried something, and it doesn't work, then I find something else for them to do. I don't give up and say, "oh, well I guess that doesn't work for you, then."

When I do a one-on-one consult, I don't charge someone until and unless they get the result we agree on as a "success" for that consultation. If I can't get the result, I don't get paid, and I'm out the time.

Do I make sure that the definition of "success" is reasonably in scope for what I can accomplish in one session? Sure. But I don't perform any sort of filtering (other than that which may occur by selection or availability bias, e.g. having both motivation and funds) to determine who I work with.

You are also taking EY's claim that not every technique works well for every person, and caricaturing it as the claim that there is a 1-1 correspondence between people and techniques that work for them. He never said that.

I didn't say he did, or tha...

Vladimir_Nesov · 8 points · 15y
This is a wrong assumption. The correctness of a decision to even try something directly depends on how certain you are it'll work. Don't play lotteries, don't hunt bigfoot, but commute to work risking death in a traffic accident.
pjeby · 1 point · 15y
...weighed against the expected cost. And for the kind of things we're talking about here, a vast number of things can be tried at relatively small cost compared to one's ultimate desired outcome, since the end result of a search is something you can then go on to use for the rest of your life.
Vladimir_Golovin · 4 points · 15y
Precisely. There are self-help techniques that can be tried in minutes, even in seconds. I don't see a single reason for not allocating a fraction of one's procrastination time to trying mind hacks or anything else that might help against akrasia. Say, if my procrastination time is 3 hours per day, I could allocate 10% of that -- 18 minutes. How long does it take to speak a sentence "I will become a syndicated cartoonist"? 10 seconds at maximum -- given 18 minutes, that's 108 repetitions! But what if it doesn't work? Oh noes, I could kill 108 orcs during that time and perhaps get some green drops!
Vladimir_Nesov · 0 points · 15y
Vladimir, it doesn't matter that a lottery ticket costs only 1 cent. Doesn't matter at all. It only matters that you don't expect to win by buying it. Or maybe you do expect to win from a deal by investing 1 cent, or $10000, in which case by all means do so.
Vladimir_Golovin · 0 points · 15y
If I were to choose between throwing one cent away and buying a lottery ticket on it, I'd buy the ticket. (I don't consider here additional expenses such as the calories I need to spend on contracting my muscles to reach the ticket stand etc. I assume that both acts -- throwing away and buying the ticket -- have zero additional costs, and the lottery has a non-zero chance of winning.)
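
The disagreement here reduces to a sign check on expected value. A minimal sketch, with invented numbers, showing that both positions are internally consistent: a priced ticket has negative expected value, while the hypothetical zero-cost ticket with a nonzero win chance has (trivially) positive expected value.

```python
# Sign check on lottery expected value. All numbers are invented for illustration.

def expected_value(p_win, prize, cost):
    return p_win * prize - cost

print(expected_value(1e-9, 1_000_000, 0.01))  # -0.009: Nesov's case, don't buy
print(expected_value(1e-9, 1_000_000, 0.00))  # +0.001: the free ticket, take it
```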
Vladimir_Nesov · 1 point · 15y
The activity of trying the procrastination tricks must be shown to be at least as good as the procrastination activity, which would be a tremendous achievement, placing these tricks far above their current standing. You are not doing the procrastination-time activity because it's the best thing you could do, that's the whole problem with akrasia. If you find any way of replacing procrastination activity with a better procrastination activity, you are making a step away from procrastination, towards productivity. So, you consider trying anti-procrastination tricks instead of procrastinating an improvement. But the truth of this statement is far from obvious, and it's outright false for at least my kind of procrastination. (I often procrastinate by educating myself, instead of getting things done.)
Vladimir_Golovin · 0 points · 15y
Yep, my example with orcs vs. tricks was a degenerate case -- it breaks down if the procrastination activity has at least some usefulness, which is certainly the case with self-education as a procrastination activity. But this whole area is a fertile ground for self-rationalization. In my own case, it seems more productive to simply deem certain procrastination activities as having zero benefit than to actually try to assess their potential benefits compared to other activities. (BTW, my primary procrastination activity, PC games, is responsible for my knowledge of the English language, which I consider an enormous benefit. Who knew.)
pjeby · 0 points · 15y
IAWYC, but if you want to learn to do it correctly, you'd be better off using fewer repetitions and suggesting something aimed at provoking an immediate response, such as "I'm now drawing a cartoon"... and carefully paying attention to your inner imagery and physical responses, which are the real meat of this family of techniques.
Vladimir_Golovin · 2 points · 15y
PJ, I think that discussing details of particular mindhacks is off-topic for this thread -- let's discuss them here. That was just an example. (As for myself, I use an "I want" format, I don't repeat it anywhere near 108 times, and I do aim at immediate things.)
[anonymous] · 6 points · 15y
That claim does not match the evidence that I have encountered. Consider, for example, responsiveness to hypnosis. Hypnotic responsiveness as measured by the Stanford test is found to differ more between fraternal twins raised together than between identical twins raised apart. It also seems to be related to the size of the rostrum region of the corpus callosum.

I agree that people tend to overestimate their own uniqueness and I know this is something that I do myself. Nevertheless, there is clearly one element of human behavior and motivation that is attributable directly to the brain hardware level and I suggest that there are many more.
pjeby · 0 points · 15y
If you mean the Hilgard scale, ask a few professional hypnotists how useful it actually is. Properly-trained hypnotists don't use a tape-recorded monotone with identical words for every person; they adjust their pace, tone, and verbiage based on observing a person's response in progress, to maximize the response. So unless the Stanford test is something like timing how long a master hypnotist takes to produce some specified hypnotic phenomena, it's probably not very useful.

Professional hypnotists also know that responsiveness is a learned process (see also the concept of "fractionation"), which means it's probably a mistake to treat it as an intrinsic variable for measuring purposes, unless you have a way to control for the amount of learning someone has done. So, as far as this particular variable is concerned, you're observing the wrong evidence.

Personal development is an area where science routinely barks up the wrong tree, because there's a difference between "objective" measurement and maximizing utility. Even if it's a fact that people differ, operating as if that fact were true leads to less utility for everyone who doesn't already believe they're great at something.
[anonymous] · 4 points · 15y
I mean the Stanford Hypnotic Susceptibility Scales, the most useful being SHSS:C. Hilgard played his cards poorly and somehow failed to have the scale named after himself.

I am more interested in the findings of researchers who study the clinical work of professional hypnotists than I am in the opinions of the hypnotists themselves. Like most commonly used psychological metrics, the SHSS:C is far from perfect. Nevertheless, it does manage to correlate strongly with the success of clinical outcomes, which is the best I can expect of it.

Professional scientists studying hypnosis observe that specific training can alter hypnotic responsiveness from low to high in as much as 50% of cases. Many have expressed surprise at just how stable the baseline is over time and observe that subjects trained to respond to hypnosis revert to the baseline over time. Nevertheless, such reversion takes time and Gosgard found (in 2004) that a training effect can remain for as much as four months.

When I began researching hypnosis I was forced to subordinate my preferred belief to what the evidence suggests. When it comes to most aspects of personality and personal psychological profile I much prefer to believe in the power of 'nurture' and my ability to mould my own personality profile to my desires with training. I have become convinced over time that there is a far greater heritability component than I would have liked.

On the positive side, the importance of 'natural talent' in acquiring expert skills is one area where the genetic component tends to be overestimated most of the time. When it comes to acquiring specialised skills, consistent effortful practice makes all the difference and natural talent is almost irrelevant.

There is certainly something to that! I do see the merit in 'operating as if [something that may not necessarily be our best prediction of reality]'. It would be great if there were greater scientific efforts in investigating the most effective personal develop...
pjeby · -2 points · 15y
Indeed. What's particularly important if you're after results, rather than theories, is that just because those other 50% didn't go from low to high, doesn't mean that there wasn't some different form, approach, environment, or method of training that wouldn't have produced the same result! IOW, if the training they tested was 100% identical for each person, then the odds that the other 50% were still trainable is extremely high. (And since most generative (as opposed to therapeutic) self-help techniques implicitly rely on the same brain functions that are used in hypnosis (monoidealistic imagination and ideomotor or ideosensory responses), this means that the same things can be made to work for everyone, provided you can train the basic skill.)

Robert Fritz once wrote something about how if you're 5'3" you're not going to be able to win the NBA dunking contest... and then somebody did just that. It ain't what you've got, it's what you do with what you have got. (Disclaimer: I don't remember the winner's name or even if 5'3" was the actual height.)

It's also rare that any quality we're born with is all bad or all good; what gives with one hand takes away with the other, and vice versa. The catch is to find the way that works for you. Some of my students work better with images, some with sounds, others still with feelings. Some have to write things down, I like to talk things out. These are all really superficial differences, because the steps in the processes are still basically the same.

Also, even though my wife is more "auditory" than I am, and doesn't visualize as well consciously... that doesn't mean she can't. (Over the last few years, she's gradually gotten better at doing processes that involve more visual elements.) (Also, we've actually tried swapping around our usual modes of cognition for a day or two, which was interesting. When she took on my processing stack, we got along better, but when I took on hers, I was really stressed and depressed...
Eliezer Yudkowsky · 9 points · 15y
Um... PJ, this is just what psychoanalysts said... and kept on saying after around a thousand studies showed that psychoanalysis had no effect statistically distinguishable from just talking to a random intelligent caring listener.

You need to read more basic rationality material, along the lines of Robyn Dawes's "Rational Choice in an Uncertain World". There you will find the records of many who engaged in this classic error mode and embarrassed themselves accordingly.

You do not get to just flush controlled experiments down the toilet by hoping, without actually pointing to any countering studies, that someone could have done something differently that would have produced the effect you want the study to produce but that it didn't produce.

You know how there are a lot of self-indulgent bad habits you train your clients to get rid of? This is the sort of thing that master rationalists like Robyn Dawes train people to stop doing. And you are missing a lot of the basic training here, which is why, as I keep saying, it is such a tragedy that you only began to study rationality after already forming your theories of akrasia. So either you'll read more books on rationality and learn those basics and rethink those theories, or you'll stay stuck.
pjeby · 1 point · 15y
Rounding to the nearest cliche.

I didn't say my methods would help those other people, or that some ONE method would. I said that given a person Y there would be SOME method X. This is not at all the same thing as what you're talking about.

What I've said is that if you have a standard training method that moves 50% of people from low to high on some criterion, there is an extremely high probability that the other 50% needed something different in their training. I'm puzzled how that is even remotely a controversial statement.
[anonymous] · -1 point · 15y
It is a conclusion that just doesn't follow.
pjeby · 1 point · 15y
You ever heard of something called the Pygmalion effect? Did the study control for it? By which I mean, did they control for the beliefs of the teachers who were training these subjects, in reference to:

* the trainability and potential of the subjects themselves, and
* the teachability of the subject matter itself?

For example, did they tell the teacher they had a bunch of students with superb hypnotic potential who just needed some encouragement to get going, or did they tell them they were conducting a test, to see who was trainable, or if it was possible to train hypnotic ability at all? These things make a HUGE difference to whether people actually learn.
[anonymous] · 0 points · 15y
This is one area where rational thinking is of real benefit. Because not only is a 'growth mindset' more effective than a 'fixed mindset' when it comes to learning skills, it is also simply far more accurate.

While I was devouring the various theories and findings compiled in The Cambridge Handbook of Expertise and Expert Performance I kept running across one common observation. There is, it seems, one predictor of expert performance in a field that has a significant heritable component. It isn't height or IQ. Although those two are highly heritable they aren't all that great at predicting successful achievement of elite performance. As best as the researchers could decipher, the heritable component of success is more or less the ability to motivate oneself to deliberately practice for four hours a day, seven days a week, for about ten years.

Now, I would be surprised to see you concede the heritability of motivation and I definitely suggest it is an area in which to apply Dweck's growth mindset at full force! You also have a whole bag of tricks and techniques that can be used to enhance just the sort of motivation required.

But I wonder, have you observed that there are some people who naturally tend to be more interested in getting involved actively in personal development efforts of the kind you support? Completely aside from whether they believe in the potential usefulness, there would seem to be many who are simply less likely to care enough to take extreme personal development seriously.
pjeby · 2 points · 15y
Yes and no. What I've observed is that most everybody wants something out of life, and if they're not getting it, then sooner or later their path leads to them trying to develop themselves, or causing themselves to accidentally get some personal development as a side effect of whatever their real goal is. The people who set out for personal development for its own sake -- whether because they think being better is awesome or because they hate who they currently are -- are indeed a minority.

A not-insignificant subset of my clientele are entrepreneurs and creative types who come to me because they're putting off starting their business, writing their book, or doing some other important-to-them project. And a significant number of them cease to be my customers the moment they've got the immediate problem taken care of.

So, it's not that people aren't generally motivated to improve themselves, so much as they're not motivated to make general improvements; they are after specific improvements that are often highly context-specific.
[anonymous] · 0 points · 15y
I would like to affirm the distinction between the overall mindset you wish to encourage and the specific claims that you use while doing so. For example I agree with your claims in this (immediate parent) post and also the gist of your personal development philosophy, while I reject the previous assertion that differences between individuals are predominantly software rather than hardware. (And yes, 50% was presented as a significant finding in favour of training from the baseline.)
pjeby · 0 points · 15y
I think we may agree more than you think. I agree that individuals are different in terms of whatever dial settings they may have when they show up at my door. I disagree that those initial dial settings are welded in place and not changeable. "Hardware" and "software" are squishy terms when it comes to brains that can not only learn, but literally grow. And ISTM that most homeostatic systems in the body can be trained to have a different "setting" than they come from the factory with.
PhilGoetz · 2 points · 15y
The gist of your top-level comment here is that your techniques work for everyone; and if they don't work for someone, it's that person's fault.
pjeby · 3 points · 15y
Here's the problem: when someone argues that some techniques might not work for some people, their objective is not merely to achieve epistemic accuracy. Instead, the real point of arguing such a thing is a form of self-handicapping. "Bruce" is saying, "not everything works for everyone... therefore, what you have might not work for me... therefore, I don't have to risk trying and failing."

In other words, the point of saying that not every technique works for everyone is to apply the Fallacy of Grey: not everything works for everybody, therefore all techniques are alike, therefore you cannot compare my performance to anyone else, because maybe your technique just won't work for me. Therefore, I am safe from your judgment.

This is a fully general argument against trying ANY technique, for ANY purpose. It has ZERO to do with who came up with the technique or who's suggesting it; it's just a Litany Against Fear... of failure.

As a rationalist and empiricist, I want to admit the possibility that I could be wrong. However, as an instrumentalist, instructor, and helper-of-people, I'm going to say that, if you allow your logic to excuse your losing, you fail logic, you fail rationality, and you fail life. So no, I won't be "reasonable", because that would be a failure of rationality.

I do not claim that any technique X will always work for all persons; I merely claim that, given a person Y, there is always some technique X that will produce a behavior change. The point is not to argue that a particular value of X may not work with a particular value of Y, the point is to find X. (And the search space for X, seen from the "inside view", is about two orders of magnitude smaller than it appears to be from the "outside view".)
loqi · 4 points · 15y
I'm pretty surprised to see you make this type of argument. Are you really so sure that you have that precise of an understanding of the motives behind everyone who has brought this up? You seem oblivious to the predictable consequences of acting so unreasonably confident in your own theories. Your style alone provokes skepticism, however unwarranted or irrational it may be. Seeing you write this entire line of criticism off as "they're just Brucing" makes me wonder just how much your brand of "instrumental" rationality interferes with your perception of reality.
Eliezer Yudkowsky · 9 points · 15y
Seconded. Because of course it is impossible a priori that any technique works for one person but not another. Furthermore, it is impossible for anyone to arrive at this conclusion by an honest mistake. They all have impure motives; furthermore they all have the same particular impure motive; furthermore P. J. Eby knows this by virtue of his vast case experience, in which he has encountered many people making this assertion, and deduced the same impure motive every time.

To quote Karl Popper:

I'll say it again. PJ, you need to learn the basics of rationality - in this you are an apprentice and you are making apprentice mistakes. You will either accept this or learn the basics, or not. That's what you would tell a client, I expect, if they were making mistakes this basic according to your understanding of akrasia.
Emile · 1 point · 15y
Heh, that Adler anecdote reminds me of a guy I know who tends to believe in conspiracy theories, and who was backing up his belief that the US government is behind 9-11 by saying how evil the US government tends to be. Of course, 9-11 will most likely serve as future evidence of how evil the US government is. (Not that I can tell whether that's what's going on here)
pjeby · 0 points · 15y
What makes you think I'm writing to the motives of specific people? If I were, I'd have named names (as I named Eliezer). In the post you were quoting, I was speaking in the abstract, about a particular fallacy, not attributing that fallacy to any particular persons. So if you don't think what I said applies to you, why are you inquiring about it?

(Note: reviewing the comment in question, I see that I might not have adequately qualified "someone ... who argues" -- I meant, someone who argues insistently, not someone who merely "argues" in the sense of, "puts forth reasoning". I can see how that might have been confusing.)

No, I'm well aware of those consequences. The natural consequence of confidently stating ANY opinion is to have some people agree and some disagree, with increased emotional response by both groups, compared to a less-confident statement. Happens here all the time. Doesn't have anything to do with the content, just the confidence.

I wrote what I wrote because some of the people here who are Brucing via "epistemic" arguments will see themselves in my words, and maybe learn something. But if I water down my words to avoid offense to those who are not Brucing (or who are, but don't want to think about it) I lessen the clarity of my communication to precisely the group of people I can help by saying something in the first place.
[anonymous] · 2 points · 15y
Perhaps the reverse. By limiting your claims to the important ones, those that are actually factual, you reduce the distraction. You can be assured that 'Bruce' will take blatant fallacies or false claims as an excuse to ignore you. Perhaps they may respond better to a more consistently rational approach.
pjeby · 2 points · 15y
And if there aren't any, he'll be sure to invent them. ;-)

Hehehehe. Sure, because subconscious minds are so very rational. Right. Conscious minds are reasonable, and occasionally rational... but they aren't, as a general rule, in charge of anything important in a person's behavior. (Although they do love to take credit for everything, anyway.)
Nick_Tarleton · -1 point · 15y
No reason to make his job easier.

No, but personally, mine is definitely sufficiently capable of noticing minor logical flaws to use them to irrationally dismiss uncomfortable arguments. This may be rare, but it happens.
pjeby · 4 points · 15y
Actually, my point was that I hadn't made any. Many of the objections that people are making are about things I never actually said. For example, some people insist on arguing with the ideas that:

1. teaching ability varies, and
2. teachers' beliefs make a difference to the success of their students.

And somehow they're twisting these very scientifically supported ideas into me stating some sort of fallacy... and conveniently ignoring the part where I said that "If you're more interested in results than theory, then..."

Of course if you do standardized teaching and standardized testing you'll get varying results from different people. But if you want to maximize the utility that students will get from their training, you'll want to vary how you teach them, instead, according to what produces the best result for that individual. That doesn't mean that you need to teach them different things, it's that you'll need to take a different route to teach them the same thing.

A learning disability is not the same thing as a performance disability, and my essential claim here is that differences in applicability of anti-akrasia and other behavior change techniques are more readily explained as differences in learning ability and predispositions, than in differences in applicability of the specific techniques.

I say this because I used to think that different things worked for different people (after all, so many didn't work for me!), and then I discovered that the problem is that people (like me) think they're following steps that they actually aren't following, and they don't notice the discrepancy because the discrepancies are "off-map" for them. That is, their map of the territory doesn't include one or more of the critical distinctions that make a technique work, like the difference between consciously thinking something, versus observing automatic responses, or the difference between their feelings and their thoughts about their feelings. If you miss one of th...
[anonymous] · 8 points · 15y
On this topic your interpretation of those replying to you here is sometimes not the same as that of those typing the replies, or that of other observers. This includes the distortion of replies to fit the closest matching 'standard objection'. Were a rationalist sensei to 'accept that sort of bullshit' from a pupil then she would have failed them.
Annoyance · 7 points · 15y
Excellent comment. I have only two objections.

First, this statement:

is good on its merits, but I caution everyone to be careful about asserting that some technique or other is "something useful". There are plenty of reasons not to try any random thing that enters into our heads, and even when we're engaged in a blind search, we shouldn't suspend our evaluative functions completely, even though they may be assuming things that blind us to the solution we need. They also keep us from chopping our legs off when we want to deal with a stubbed toe.

My second objection deals with the following:

What grounds are there for assigning EY the status of 'master'? Hopefully in a martial arts dojo there are stringent requirements for the demonstration of skill before someone is put in a teaching position, so that even when students aren't personally capable of verifying that the 'master' has actually mastered techniques that are useful, they can productively hold that expectation. When did EY demonstrate that he's a master, and how did he supposedly do so?
thomblake · 3 points · 15y
There really aren't, though one does need to jump through some hoops. That's part of what I like about this analogy.
Annoyance · 1 point · 15y
A lot of martial arts schools are more about "following the rules" and going through the motions of ritual forms than learning useful stuff. As has been mentioned here before multiple times, many martial artists do very poorly in actual fights, because they've mastered techniques that just aren't very good. They were never designed in light of the goals and strategies that people who really want to win physical combat will use. Against brutally effective and direct techniques, they lose.

Humans like to make rituals and rules for things that have none. This is a profound weakness and vulnerability, because they also tend to lose sight of the distinction between reality and the rules they cause themselves to follow.
MichaelVassar · 2 points · 15y
There are no "things that have no rules". If there were, you couldn't perceive them in the first place in order to make up rules about them.
Annoyance · 0 points · 15y
Read that as "socially-recognized principles as to how something is to be done for things that physics permits in many different ways". Spill the salt, you must throw some over your shoulder. Step on a crack, break your mother's back. Games and rituals. When people forget they're just games, problems arise.
jscn · 0 points · 15y
This tendency can be used for good, though. As long as you're aware of the weakness, why not take advantage of it? Intentional self-priming, anchoring, rituals of all kinds can be repurposed.
Annoyance · -1 point · 15y
Because repetition tends to reinforce things, both positive and negative. You might be able to take advantage of a security weakness in your computer network, but if you leave it open other things will be able to take advantage of it too. It's far better to close the hole and reduce vulnerability, even if it means losing access to short-term convenience.
pjeby · 0 points · 15y
...and most of those reasons are fallacious. The opposite of every Great Truth is another great truth: yes, you need to look before you leap. But he who hesitates is lost. (Or in Richard Bandler's version, which I kind of like better, "He who hesitates... waits... and waits... and waits... and waits...")

I never said he did.
hrishimittal · 5 points · 15y
Even so, as a student, I do want the master to understand a complete theory of combat, complete with statistical validation against a control group. What is your theory, O Master?
pjeby · 1 point · 15y
Understanding something doesn't necessarily mean you can explain it. And explaining something doesn't necessarily mean anyone can understand it. Can you explain how to ride a bicycle? Can you learn to ride a bicycle using only an explanation? The theory of bicycle riding is not the practice of how to ride a bicycle. Someone else's understanding is not a substitute for your experience. That's my only "theory", and I find it works pretty well in "practice". ;-)
[anonymous] · 0 points · 15y
Yes. Yes.
pjeby · 1 point · 15y
By only an explanation, I mean without practice, and without ever having seen someone ride one. And by "explain how to ride a bicycle", I mean, "provide an explanation that would allow someone to learn to ride, without any other information or practice." Oh, and by the way, you only get to communicate one way in your explanation or being the explainee. No questions, no feedback, no correcting mistakes. I thought these things would've been clear in context, since we were contrasting the teaching of martial arts (live feedback and practice) with the teaching of self-help (in one-way textual form). People expect to be able to learn to do a self-help technique in a single trial from a one-way explanation, perhaps because our brains are biased to assume they can already do anything a brain "ought to" be able to do "naturally".
[anonymous] · 1 point · 15y
Do they really expect to do that? Crazy kids.
matt · 4 points · 15y
So, you still need to know what's likely to be useful. You can waste a lot of time trying stuff that just isn't going to work. (And, just in case it wasn't clear - I am a long (long long) way from the belief that Eliezer is "a dumbass loser" (which you don't quite say, but it's a confusion I'd like to avoid).)
pjeby · 2 points · 15y
Either you have something better to do with your time or you don't. If you don't have something better, then it's not a waste of time. If you do have something better to do, but you're spending your time bitching about it instead of working on it, then trying even ludicrous things is still a better use of your time. IMO, the real waste of time is when people spend all their time making up explanations to excuse their self-created limitations.
JamesCole · 2 points · 15y
I'd also add:

* there's heaps of stuff that's 'useful'. what matters is how useful it is - especially in relation to things that might be more useful. we all have limited time and (other) resources. it's a cost/benefit ratio. the good is the enemy of the great, and all that.
* often it's unclear how useful something really is, you have to take this into account when you judge whether it's worth your while. and you also have to make a judgement about whether it's even worth your while to try evaluating it... coz there's always heaps and heaps of options and you can't spend your time evaluating them all.

When you spend time trying out the 1000 popular hacks doing you no good, then you lose. You lose all the time and energy invested in the enterprise, for which you could find a better use.

How do you know anything works, before even thinking about what in particular to try out? How much thought, and how much work is it reasonable to use for investigating a possibility? Intuition, and evidence. Self-help folk notoriously don't give evidence for efficacy of their procedures, which in itself looks like evidence of absence of this efficacy, a reason to believe ...

pjeby · 5 points · 15y
Anecdotal evidence is still evidence.

Note that one of EY's rationality principles is that if you apply arguments selectively, then the smarter you get, the stupider you become. So, the reason I am referring to this cross-pollination of epistemic standards to an instrumental field as being "dumbass loser" thinking, is because as Richard Bach once put it, "if you argue for your limitations, then sure enough, you get to keep them."

If you require that the "useful" first be "true", then you will never be the one who actually changes anything. At best, you can only be the person who does an experiment to find the "true" in the already-useful... which will already have been adopted by those who were looking for "useful" first.

I think that if there were such a straightforward hack as the one EY is looking for, he would know about it already. I just don't really believe that a hack like that exists, based on my admittedly meager readings in experimental psychology. Further, I think that while the idea of a "mind hack" is a cute metaphor, it can be misguided. Computer hackers literally create code that directs processes. We can at best manipulate our outside environment in ways that we hope will affect what is still a very mysterious brain. What EY's looking for would be the result of a ...

I am wondering, what are the good reasons for a rationalist to lose?

steven0461 · 5 points · 15y
* bad luck
* if it's impossible to win (in that case, just lose less; a semantic difference)
* if "winning" is defined as something other than achieving what you truly value

That's all of them, I think.

ETA: more in the context of this post, a good reason to lose at some subgoal is if winning at the subgoal can be done only at the cost of losing too much elsewhere.
billswift · 1 point · 15y
Another is failure of knowledge. It's possible simply not to know something you need to succeed, at the time you need it. No one can know everything they might possibly need to. It is not irrational if you did not know beforehand that you would need to know it.
Vladimir_Nesov · -2 points · 15y
I exclude bad luck from this list, since winning might as well be defined over counterfactual worlds. If you lose in your real world, you can still figure out how well you'd do in the counterfactuals.
Alicorn · 2 points · 15y
Well-chosen risks turning out badly?
bentarm · 0 points · 15y
I'll give you odds of 2:1 against that this coin will come up heads...

Wow, I came late to this party.

One takeaway here is, don't reduce your search space to zero if you can help it. If that means that you have to try things without substantial evidence that they'll work, well, it's that or lose, and we're not supposed to lose.

I can think of a few situations where it'd make sense to reduce your search space to zero pending more data, though. The general rule for that seems to be that if you do allow that to happen, whatever reason you have for allowing that to happen is more important to you than the goal you're giving up by ...

On your reaction to "a way to reject the placebo effect", it's important to distinguish what we are trying to do. If all I care about is fixing a given problem for myself, I don't care whether I solve it by placebo effect or by a repeatable hack.

If I care about figuring out how my brain works, then I will need a way to reject or identify the placebo effect.

billswift · 2 points · 15y
You also need to avoid placebo effects if you want the hack to be repeatable (if you run into a similar problem again), generalizable (to work on a wider class of problems), or reliable.
pjeby · 1 point · 15y
Actually, it is important to separate certain kinds of placebo effects. The reason I use somatic marker testing in my work is to replace vague "I think I feel better"'s with "Ah! I'm responding differently to that stimulus now"'s.

Technically, "I think I feel better" isn't really a placebo effect; it's just vagueness and confusion. The "real" placebo effect is just acting as if a certain premise were true (e.g. "this pill will make me better").

In that sense, affirmations, LoA, and hypnosis are explicit applications of the same principle, in that they attempt to set up the relevant expectation(s) directly. Similarly, Eliezer's "count to 10 and get up" trick is also a "placebo effect", in that it operates by setting up the expectation that, "after I count to 10, I'm going to get up".
[anonymous] · 0 points · 15y
An fMRI will tell you something different.

No it isn't.
pjeby · 1 point · 15y
Really? There's a study where they compared those three things? And they controlled for whether the participants were actually any good at producing results with affirmations or LoA? If so, I'd love to read it.

How do you figure that?
[anonymous] · -3 points · 15y
A study

Two of the four would be sufficient to refute your claim that the three listed are each applications of the same principle as the placebo pill you compared them to. The studies need not be controlled by skill, they may be controlled by the actual measured effectiveness of the outcomes. If you are interested, you may begin your research here.

You arbitrarily redefined what the "real placebo effect" is to your own convenience and then casually applied it to something that is not a placebo effect. Don't make me speak Latin in a Scottish accent.
pjeby · 1 point · 15y
From Wikipedia's "placebo" page:

So, how am I getting this wrong, exactly?
jimrandomh · 0 points · 15y
Taboo the phrase "placebo effect", please. That term was coined to refer to psychological effects intruding on non-psychological studies. When the goal is to achieve a psychological effect, it becomes meaningless or misleading.
pjeby · 4 points · 15y
You should probably read the earlier part of the thread, where I distinguished between what might be called "uncertainty effect" (thinking you're getting better, when you're not) and "expectation effect", where an expectation of success (or failure) actually leads to behavior change. This latter effect is functionally indistinguishable from the standard placebo effect, and is very likely to be the exact same thing.

As you point out, we want expectation effects to occur. Affirmations, LoA, and hypnosis are all examples of methods specifically aimed at creating intentional expectation effects, but any method can of course produce them unintentionally.

The main difference between expectation effect and "placebo classic" is that placebo classic loses its effect when somebody discovers that it's a placebo... well, actually that's still just another expectation effect, since people who take a real drug and think it's a placebo also react to it less.

Everything we know about human beings points to expectation effects being incredibly powerful, but it seems relatively little research is devoted to properly exploiting this. Perhaps it's too useful to be considered high status, or perhaps not "serious enough".
SoullessAutomaton · 1 point · 15y
There's also the question of to what extent the placebo effect is actually meaningful when "causing effects in the mind" is the goal.

The approach laid out in this post is likely to be effective if your predominant goal is to find a collection of better-performing akrasia and willpower hacks.

If, however, finding such hacks is only a possible intermediate goal, then different conclusions can be reached. This is even more telling if improved willpower and akrasia resistance is your intermediate goal - regardless of whether you choose hacks or some other method for realizing it.

Another bad reason for rationalists to lose is to try to win every contest placed in front of them. Choosing your battles is the same as choosing your strategies, just at a higher scale.

The upvotes / comment ratio here is remarkably high. What does that mean?

SilasBarta · 5 points · 15y
Well, it looks like I'm an extreme outlier on this one, because I actually voted it down because I thought it got a lot wrong, and for bad reasons.

First of all, despite criticizing EY for "needing" things that would merely be supercool, matt lists a large number of things that would also be merely supercool: it just doesn't seem like you need all of those chance values either.

Second, matt seemed to miss why EY was asking for all of that information: that presenting a "neato trick" that happens to work, provides very little information as to why it works, and when it should be used, etc. EY had explained that he personally went through such an experience and described what is lacking when you don't provide the information he asked for.

In short, EY provided very good reasons why he should be skeptical of just trying every neato trick, and matt said very little that was responsive to his points.
matt · 2 points · 15y
Yah, good point - those are meant to be discussion points, but that's not really very clear as written. I don't mean to imply that we need everything in the lists, but to characterize the sort of thing we should be looking for.

No, I don't think that's right. Eliezer is presenting as needful lots of stuff that he's just not going to get. That seems to be leading him not to try anything until he finds something that passes through his very tight filter. I'm claiming that the relevant filter should be built on expected utility, and that there is pretty good information available (most of the stuff in the lists can at least be estimated with little time invested) that would lead him to try more hacks than the none likely to pass his filter.

I'm very not suggesting that you should try "every neato trick". I am suggesting that high expected utility is a better filter than robust scientific research. If you have robust research available you should use it. When you don't, have a look through my lists and see whether it's worth trying something anyway. You might manage a win.
Alicorn · 5 points · 15y
Maybe it means the post was upvoted for agreement, and people don't have much to add, and don't want to just say "yay! good post!"?
Mike Bishop · 1 point · 15y
Could there be a connection to the recent slowing of the rate of new posts to LW?

Shouldn't this be in the domain of psychological research? The positive psychology movement seems to have a large momentum and many young researchers are pursuing a lot of lines of questioning in these areas. If you really want rigorous, empirically verified, general purpose theory, that seems to be the best bet.

It IS important to note individual variation. If someone has a fever that's easily cured by a specific drug, but they tell you that they have a rare, fatal allergy to that medication, you don't give the drug to them anyway on the grounds that it's "unlikely" it'll kill them.

Similarly, if a particular drug is known not to have the 'normal' effect in a patient, you don't keep giving it to them in hopes that their bodies will suddenly begin acting differently.

The key is to distinguish between genuine feedback of failure, and rationalization. THIS ...

zaph · 4 points · 15y
Perhaps you could write an article discussing how the differences between rationality and rationalization can be identified? I for one would find it useful. I find myself using rationalizations that mask themselves as rationality (often noticing only too late), and it would help me to do that less.
conchis · 3 points · 15y
So enlighten us (please). EDIT: For the avoidance of doubt, this is not intended as sarcasm.