It might be worth separating the claim "Eliezer is wrong about what changes he, personally, should try" from the claim "it is generally good to try many plausible changes."
The second claim seems fairly clearly right, at least for some of us. (People may vary in how easily they can try on new approaches, and in what portion of handed-down approaches work for them. OTOH, the ability to easily try new approaches is itself learnable, at least for many of us.) The first claim is considerably less clear, particularly since Eliezer has much data on himself that we lack, and since, after trying many hacks for a given not-lightcone-destroying problem without any of them working, expected value calculations can in fact point to directing one's efforts elsewhere.
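For concreteness, here is a toy sketch in Python (all numbers invented, and the update rule a crude stand-in for a proper Bayesian one) of how repeated failures can drive the expected value of the next trial below the value of doing something else instead:

```python
# Toy sketch (numbers invented) of the stopping argument above:
# after enough failed hacks, the expected value of trying one more
# can fall below the value of spending that time elsewhere.

def ev_of_next_hack(p_works, benefit, trial_cost):
    """Expected value of one more trial, given a per-hack success probability."""
    return p_works * benefit - trial_cost

p = 0.30           # initial guess that a given hack works for you (assumed)
benefit = 100.0    # value, in arbitrary units, of a hack that sticks
trial_cost = 5.0   # time/energy cost of one honest trial
alternative = 2.0  # value of spending the same time on something else

for failures in range(8):
    ev = ev_of_next_hack(p, benefit, trial_cost)
    verdict = "try another hack" if ev > alternative else "direct effort elsewhere"
    print(f"after {failures} failures: p={p:.3f}, EV={ev:6.2f} -> {verdict}")
    p *= 0.7       # crude stand-in for a Bayesian update after each failure
```

With these made-up numbers the verdict flips after five failures; the point is only that "give up on hacks" can fall out of the same calculation that recommends trying them in the first place.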
Maybe we could abandon Eliezer’s specific case, and try to get into the details of: (a) how to benefit from trying new approaches; and (b) what rules of thumb for what to try, and what to leave alone, yield high expected life-success?
Awesomely summarized, so much so that I don't know what else to say, except to perhaps offer this complementary anecdote.
Yesterday, I was giving a workshop on what I jokingly call "The Jedi Mind Trick" -- really the set of principles that makes monoidealism techniques (such as "count to 10 and do it") either work or not work. Towards the end, a woman in the group was having some difficulty applying it, and I offered to walk through an example with her.
She picked the task of organizing some files, and I explained to her what to say and picture in her mind, and asked, "What comes up in your mind right now?"
And she said, "well, I'm on a phone call, I can't organize them right now." And I said "Right, that's standard objection #1 - "I'm doing something else". So now do it again..." [I repeated the instructions]. "What comes to mind?"
She says, "Well, it's that it'll be time to do it later".
"Standard objection #2: it's not time right now, or I don't have enough time. Great. We're moving right along. Do it again. What comes to mind?"
"Well, now I'm starting to see more of what I'd actuall...
He's tried, or he wouldn't have had the material to make those posts.
I appreciate your comments, and they're a good counterpoint to EY's point of view. But the fact that you need to assume a technique will work in order to teach it effectively, because that assumption holds most of the time, doesn't mean it always holds. You are making an expected-value calculation as a teacher, perhaps subconsciously: acting as if the technique will work pays off more often than not, even when it fails for a particular student.
You are also taking EY's claim that not every technique works well for every person, and caricaturing it as the claim that there is a 1-1 correspondence between people and techniques that work for them. He never said that.
The specific comments Eliezer has made, about people erroneously assuming that what worked for them should work for other people, were taken from real life and were, I think, also true and ...
You are making an expected-value calculation as a teacher, perhaps subconsciously
No. I'm making the assumption that, until someone has actually tried something, they aren't in a position to say whether or not it works. Once someone has actually tried something, and it doesn't work, then I find something else for them to do. I don't give up and say, "oh, well I guess that doesn't work for you, then."
When I do a one-on-one consult, I don't charge someone until and unless they get the result we agree on as a "success" for that consultation. If I can't get the result, I don't get paid, and I'm out the time.
Do I make sure that the definition of "success" is reasonably in scope for what I can accomplish in one session? Sure. But I don't perform any sort of filtering (other than that which may occur by selection or availability bias, e.g. having both motivation and funds) to determine who I work with.
You are also taking EY's claim that not every technique works well for every person, and caricaturing it as the claim that there is a 1-1 correspondence between people and techniques that work for them. He never said that.
I didn't say he did, or tha...
When you spend time trying out the 1000 popular hacks and none of them does you any good, you lose. You lose all the time and energy invested in the enterprise, which you could have put to better use.
How do you know anything works before even thinking about what in particular to try out? How much thought, and how much work, is it reasonable to spend on investigating a possibility? Intuition, and evidence. Self-help folk notoriously give no evidence for the efficacy of their procedures, which in itself looks like evidence of absence of that efficacy, a reason to believe ...
I think that if there were such a straightforward hack as EY is looking for, he would know about it already. I just don't really believe that a hack like that exists, based on my admittedly meager readings in experimental psychology. Further, while the idea of a "mind hack" is a cute metaphor, it can be misguided. Computer hackers literally create code that directs processes. We can at best manipulate our outside environment in ways that we hope will affect what is still a very mysterious brain. What EY's looking for would be the result of a ...
Wow, I came late to this party.
One takeaway here is, don't reduce your search space to zero if you can help it. If that means that you have to try things without substantial evidence that they'll work, well, it's that or lose, and we're not supposed to lose.
I can think of a few situations where it'd make sense to reduce your search space to zero pending more data, though. The general rule for that seems to be that if you do allow that to happen, whatever reason you have for allowing that to happen is more important to you than the goal you're giving up by ...
On your reaction to "a way to reject the placebo effect", it's important to distinguish what we are trying to do. If all I care about is fixing a given problem for myself, I don't care whether I solve it by placebo effect or by a repeatable hack.
If I care about figuring out how my brain works, then I will need a way to reject or identify the placebo effect.
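One standard way to do that, offered here as a sketch of a generic randomized n-of-1 self-experiment rather than anything from the post, is to randomize days between the hack and a control, record the same outcome measure each day, and compare. Genuine blinding is the hard part for brain hacks.

```python
# Sketch of a randomized n-of-1 self-experiment (all details assumed,
# not from the post): randomize days to "hack" vs. "control", log an
# outcome such as minutes of focused work, and compare the averages.
import random

random.seed(0)
DAYS = 30
schedule = [random.choice(["hack", "control"]) for _ in range(DAYS)]

# In real use you would record each day's outcome yourself; here we
# generate fake numbers purely to make the sketch runnable.
outcomes = {"hack": [], "control": []}
for condition in schedule:
    focused_minutes = random.gauss(120 if condition == "hack" else 110, 20)
    outcomes[condition].append(focused_minutes)

for condition, xs in outcomes.items():
    print(f"{condition}: mean {sum(xs) / len(xs):.1f} min over {len(xs)} days")
```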
The approach laid out in this post is likely to be effective if your predominant goal is to find a collection of better-performing akrasia and willpower hacks.
If, however, finding such hacks is only a possible intermediate goal, then different conclusions can be reached. This is even more telling if improved willpower and resistance to akrasia is your intermediate goal, regardless of whether you choose hacks or some other method for realizing it.
Another bad way for rationalists to lose is to try to win every contest placed in front of them. Choosing your battles is the same as choosing your strategies, just at a larger scale.
Shouldn't this be in the domain of psychological research? The positive psychology movement has a lot of momentum, and many young researchers are pursuing lines of inquiry in these areas. If you really want rigorous, empirically verified, general-purpose theory, that seems to be the best bet.
It IS important to note individual variation. If someone has a fever that's easily cured by a specific drug, but they tell you that they have a rare, fatal allergy to that medication, you don't give the drug to them anyway on the grounds that it's "unlikely" it'll kill them.
Similarly, if a particular drug is known not to have the 'normal' effect in a patient, you don't keep giving it to them in hopes that their body will suddenly begin acting differently.
The key is to distinguish between genuine feedback of failure and rationalization. THIS ...
Reply to: Practical Advice Backed By Deep Theories
Inspired by what looks like a very damaging reticence to embrace and share brain hacks that might only work for some of us, and are not backed by Deep Theories. In support of tinkering with brain hacks and self-experimentation where deep science and large trials are not available.
Eliezer has suggested that, before he will try a new anti-akrasia brain hack:
This doesn't look to me like an expected-utility calculation, and I think it should be one. It looks like an attempt to justify why he can't be expected to win yet. It just may be deeply wrongheaded.
I submit that we don't "need" (emphasis in original) this stuff, it'd just be super cool if we could get it. We don't need to know that the next brain hack we try will work, and we don't need to know that it's general enough that it'll work for anyone who tries it; we just need the expected utility of a trial to be higher than that of the other things we could be spending that time on.
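In symbols (notation mine, not the post's), letting p_h be the chance that hack h works for you, the rule is just:

```latex
% Decision rule sketched above (notation assumed, not the post's):
% try hack h precisely when its expected utility beats the expected
% utility of the best alternative use A of the same time; trial
% costs are folded into the utilities of both outcomes.
\[
  p_h \, U(\text{h works}) + (1 - p_h) \, U(\text{h fails}) \;>\; EU(A)
\]
```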
So… this isn't other-optimizing; it's a discussion of how to make decisions under uncertainty. What do all of us need to make a rational decision about which brain hacks to try?
(Can these books be judged by their covers? How does this chance vary with the type of exposure? What would you need to understand about a hack that works in order to increase its chance of seeming deeply compelling on first exposure?)
… and, what don't we need?
How should we decide how much time to spend gathering data and generating estimates on matters such as this? How much is Eliezer setting himself up to lose, and how much am I missing the point?