This tendency can be used for good, though. As long as you're aware of the weakness, why not take advantage of it? Intentional self-priming, anchoring, and rituals of all kinds can be repurposed.
Because repetition tends to reinforce things, both positive and negative.
You might be able to take advantage of a security weakness in your computer network, but if you leave it open other things will be able to take advantage of it too.
It's far better to close the hole and reduce vulnerability, even if it means losing some short-term convenience.
Reply to: Practical Advice Backed By Deep Theories
Inspired by what looks like a very damaging reticence to embrace and share brain hacks that might work for only some of us and are not backed by Deep Theories. Written in support of tinkering with brain hacks and self-experimentation where deep science and large trials are not available.
Eliezer has suggested that, before he will try a new anti-akrasia brain hack:
This doesn't look to me like an expected utility calculation, and I think it should. It looks like an attempt to justify why he can't be expected to win yet. It may just be deeply wrongheaded.
I submit that we don't "need" (emphasis in original) this stuff, it'd just be super cool if we could get it. We don't need to know that the next brain hack we try will work, and we don't need to know that it's general enough that it'll work for anyone who tries it; we just need the expected utility of a trial to be higher than that of the other things we could be spending that time on.
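The comparison above can be sketched as a simple calculation. This is a minimal illustration, not anything from the original post, and every number in it is a hypothetical placeholder: a guessed probability that the hack works, a guessed benefit if it does, and the cost of the trial itself.

```python
# A minimal sketch of the expected-utility comparison described above.
# All numbers are hypothetical placeholders, not measurements.

def expected_utility(p_works, benefit_if_works, trial_cost):
    """Expected net utility of spending time trying a brain hack."""
    return p_works * benefit_if_works - trial_cost

# Hypothetical hack: modest chance of working, cheap to try.
hack_eu = expected_utility(p_works=0.2, benefit_if_works=50.0, trial_cost=2.0)

# Baseline: the next-best use of the same time (also hypothetical).
baseline_eu = 3.0

# The decision rule: try the hack iff its expected utility beats the baseline.
print(hack_eu > baseline_eu)  # 0.2 * 50 - 2 = 8.0 > 3.0, so True
```

The point of the sketch is that neither certainty that the hack works nor generality across people appears anywhere in the calculation; only the trial's expected value relative to the alternative does.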
So… this isn't other-optimizing, it's a discussion of how to make decisions under uncertainty. What do all of us need to make a rational decision about which brain hacks to try?
(Can these books be judged by their covers? How does this chance vary with the type of exposure? What would you need to understand about a hack for a working one to seem deeply compelling on first exposure?)
… and, what don't we need?
How should we decide how much time to spend gathering data and generating estimates on matters such as this? How much is Eliezer setting himself up to lose, and how much am I missing the point?