I stumbled upon a Twitter thread in which Eliezer describes a cognitive algorithm of his that seems equivalent to Tune Your Cognitive Strategies, and decided to archive / repost it here.
Sarah Constantin: I really liked this example of an introspective process, in this case about the "life problem" of scheduling dates and later canceling them: malcolmocean.com/2021/08/int…
Eliezer Yudkowsky: See, if I'd noticed myself doing anything remotely like that, I'd go back, figure out which steps of thought were actually performing intrinsically necessary cognitive work, and then retrain myself to perform only those steps over the course of 30 seconds.
SC: if you have done anything REMOTELY like training yourself to do it in 30 seconds, then you are radically smarter/more able/etc than me and all the other people who do slower introspective practices.
SC: I don't know whether to be impressed or to roll to disbelieve.
EY: I mean I suspect that this actually requires something like a fast perceptual view of minds as engines and thoughts as doing work and like actually draws on my mind design knowledge, but, even so, I ask: Do you constantly look back and ask "How could I have thought that faster?"
SC: No, I've never asked that.
EY: Okay, well, every time I'm surprised by reality I look back and think "What about my model and my way of thinking could I change that would have predicted that better, without predicting a bunch of other things worse?"
EY: When somebody at a MIRI workshop comes up with a math proof, I look over it and ask if there's a way to simplify it. Usually, somebody else does beat me to inventing a proof first; but if my intuition says it was too complicated, I often am first to successfully simplify it.
EY: And every time I complete a chain of thought that took what my intuition says was a lot of time, I look back and review and ask myself "How could I have arrived at the same destination by a shorter route?"
EY: It's not impossible that you have to be Eliezer Yudkowsky for this to actually work - I am never sure about that sort of thing, and have become even less so as time goes on - but if AI timelines were longer I'd tell somebody, like, try that for 30 years and see what happens.
EY: Man, now I'm remembering when I first started doing this consciously as a kid. I called it Shortening the Way, because a rogue rabbi had recently told me that "Kwisatz Haderach" was actually a reference to a Kabbalistic concept about teleportation, so that term was on my mind.
Eh, I feel like this is a weird way of talking about the issue.
If I don't understand something and, after a bunch of effort, finally manage to get it, I will definitely try to summarize the key lesson to myself. If I prove a theorem or solve a contest math problem, I will definitely pause to think: "OK, what was the key trick here, what's the essence of this, how can I simplify the proof?"
Having said that, I would NOT describe this as asking "how could I have arrived at the same destination by a shorter route?" I would just describe it as asking "what did I learn here, really?" Counterfactually, if I had to solve the math problem again without knowing the solution, I'd still have to try a bunch of different things! I don't have any improvement on this process, not even in hindsight; what I have is a lesson learned, but it doesn't feel like a shortened path.
Anyway, for the dates thing, what is going on is not that EY is super good at introspecting (lol), but rather that he is bad at empathizing with the situation. Like, go ask EY if he never slacks on a project; he has in the past said he is often incapable of getting himself to work even when he believes the work is urgently necessary to save the world. He is not a person with a 100% solved, harmonious internal thought process; far from it. He just doesn't get the dates thing, so he assumes it is trivial.
I mean, yeah, they're different things. If you can figure out how to get to the correct destination faster next time you're trying to figure something out, that seems obviously useful.