Worth noting that the reason SquirrelInHell is dead is that they committed suicide after becoming mentally unstable, likely in part due to experimentation with exotic self-modification techniques. This one in particular seems fine AFAICT, but, ya know, caveat utilitor.
This seems reasonable to note; at the same time, I think that a lot of people who end up badly after experimenting with exotic self-modification techniques do so despite rather than because of the techniques.
This technique seems best if your problem is that your thoughts often go down loopy, unproductive, distressing paths, in a way that you can self-diagnose with confidence. Which is totally a real thing! I used to find my brain making up imaginary offenses people had committed against me, and I would feel angry or vindictive for a moment. Fortunately I developed a thought pattern that immediately just notes “… and that NEVER ACTUALLY HAPPENED,” and then I move on from the moment. That’s a situation where it’s really easy to notice a bad thought pattern and change it, cutting out any real-world action. And once I’d done it a couple times, I started noticing this as an overall cognitive strategy.
Another example is from my work as an engineer. During my first year or so doing research, I noticed several bad patterns of thought and behavior: throwing things out prematurely when I’d made a mistake, doing overly complex mental math, and trying to emergency-correct mistakes rather than going to my desk and working out an actual plan for a solution.
But in these cases, while “noticing my thoughts” was key to the solution, because it interrupted a bad pattern of behavior, it was noting the bad outcome, then working backwards to a specific root cause that got me there. Continuously monitoring my stream of thoughts was not part of this process. It seems like a technique of continuous thought-monitoring would be more important if the problem you were having was with your thoughts themselves. If your problem manifests as behavior, then paying attention to the stream of behavior and figuring out the root cause seems best.
Yeah, I considered explicitly leaving that note at the beginning but felt like this was just sufficiently different from the thing that led to their suicide that adding "WARNING! BUT ALSO I'M NOT THAT WORRIED?" didn't seem overall worth it.
Romeosteven's comment updates me a bit, though my current guess is this is still a fairly different reference class of problem (and the post comes with its own warnings about the thing romeo is pointing at, assuming I understand it properly).
Man, it does make me sad that whenever I bring up this technique, there’s an obligatory version of this conversation.
That's understandable. But it does seem like the sort of thing I'd want to hear about before trying such a technique. Hopefully people can take it for what it's worth. (I.e., I don't think we should automatically discount such techniques or anything.)
I think that's somewhat reasonable in this case, but want to flag that it should be possible at some point to reach an epistemic state where you can say "okay, yeah, it was mostly coincidence, or at least not relevant, that this happened to this person." Like, if someone invented a car, and then used the car to commit suicide by driving over a cliff, you might go "holy shit, maybe I should be worried about cars and suicide?", and if you didn't know much about cars maybe this would be a reasonable thing to worry about at first. But, like, it shouldn't be the case that forever after, whenever someone sells a car, they warn you that the guy who invented cars used them to commit suicide. It's privileging a hypothesis.
I think in this case it's less crazy than in the car case to worry about that, but, I do want to push back against the impulse to always have a disclaimer here.
In cases like this I strongly prefer to be given the facts (or at least pointed toward them) and allowed to make my own judgment as to how relevant they are.
Whether you choose to join the conversation and present the argument for their irrelevance is up to you, but sharing all the facts that your audience might consider important, rather than deciding for them that some apparently-relevant ones are best left unsaid, is IMO more respectful and reduces the risk of doing preventable harm in cases where your judgment is mistaken.
In the car case I think it's obvious that car usage is not causally upstream of suicidality. If the inventor of the car died in a car accident, I do think that would be a relevant data point about the safety of cars, albeit not one that needs to be brought up every time. And in the real world, we do pretty universally talk about car crashes and how to avoid them when we're teaching people to drive. From that perspective romeosteven's comment is probably better and mine just got more upvotes because of the lurid details. (although, tail risks are important. And I think there's a way in which the author's personality can get imprinted in a text which makes the anecdote slightly more relevant than in the car case)
Is your worry more about "maybe this technique is more dangerous than it looks?" or "maybe people will follow up on this by generally following SquirrelInHell's footsteps, and maybe not all those footsteps are safe?"
More the latter. Or more like, doing things like this technique too much/too hard could be dangerous.
I think that might be true, but, at that level, I think it kinda makes more sense to put the warning over, like, the entirety of rationality techniques, and singling out the ones that SquirrelInHell wrote up doesn't actually seem like the right abstraction.
Like, I do generally think there's a failure mode to fall into here. I don't think SquirrelInHell is the only person to have fallen into it.
This post does seem like it warrants some specific warnings (which the original post already included). But I think those warnings are mostly unrelated to what ultimately went wrong.
Source/evidence? I believe you but this seems worth checking.
Worth noting that both this and the "fixing the motor cortex" skill they advocate are very closely related to traditional Buddhist insight practices, and that without supporting emotional integration (Tune Your Emotional Processing, with Focusing as the particular version that SquirrelInHell advocated, though a variety of self-therapy modalities can work) it can be destabilizing.
I'm interested in more details about the failure modes to watch out for here. i.e. what sort of things might you notice happening to you if you were en route to being destabilized?
The post does explicitly warn about this, but I happened to a) already have some flavor of focusing by the time I started, and b) never actually ran at it that hard, so, I might still be underestimating how worried to be about it despite the warnings.
One possible issue that comes to mind is that if you start paying more attention to the low-level movements of your thoughts, you might start noticing thoughts that parts of you get triggered by, e.g. if they feel like particular kinds of thoughts are shameful to have. One concrete failure mode that I think many rationalists would be susceptible to, would be to notice something like
blank mind -> noticing having a blank mind -> verbal thought "my mind is blank" -> feeling of despair -> blank mind -> ...
and then feeling additional despair and shame over your mind being stuck in an unproductive cycle and feeling that you should be able to do better. That may then create another layer of shame and despair on top of the original one. Although the original instructions say that you shouldn't use this to police your mind, getting triggered in this way may create a compulsion to do so anyway.
Another could be mysterious feelings of dread and feeling bad, if you started noticing various thoughts/emotions that parts of you had been trying to block. Though I would expect that the most natural consequence of that would be you just losing the motivation to use the technique pretty rapidly, with it becoming another of those "that felt really useful but for some reason I don't feel any interest in doing it anymore, shrug" things.
I think the main risk there would be if you had used this technique extensively enough to build up an increased introspective awareness that was harmless at first but then started catching more of whatever blocked trauma you had and had by that point been built up sufficiently that just stopping the practice wasn't enough to bring it down anymore. That kind of a scenario would be similar to the cases where people start getting trauma symptoms from doing mindfulness practices; if one has already tried that kind of a thing before and hasn't felt bad, then it might be an indication (on top of the base rate, which I think is reasonably low) that it's low-risk.
There's also the fact that the thought processes themselves may be protecting you from various traumas or doing other subconscious things for you. Since this tuning process isn't based on introspection but on conscious judging of your subconscious processes, you could accidentally tune yourself away from emotionally load-bearing coping strategies.
I meant that emotional integration (like focusing) is helpful for avoiding destabilization.
I would say the signs are the normal sort you'd see in mental health breakdowns:
Depression, social withdrawal
Hostility or suspiciousness, extreme reaction to criticism
Deterioration of personal hygiene
Flat, expressionless affect
Inability to cry or express joy, or inappropriate laughter or crying
Oversleeping or insomnia; forgetful, unable to concentrate
Odd or irrational statements; seeming difficulty with communicating in a normal way
One of my "responsible use" notes in "How To Observe Abstract Objects" seems directly relevant here:
However, a few people seem to have an overall cognitive strategy that crucially depends on not looking at things too closely (or something like that), and this is actively bad for some of them. If you try this for a minute and hate it, especially in an “I feel like I’m going crazy” kind of way, I do not recommend continuing. Go touch some grass instead. I’ve never seen this cause damage in just a few minutes (or at all, as far as I can tell), but I do think there’s a danger of dismantling somebody’s central coping mechanism if they push past their own red flags about it over and over again, or for a whole hour at once.
The "notice something new" exercise in that post is extremely similar to "pay attention to the delta between thoughts". Seems to me that it's directing attention toward the same psychological event type, just not in the context of attempting to solve a problem.
As of writing, I have spent about four months experimenting with the Tune Your Cognitive Strategies (TYCS) method and I haven't gotten any visible direct benefits out of it.
Some of the indirect benefits I've gotten:
The biggest thing I've learned is that better introspective ability and awareness seems to be the most load-bearing skill underlying TYCS. I'm less enthusiastic about the notion that you can 'notice your cognitive deltas' in real-time almost all the time -- this seems quite costly.
Note that Eliezer has also described doing something similar. And more interestingly, it seems like Eliezer prefers to invest in what I would call 'incremental optimization of thought' over 'fundamental debugging':
EY: Your annual reminder that you don't need to resolve your issues, you don't need to deal with your emotional baggage, you don't need to process your trauma, you don't need to confront your past, you don't need to figure yourself out, you can just go ahead and do the thing.
On one hand, you could try to use TYCS or Eliezer's method to reduce the cognitive work required to think about something. On the other hand, you could try to use integration-based methods to solve what I would consider 'fundamental' or deeper issues. The latter feels like focusing on the cognitive equivalent of crucial considerations; the former feels like incremental improvements.
And well, Eliezer has seemed to be depressed for quite a while now, and Maia Pasek killed herself. Both of these things seem to me like evidence for my hypothesis that, given scarce cognitive resources, investing in incremental optimization of the sort involved in TYCS and Eliezer's method is less valuable than the fundamental debugging involved in integration / parts-work mental techniques.
For the near future, I plan to experiment with and use parts-work mental techniques, and will pause my experimentation and exploration of TYCS and TYCS-like techniques. I expect that there may be a point at which one has a sufficiently integrated mind such that they can switch to mainly investing in TYCS-like techniques, which means I'll resume looking into these techniques in the future.
If you are willing to share, can you say more about what got you into this line of investigation, and what you were hoping to get out of it?
For my part, I don't feel like I have many issues/baggage/trauma, so while some of the "fundamental debugging" techniques discussed around here (like IFS or meditation) seem kind of interesting, I don't feel too compelled to dive in. Whereas, techniques like TYCS or jhana meditation seem more intriguing, as potential "power ups" from a baseline-fine state.
So I'm wondering if your baseline is more like mine, and you ended up finding fundamental debugging valuable anyway.
I'm not mesaoptimizer, but, fyi, my case is "I totally didn't find IFS-type stuff very useful for years, and then one day I just suddenly needed it, or at least found myself shaped very differently such that it felt promising." (see My "2.9 trauma limit")
If you are willing to share, can you say more about what got you into this line of investigation, and what you were hoping to get out of it?
Burnt out after almost a year of focusing on alignment research. I wanted to take a break from alignment-ey stuff, and also desired to systematically fix the root causes of what I considered burnout.
I don’t feel like I have many issues/baggage/trauma
I felt similar when I began this, and my motivation was not to 'fix issues' in myself but more "hey I have explicitly decided to take a break and have fun and TYCS seems interesting let's experiment with it for a while, I can afford to do so".
I think it's worth sharing here some details about SquirrelInHell's suicide, specifically to point out to new people that Cognitive Tuning was not what killed SquirrelInHell.
This comment is from Slimepriestess, a friendly former Zizian. I wouldn't necessarily trust 100% of everything said by a former Zizian (though they should definitely not be treated as a pariah). But it's pretty well known that SquirrelInHell was doing a ton of over-the-top shit at once (e.g. simultaneously attempting to use dolphin-like sleep deprivation to turn half of their brain into Lawful Evil and the other half into Transgender Good), was simultaneously hanging around a bunch of violent and dangerous people, and they were all doing hardcore Roko's Basilisk research.
imo, Maia was trans, and the components of her mind (the alter(s) they debucketed into "Shine") saw the body was physically male and decided that the decision-theoretically correct thing to do was to basically ignore being trans in favor of maximizing influence to save the world. Choosing to transition was pitted against being trans because of the cultural oppression against queers. I've run into this attitude among rationalist queers numerous times independently from Ziz, and "I can't transition, that will stop me from being a good EA" seems to be a troublingly common sentiment.
Prior to getting involved with Ziz, the "Shine" half of her personality had basically been running her system on an adversarial 'we must act or else' fear response loop around saving the multiverse from evil using timeless decision theory in order to brute force the subjunctive evolution of the multiverse.
So Ziz and Squirrel start interacting, and at that point the "Maia" parts of her had basically been, like, traumatized into submission and dissociation, and Ziz intentionally stirs up all those dissociated pieces and draws the realization that Maia is trans to the surface. This caused a spiraling optimization-priority conflict between two factions whose contradictory validity Ziz had empowered by helping them reify themselves and define the terms of their conflict in her zero-sum, black-and-white, good-and-evil framework.
But Maia didn't kill them, Shine killed them. I have multiple references that corroborate that. The "beat Maia into submission and then save the world" protocol they were using cooked up all this low-level suicidality and "i need to escape, please where is the exit, how do i decision-theoretically justify quitting the game?" type feelings of hopelessness and entrapment. The only "exit" that could get them out of their sense of horrifying heroic responsibility was dying, so Shine found a "decision theoretic justification" to kill them and did. "Squirrel's doom" isn't just "interhemispheric conflict"; if anything it's much more specific, it's the specific interaction of:
"i must act or the world will burn. There is no room for anything less than full optimization pressure and utilitarian consequentialism"
vs
"i am a creature that exists in a body. I have needs and desires and want to be happy and feel safe"
This is a very common EA brainworm to have and I know lots of EAs who have folded themselves into pretzels around this sort of internal friction. Ziz didn't create Squirrel's internal conflict she just encouraged the "good" Shine half to adversarially bully the evil "Maia" half more and more, escalating the conflict to lethality.
Generally, I think people should be deferring to Raemon on the question of "is Cognitive Tuning safe?" and should, at minimum, message him to get his side of the story. This situation is a really big deal: if Cognitive Tuning works, that's successful human intelligence augmentation; that is world-saving shit. Cognitive Tuning alone could become an entire field of intelligence augmentation, AND something that anyone with average intelligence can contribute heavily towards, since having a more typical mind will yield more insights that can be picked up and worked with by other people with more typical minds.
Another thing I notice after a few years of using this:
The OP says:
- Your brain already has the ability to update its cognitive strategies (this is called "meta-cognitive reinforcement learning"). However, the usual mechanism works with unnecessary levels of indirection, as in:
- Cognitive strategy -> Thought -> Action -> Reward or punishment
- You get rewarded or punished for what you do (as measured by your brain's chemical responses). Good thoughts are more likely to be followed by good actions. Good cognitive strategies are more likely to generate good thoughts. On average, your brain will slowly update its cognitive strategies in the right direction.
- Cognitive strategy -> Thought -> Reward or punishment
- You have learned to be happy or unhappy about having certain ideas, even when you don't yet know how they apply to the real world. Now your brain gets rewarded or punished for thoughts, and on average good thoughts are more likely to be generated by good cognitive strategies. Your brain can update cognitive strategies faster, according to heuristics about what makes ideas "good".
- However, by carefully looking at the "deltas" between conscious thoughts, we can get rid of the last remaining level of indirection (this is the key insight of this whole page!):
- Cognitive strategy -> Reward or punishment
- You have learned to perceive your cognitive strategies as they happen, and developed some heuristics that tell you whether they are good or bad. Now your brain can update cognitive strategies immediately, and do it regardless of the topic of your thoughts.
- Even when you generate a useless idea from another useless idea, you can still track whether the cognitive strategy behind it was sound, and learn from the experience.
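The credit-assignment claim in the quoted model — that each extra stage between "cognitive strategy" and "reward or punishment" blurs the learning signal — can be illustrated with a toy simulation. This is only an analogy of my own, not SquirrelInHell's model: the strategy values, noise levels, and update rule below are all invented for the demo.

```python
import random

def learn(noise_levels, steps=5000, lr=0.1, seed=0):
    """Estimate the values of three 'cognitive strategies' from reward
    signals that pass through extra noisy stages of indirection.
    Each entry in noise_levels is one intervening stage (thought,
    action, ...) that adds independent noise to the credit signal."""
    rng = random.Random(seed)
    true_value = [0.2, 0.5, 0.8]      # hypothetical strategy qualities
    estimate = [0.0, 0.0, 0.0]
    for _ in range(steps):
        s = rng.randrange(3)          # try a strategy
        signal = true_value[s]
        for sigma in noise_levels:    # each level of indirection blurs credit
            signal += rng.gauss(0, sigma)
        estimate[s] += lr * (signal - estimate[s])
    # total absolute estimation error across the three strategies
    return sum(abs(e - t) for e, t in zip(estimate, true_value))

def avg_error(noise_levels, runs=20):
    """Average the error over several random seeds."""
    return sum(learn(noise_levels, seed=s) for s in range(runs)) / runs

# strategy -> thought -> action -> reward: three noisy stages.
# strategy -> reward: one noisy stage.
err_indirect = avg_error([1.0, 1.0, 1.0])
err_direct = avg_error([1.0])
```

Averaged over seeds, the estimates learned through a single noisy stage track the true strategy qualities more closely than those learned through three stages, which is the quoted argument for removing levels of indirection — though of course the brain is not literally running this update rule.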
I think the author thinks of this as the primary insight here (i.e. getting to: "Cognitive strategy -> reward/punishment"). And... I'll be honest, I think this works and it makes sense to me, but it doesn't work so obviously that I'm like "yes this underlying theory definitely checked out."
But what I think is both more obvious, and still a useful stepping stone, is transitioning more from "Cognitive strategy -> Thought -> Action -> Reward or punishment" to "Cognitive strategy -> Thought -> Reward or punishment". A lot of my thoughts are obviously dumb (or useful) upon first glance. And shifting how much of my feedback loop happened within ~3 seconds vs longer timescales still seems very helpful.
Does anyone who knew SquirrelInHell know the subskills in the skill tree they never got around to writing?
EDIT: To clarify, are there any known skills which are equivalent to the Red subskills in BWT's skill tree? I am very impressed with the exposition on BWT, and would guess the remaining skills were just as high value. Perhaps more than I'd naively guess, if there's some synergy between them. If you think you know them, please speak out so we can get the complete BWT skillset.
I didn't know them and can only speak to how I did the tuning-ontology thing. For about 2 weeks, I noted any time I was chunking reasoning using concepts. Many of them were familiar LW concepts; lots of others came from philosophy, econ, law, and common-sense sayings; and some were my own, which I did or didn't have names for. This took a bit of practice but wasn't that hard to train a little 'noticer' for. After a while, the pace of new concepts being added to the list started to slow down a lot. This was when I had around 250 concepts.

I then played around with the ontology of this list, chunking it different ways (temporal, provenance, natural-seeming clusters of related concepts, domain of usefulness, etc.). After doing this for a bit it felt like I was able to get some compressions I didn't have before, and overall my thinking felt cleaner than before. Separately, I also spent some time explicitly trying to compress concepts into handles as pithy as possible, using visual metaphors and other creativity techniques to help. This also felt like it cleaned things up.

Compression helps with memory because chunking is how we use working memory for anything more complicated than atomic bits of info. Augmenting memory also relied on tracking very closely whether or not a given representation (such as notes, drawings, etc.) was actually making it easier to think, or was just hitting some other easily Goodharted metric, like making me feel more organized.
With regard to 'tracking reality with beliefs' the most important thing I ever noticed afaict is whether or not my beliefs 1. have fewer degrees of freedom than reality and thus have any explanatory power at all and avoid overfitting, 2. vary with reality in a way that is oriented towards causal models/intervention points that can easily be tested (vs abstraction towers).
With regard to 'tracking reality with beliefs' the most important thing I ever noticed afaict is whether or not my beliefs 1. have fewer degrees of freedom than reality and thus have any explanatory power at all and avoid overfitting, 2. vary with reality in a way that is oriented towards causal models/intervention points that can easily be tested (vs abstraction towers).
This seems like a potentially quite helpful concept to me.
I'd be interested in more details of how you go about checking for degrees of freedom.
I think when I do this sort of sanity-checking for myself, things I sometimes do include "wait, why do I believe this in the first place?" and "consider the world where the opposite is true, how would I know?" but those seem like different mental motions.
Easiest is a fictional dialog between a pro- and an anti-position person. The anti person brings counter-evidence and then gets to see how the pro position responds. If they respond by remapping the moving parts of the model in a different way, that indicates extra degrees of freedom. Then you can have an easier time noticing when you are doing this same move, i.e. backpedaling and trying to 'save' a position when someone gives you pushback on it.
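To make the "extra degrees of freedom" failure concrete, here is a toy statistical sketch (my own illustration, not romeosteven's procedure): a model with as many free parameters as data points can "explain" any training data perfectly after the fact, which is exactly the remapping move the fictional dialog is meant to catch. The data-generating process and both models below are invented for the demo.

```python
import random

def train_test_error(seed=0):
    """Compare a one-parameter model (a line through the origin) with a
    ten-parameter model (a lookup table that memorizes every training
    point) on data assumed to come from y = 2x + noise."""
    rng = random.Random(seed)

    def sample(n):
        xs = [rng.uniform(0, 10) for _ in range(n)]
        return [(x, 2 * x + rng.gauss(0, 1)) for x in xs]

    train, test = sample(10), sample(10)

    # One degree of freedom: least-squares slope through the origin.
    slope = sum(x * y for x, y in train) / sum(x * x for x, _ in train)
    line = lambda x: slope * x

    # Ten degrees of freedom: memorize each training point; on unseen x,
    # predict the y of the nearest memorized x.
    def lookup(x):
        return min(train, key=lambda p: abs(p[0] - x))[1]

    def mse(model, data):
        return sum((model(x) - y) ** 2 for x, y in data) / len(data)

    return {
        "line_train": mse(line, train), "line_test": mse(line, test),
        "lookup_train": mse(lookup, train), "lookup_test": mse(lookup, test),
    }
```

The memorizing model's training error is exactly zero — it "saves" every data point — but on held-out data its advantage evaporates, and on average its test error is worse than the line's. Zero residual at full degrees of freedom is the statistical analog of a position that can absorb any counter-evidence.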
I think that list would be very helpful for me.
Can you form a representative sample of your "list"? Or send the whole thing, if you have it written down.
partially exists here, but very little explanation https://conceptspace.fandom.com/wiki/List_of_Lists_of_Concepts
This is neat.
Did you write all that or who did?
(EDIT:) This taxonomy seems especially nice. Basically each point there would need examples and exercises and then that would be a pretty cool problem solving toolkit training program.
Thanks, I wrote it and found the process of recording my thoughts and organizing them to be helpful.
My unedited notes while reading this post, including an initial exercise log:
"Your cognition is much more powerful than just the part you have conscious access to, and it's crucial to make good use of it."
heck yeah
"A small tweak to how your brain processes information in general is worth more than a big upgrade to your conscious repository of cognitive tricks."
"More creativity and good ideas just "popping into your head"."
"Once you realize exactly what is and what isn't under your conscious control, you stop beating yourself about not doing the impossible."
What does it mean to "tune" your "cognitive strategies"?
"Having good quality thinking happen effortlessly and automatically is great... unless you are a control freak, in which case you should Tune Your Emotional Processing before even reading this page."
"How to tell if you have it?"
"When you don't like whatever has risen up to the top of the cauldron, the last thing you want is to try to "fix it". You only have access to the topmost layer, so it would be hopelessly ineffective anyway. But it's much worse than that - by attempting to "fix" your cognition, you stop being able to see how it works. How well your cognition works is shown not by what thoughts you have at the moment, but rather by the pattern of how one or more thoughts combine into a new thought ("cognitive strategy"). Instead, you want to learn as much as possible about the differences ("deltas") between each thought and the next, as they occur to you."
meta: i appear to be halfway through the post and part of me is still waiting for the post to start because it's happening in the form of bullet points, which apparently i categorize as "part of an introduction, not the body of a post". but actually i think this just is the post.
"However, by carefully looking at the "deltas" between conscious thoughts, we can get rid of the last remaining level of indirection (this is the key insight of this whole page!): Cognitive strategy -> Reward or punishment You have learned to perceive your cognitive strategies as they happen, and developed some heuristics that tell you whether they are good or bad. Now your brain can update cognitive strategies immediately, and do it regardless of the topic of your thoughts. Even when you generate a useless idea from another useless idea, you can still track whether the cognitive strategy behind it was sound, and learn from the experience."
I just reread this.
Since writing this post I’ve tried to do this in workshops a few times. People struggled a lot with it. One thing I noticed here was that Logan is pretty skilled at the related subskills, and it still requires a lot of attention and iteration to grok it and get the hang of it.
I’m not sure whether I grokked the skill or not when I first did it. I think I was doing a cruder thing that was still really helpful. I’m honestly still not sure whether the thing with the deltas is helpful over the raw stream of thoughts.
After iterating in workshops a bit, I now start people off with ‘load the puzzle up, and then notice the very first thing that pops into your mind, and then stop. And then look at it a bit. And then go back to the puzzle again and notice the first two things that happen in your mind, and stop. And only then go on to observing yourself as you solve the puzzle.’
Previous discussion: https://www.lesswrong.com/posts/hGtBH7SJy6Y2SmAj6/tune-your-cognitive-strategies
Longevity-wise, https://squirrelinhell.blogspot.com/ should be up indefinitely since AFAIK, Blogspot/Blogger has no nasty deletion policies (although I have not checked specifically, they are one of the oldest blog hosts on the Internet, and apparently they are considered safe from Google axing because they are used internally so much for Google official posting). http://bewelltuned.com/ seems to duplicate a lot of the content, and the copyright date suggests most of it has been there for at least several years, and it looks easily crawled, so it should be well-archived.
The way I've personally used this technique/practice is to have a laptop screen with two pages side by side – one as a notebook where I can jot thoughts down, and one with whatever puzzle I'm trying to solve. (I found brilliant.org to be a good source of puzzles.)
I try to jot thoughts down as I have them (often with very rough notes that only make sense to me since trying to write down too much would slow down the process too much)
The post emphasizes noticing thoughts at the sub-second level. Obviously, writing out a focus-handle for 5 different thoughts in the space of a second isn't practical. But what I do here is often let myself have a few thoughts/impulses in a row, then go back and try to notice/remember them all, and then write them down after-the-fact in an attempt to crystallize them and reinforce the noticing process.
Do you think having a well-defined puzzle (like a math problem) is a better way to make the usefulness of this technique clear?
A lot of what I work on are more open-ended questions, like trying to remember how techniques work or what concepts are about (e.g. ANOVA). In these cases, the process is more about recalling or reconstructing various insights, definitions, and equations, with no clear stopping point. I’m wondering if I’ve been trying to apply this cognitive tuning technique to a problem it’s not well suited for?
I think the technique is relevant to basically all cognition, but working on well-defined problems is useful for the "figure out if it's actually helping" and "fine-tune your approach to ensure you're using it usefully" steps.
(When I use this technique for more open-ended problems, I think it's still useful to have two screen-pages open, one of which is more for rough, unstructured notes and one of which is more for "here's my distillation of my current understanding of the problem.")
I'd give this a +9 if I could*. I've been using this technique for 7 years. I think it's clearly paid off in "clear, legible lessons about how to think." But the most interesting question is "did the subtler benefits pay off, in 7 years of practice?"
Let's start with the legible
This was essentially the first step on the path towards Feedbackloop-first Rationality. The basic idea here is "Watch your thoughts as they do their thinking. Notice where your thoughts could be better, and notice where they are particularly good. Do more of that."
When I've run this exercise for groups of 20 people, typically 1/4 of them report a noticeable effect size of "oh, that showed me an obvious way to improve my thinking." (I've done this 3x. I've also run it ~3 times for smaller groups where most people didn't seem to get it, which led me to eventually write Scaffolding for "Noticing Metacognition", which people seemed to have an easier time with.)
I've picked up a lot of explicit cognitive tricks, via this feedbackloop. Some examples:
But, the essay promises more:
A small tweak to how your brain processes information in general is worth more than a big upgrade to your conscious repository of cognitive tricks.
[...] More creativity and good ideas just "popping into your head". There's no magic to it! Once you understand how the process works, it can be optimized for any purpose you choose.
Most people already have a thinking style built on top of excessive conscious cognitive effort. This often involves relying on side-effects of verbal and conscious thoughts, while mistakenly assigning the full credit for results to those effortful thoughts.
When you already have some conscious/verbal thoughts, it is tempting to imagine they are the only result of your thinking, and then try to pick up from there. But this is limiting, because the most power is in whatever generated that output.
It's not overwhelming enough to be obvious to others at this point (I did ask a few people "hey, uh, do I seem smarter to you in the past couple years?" and they said "a bit maybe, but, like not obviously? But I don't know that I would have really noticed"). But, I am subjectively fairly sure I've seen real progress here.
Here, at least, is my self-story, make of it what you will.
14 years ago, thinking strategically was generally hard for me (5 minutes of trying to think about a chess board or complex problem would give me a headache). I also didn't respond to crises very well in the moment. For my first several years in the rationalist community, I felt like I got dumber, because I learned the habit of "go ask the smarter people around me whenever I couldn't figure something out."
8 years ago, I began "thinking for real", for various reasons. One piece of that was doing the Tuning Your Cognitive Strategies exercise for the first time, and then sporadically practicing at the skill "notice my thoughts as they're happening, and notice when particularly good thoughts are happening."
6 years ago, a smart colleague I respected did tell me "hey, you seem kinda smarter than you used to." (They brought this up in response to some comments of mine that made it a more reasonable thing to say)
More recently, I've noticed at the workshops I've run that although there are people around who are, in many senses, smarter and more knowledgeable than me, they found certain types of metacognitive thoughts more effortful and unnatural than they seemed to me. It was pretty common for me to spend 5 minutes directing my attention at a problem and have approaches just sort of naturally occur to me, whereas some participants would have to struggle for 30-60 minutes to get there.
The way this plays out feels very similar to how it's described in SquirrelInHell's essay here.
But, also, I think the style of thinking here is pretty normal for Lightcone core staff, and people in our nearby network. So this may have more to do with the "just generally making a habit of figuring out how to deal with obstacles" that comes up naturally in our work. I think most of us have gotten better at that over the past few years, and most of us don't explicitly do this exercise.
(Jacob Lagerros did explicitly invent and train at the Babble challenge and apply it to problem-solving, which is a different exact mechanism but feels at least adjacent to this exercise, and which I also credit with improving my own generativity. Maybe that's a better exercise than this one, though it's at least a point towards "deliberately practice generativity." During the pandemic, I tried out a "Babble and Tune" variant that combined the two exercises, which didn't obviously work at the time but I think is essentially what I actually do most of the time.)
Most recently, in November, I spent... basically two whole weeks thinking strategically ~all the time, and I did eventually get a headache that lasted for days, but only after 1.5 weeks instead of 5 minutes.
When I asked John Wentworth recently if I seemed smarter to him, he said "not obviously, but I'm not sure I'd notice." I said "fair, though I (somewhat defensively) wanna flag: a few years ago when you first met me/read my stuff, most of what I was writing was basically summarizing/distilling the work of other people, and nowadays most of what you hear me say is more like original work."
So, idk, that's my story. Take the self-report with a grain of salt.
The Cautionary Tale
It's annoying that whenever I bring up this technique, I either need to disclaim "uh, the person who invented this later killed themselves," or not disclaim it and then have someone else bring it up.
I do think there's an important cautionary tale there, but it's a bit subtler. Copying my warning from Subskills of "Listening to Wisdom":
I believe Tuning Your Cognitive Strategies is not dangerous in a way that was causal in that suicide[4], except that it's kind of a gateway drug into weird metacognitive practices, and then you might find yourself doing weirder shit that either explicitly hurts you or subtly warps you in a way you don't notice or appreciate.
I think the way SquirrelInHell died was essentially (or, at least, analogous to) absorbing some Tacit Soulful Ideas, which collapsed a psychologically load-bearing belief in a fatal way.[5]
I do think there are people for whom Tuning Your Cognitive Algorithms is overwhelming, and people for whom it disrupts a coping mechanism that depends on not noticing things. If anything feels off while you try it, definitely stop. I think my post Scaffolding for "Noticing Metacognition" presents it in a way that probably helps the people who get overwhelmed but not the people who had a coping mechanism depending on not-noticing-things.
I also think neither of these would result in suicide in the way that happened to SquirrelInHell.
* it's a bit annoying I can't give this my own +9, since I crossposted it, even though I didn't write it.
It seems to me like people here started focusing on the wrong things. People who knew SquirrelInHell know that the suicide was likely caused by SquirrelInHell simply starting out already over the edge, e.g. hardcore obsessive Roko's basilisk research.
The issue at hand with the matter of tuning cognitive strategies is not "does this drive people crazy", it is "does delta reinforcement actually work", because if delta reinforcement actually works, then that is a very big deal.
As in, comparable in value to the rest of LessWrong put together. If this works, even if it only works on 10-25% of people (which Raemon's testimony indicates), then this is basically the world-saving near-term human intelligence augmentation (which Yudkowsky wants to scale).
Everything we have so far, on alignment and macrostrategy, came from human minds that were not really tuning their cognitive strategies. High-output passive thinking, and fun downhill thinking, have immense potential to set the world up so that someone, somewhere, eventually thinks of a solution to the world's most pressing problems.
This is not something to sleep on.
Everything we have so far, on alignment and macrostrategy, came from human minds that were not really tuning their cognitive strategies
I don't think that's true. I'd independently intuited my way into something like this post, and I suspect that a lot of people successfully doing high-impact cognitive work likewise stumble their way into something like this technique. Perhaps not consciously, nor at the full scale this post describes, but well enough that explicitly adopting it will only lead to marginal further improvements.
Which is the case for a lot of LW-style rationality techniques, I think. Most people who can use them and would receive benefits from using them would've developed them on their own eventually. Consuming LW content just speeds this process up.
So this sort of thing is useful at the individual level, but in most cases, you ain't "beating the market" with this — you just do well. And a hypothetical wide-scale adoption would lead to a modest elevation of the "sanity waterline", but not any sort of cognitive revolution (second-order effects aside).
Gahhhh I've been waiting for the rest of BeWellTuned for a while now. I was hoping it was held up for a happy reason, like the author being busy with work they found important. :(
I grabbed a personal copy. You can use wget --recursive --level=inf --convert-links --page-requisites --wait=1 "http://bewelltuned.com/"
to do so. This will not overload the website, both because the total number of pages is small and because it waits a bit between each page. I really wanted to go through this next year and don't want to lose the ability to.
The blogpost author (SquirrelInHell on LessWrong) died a while ago. I'm not sure who's currently paying for their website or how long it'll be up. I don't have the rights to this, but it seemed important enough to have on LessWrong that I decided to copy-paste the post and... I dunno, own whatever karmic debt I incur.
This is possibly my single-favorite rationality technique. The first day I tried this I immediately ended up teaching myself a valuable rationality-life-lesson due to the feedback loop it created. When I teach this technique at small workshops, typically ~25% of people go "oh wow that was immediately helpful." I haven't gotten as much value out of it as SquirrelInHell suggests (i.e. it's sometimes effortful to think, and they claim if you're doing it right it basically shouldn't be), but I also haven't really sat and trained it deliberately in-depth, and meanwhile I've gotten value from it each time I try it.
Text of original article:
Tuning Your Cognitive Strategies
What do you get out of it?
The good.
Better returns on thinking time.
Your cognition is much more powerful than just the part you have conscious access to, and it's crucial to make good use of it.
A small tweak to how your brain processes information in general is worth more than a big upgrade to your conscious repository of cognitive tricks.
Goal-oriented thinking.
When working on real-life problems, your peak performance matters less than the ability to simply think useful thoughts at all.
For example, if your current top priority is "start my own company", but you keep having insights about "what I'll say to my current boss when I finally quit"... that's maybe not the best way to make progress.
Improved ability to fix cognitive biases.
To the extent that other approaches work, it's because they manage to change your cognitive strategies. It's much easier when you know what you are doing.
More creativity and good ideas just "popping into your head".
There's no magic to it! Once you understand how the process works, it can be optimized for any purpose you choose.
Less anxiety about performing well in cognitive endeavors.
Once you realize exactly what is and what isn't under your conscious control, you stop beating yourself about not doing the impossible.
The bad.
Uncanny valley.
Most people already have a thinking style built on top of excessive conscious cognitive effort.
This often involves relying on side-effects of verbal and conscious thoughts, while mistakenly assigning the full credit for results to those effortful thoughts.
When you already have some conscious/verbal thoughts, it is tempting to imagine they are the only result of your thinking, and then try to pick up from there. But this is limiting, because the most power is in whatever generated that output.
As you tune your cognitive strategies you're likely to lose that thinking style.
While rebuilding from better foundations is certainly a good idea long-term, you'll probably need to slow down and re-learn some old tricks in a new framework.
Control anxiety.
Having good quality thinking happen effortlessly and automatically is great... unless you are a control freak, in which case you should Tune Your Emotional Processing before even reading this page.
How to tell if you have it?
Note: everyone has cognitive strategies, and challenging yourself with intellectual activity tends to improve them (e.g. mathematicians tend to be very good at a certain specific class of strategies). However, it is very unlikely that you have reached your full potential by blind gradient descent.
You know how to think without "trying hard".
The cost you pay for high quality thinking is mostly time, which you know needs to be free from other concerns.
You definitely don't pay the cost in effort or willpower.
Your thoughts don't get "stuck" when you most need them.
You can recognize and deal with every situation in which your mind stops generating useful output, whether it's because of going blank, spinning in circles, or going off into fantasy lands.
There's a constant stream of good ideas occurring to you.
If your brain is well tuned, it is going to produce useful output whenever it is feeling fresh and has a spare minute or two.
How does it work?
Consider this metaphor:
Imagine your mind as a giant bubbling cauldron full of "thoughts", including "feelings", "ideas", "words", "concepts", "memories", etc.
Some of those "thoughts" rise to the top of the cauldron, and get picked up by your conscious attention.
If the conscious "you" is like a cook standing over the cauldron, then the cook has only a very small spoon at their disposal. They can only taste whatever has bubbled to the surface.
Your creativity and thinking power come from the full depth of the cauldron.
The rules of how thoughts interact and form new thoughts are the same, regardless of whether those thoughts are conscious or not.
When you don't like whatever has risen up to the top of the cauldron, the last thing you want is to try to "fix it".
You only have access to the topmost layer, so it would be hopelessly ineffective anyway.
But it's much worse than that - by attempting to "fix" your cognition, you stop being able to see how it works.
How well your cognition works is shown not by what thoughts you have at the moment, but rather by the pattern of how one or more thoughts combine into a new thought ("cognitive strategy").
Instead, you want to learn as much as possible about the differences ("deltas") between each thought and the next, as they occur to you.
Your brain already has the ability to update its cognitive strategies (this is called "meta-cognitive reinforcement learning"). However, the usual mechanism works with unnecessary levels of indirection, as in:
Cognitive strategy -> Thought -> Action -> Reward or punishment
You get rewarded or punished for what you do (as measured by your brain's chemical responses). Good thoughts are more likely to be followed by good actions. Good cognitive strategies are more likely to generate good thoughts. On average, your brain will slowly update its cognitive strategies in the right direction.
Cognitive strategy -> Thought -> Reward or punishment
You have learned to be happy or unhappy about having certain ideas, even when you don't yet know how they apply to the real world. Now your brain gets rewarded or punished for thoughts, and on average good thoughts are more likely to be generated by good cognitive strategies. Your brain can update cognitive strategies faster, according to heuristics about what makes ideas "good".
However, by carefully looking at the "deltas" between conscious thoughts, we can get rid of the last remaining level of indirection (this is the key insight of this whole page!):
Cognitive strategy -> Reward or punishment
You have learned to perceive your cognitive strategies as they happen, and developed some heuristics that tell you whether they are good or bad. Now your brain can update cognitive strategies immediately, and do it regardless of the topic of your thoughts.
Even when you generate a useless idea from another useless idea, you can still track whether the cognitive strategy behind it was sound, and learn from the experience.
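The three reward chains above can be caricatured as a toy reinforcement-learning sketch (my own illustration, not part of the original essay; the `indirection_levels` knob and the noise scaling are assumptions): each level of indirection between a cognitive strategy and its eventual reward adds noise to the credit-assignment signal, so a learner with direct strategy-level feedback converges on the better strategy much faster.

```python
import random

def learn_strategy_values(indirection_levels, steps=2000, lr=0.1, seed=0):
    """Toy credit-assignment model. Two 'cognitive strategies' have true
    mean rewards 0.3 and 0.7; each level of indirection between strategy
    and reward is modeled as extra Gaussian noise on the learning signal."""
    rng = random.Random(seed)
    true_value = [0.3, 0.7]            # strategy 1 is genuinely better
    estimate = [0.5, 0.5]              # learner's running value estimates
    noise = 0.5 * indirection_levels   # assumption: noise grows with indirection
    for _ in range(steps):
        s = rng.randrange(2)           # sample a strategy to try
        reward = true_value[s] + rng.gauss(0, noise)
        # exponential moving average = a simple reinforcement update
        estimate[s] += lr * (reward - estimate[s])
    return estimate

# Strategy -> Thought -> Action -> Reward: two extra hops, noisy signal.
indirect = learn_strategy_values(indirection_levels=2)
# Strategy -> Reward: the "delta-noticing" shortcut, clean signal.
direct = learn_strategy_values(indirection_levels=0)
```

With direct feedback the estimates settle almost exactly on the true values; with two levels of indirection, the same number of trials leaves them wobbling around. That wobble is the model's stand-in for why a tighter feedback loop should tune strategies faster.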
How to learn it?
Note: awareness is a muscle. Time spent trying to see your thoughts more clearly is time well spent, regardless of the degree to which you succeed at getting any specific results.
Step 1: basic sanity checks.
For practice, we'll start with improving some simple local efficiency heuristics. They definitely aren't the final goal, but will later be useful regardless of what goal you have.
Pick a small problem, question or thinking puzzle of any kind.
It's best to use something that you think you can solve in at most a few minutes, and which makes it easy to see how well you are doing.
Choose something outside of your area of expertise.
In areas where you have a lot of experience, your thought process will be faster and more automatic.
Beware of "school trauma": think about whatever you want to think about, not things someone else would like you to think about.
If you bend to external pressure, you'll just reinforce the pathological pattern that thinking tools are your enemies, because they limit your freedom.
If you don't have any ideas, you can always pick "picking a puzzle" as your puzzle.
Notice a thought chain.
Load the puzzle into your memory, and let go.
Instead of focusing on solving the puzzle, focus on the question "where do my thoughts go when this puzzle enters my attention"?
At minimum, try to notice a sequence of two thoughts (the shortest possible "chain"): the initial question you asked yourself, and the first thought that occurred to you afterwards.
It's very important to focus on what feels like very quick, atomic transitions. Do not wait until you have a full word or sentence formed in your mind!
Aim for sub-second timescales. In fact, you can easily have a chain of 5 or more conscious thoughts in one second. If you think you can't, you're just missing skill in noticing it.
Repeat as necessary to get a clear read - just trying to do this is already valuable cognitive training.
Definitely change the topic when it gets too boring, which is when you no longer expect to be surprised by what you notice about your thoughts.
Example: just now, my thoughts:
looking at the typed word "Example:" -> wanting to know what to type next -> flash of dread at not having anything prepared -> noticing that flash of dread -> noticing that I noticed it -> looking at the whole thought chain so far -> noticing I executed the technique -> realizing I can use this as an example -> picking a grammatical form to describe it -> ...
Extract the pattern of "deltas".
After you become aware of at least one micro-scale thought chain, you can reflect on the principles that generated it.
This probably shouldn't be a very detailed or time-consuming analysis - your advantage here is that you have lots of raw data, so you don't need to be very parsimonious with it.
In fact, the act of reflecting on a thought chain will necessarily generate dozens of new thought chains. It's basically impossible to run out of data to reflect on and learn from.
Think which "deltas" are doing good work for you, and which aren't.
This will send a signal to your brain to learn and update the corresponding cognitive strategies.
Do not try to assume forceful control over what you think! This applies both to thoughts and "deltas".
All you ever need to do is notice useful deltas, and have that little "oh, nice!" reaction. That's it. Really.
The delta which moves you into noticing your deltas is very useful. Give it the reward it deserves!
Example 1:
After someone asked me to add examples here, my thought chain was roughly:
feeling of not wanting to bother -> checking reasons to do it -> noticing a cached thought that it's good to give examples -> doubting if this makes sense -> what happens if I just stop doing it -> intuition that this would be bad for BWT clarity -> flash of reasons why I care about writing BWT in the first place -> wanting to make a quick decision -> deciding to add an example -> ...
The deltas "planning X -> question reasons to do X" (appeared twice) and "suspicious belief -> try to negate it" seem useful.
There was also a pair of deltas "reasons feel shaky -> investigate" and "reasons feel solid -> use cache" which made me go off on a tangent once, but not in the other cases.
This means I'm also tracking in the background what it means for reasons to feel "solid", and already have cognitive strategies in place which update this information. This is all very useful.
Example 2:
On the other hand, a large amount of low-hanging fruit can be extracted from noticing deltas which are obviously broken, like in this thought chain:
blank mind -> noticing having a blank mind -> verbal thought "my mind is blank" -> feeling of despair -> blank mind -> ...
More examples of useful cognitive strategies, and common low hanging fruit:
If you hit an impasse (no new useful thoughts), relax and let your mind wander to related but different topics.
If your mind wanders too much, check why you even care about the problem.
If you think the same thought again, change the topic.
If you know what you are going to think, think something else.
If you think with lots of effort, remember it's useless and just watch your thoughts happen.
If you don't know in which direction to think, pick whatever seems fun.
Step 2: make sure to win.
Notice thought chains you generate naturally as you go about your life.
While local efficiency (not getting stuck etc.) is useful, it hardly has the power to change how you play the game. The biggest challenge in an open environment is knowing what to focus on in the first place.
This means that more than anything, you need to learn cognitive strategies that connect you to your goals, and means of achieving them.
For example, you can notice thought chains when you:
choose the next task to do,
do better or worse than expected,
plan your day or week,
process emotions,
change the topic in conversations,
accept or reject offers.
It's recommended to do it without setting up external reminders.
A far better solution is to reinforce cognitive strategies which would make you naturally remember at the right times.
E.g. one or two straightforward deltas can take you from "feeling of mild dissatisfaction with decision" to "wanting to know how to think better", from where it's close to remembering to reflect on your thought chains.
Get the deltas.
Reconstruct as much as you can of how your mind went there. In real life, you are not restricted to the micro scale.
Try to identify both low-level and high-level patterns, such as key insights, emotions, changes of topic, and inspiration.
How does your emotional state influence your deltas?
You probably have a different cognitive style when excited, angry, happy, anxious, overwhelmed, content, scared, restless etc.
Keep your goals in mind.
Warning: this is definitely not about "policing" your thinking. You should never try to put restrictions on the content and style of your thoughts.
Do not use this under pressure (when someone or something tells you what goals you should have).
Also do not fall into the trap of rejecting vague, dreamy thoughts as worthless.
The best use of your brain when tired is probably to let it unwind and think relaxed, creative thoughts.
How well have these particular deltas performed in the past?
This amounts to maintaining a rough "track record" for all of them.
What are they optimized to do?
You'll often find goals which you don't necessarily feel proud of, e.g. feel better, impress someone (who?), prove something to yourself.
However, trying to attack those goals would be a terrible mistake - they are there as a result of your real preferences.
If you are surprised by this, it just means you didn't know enough about yourself.
You need to understand where the patterns come from, and what you really want to achieve in any given situation (see also Tune Your Emotional Processing).
How well do you expect to do if you continue the current trend?
What would it be like to do better than that?
Further Progress
Turn the skill on itself.
Reinforce cognitive strategies that will help you with reinforcing cognitive strategies, and finding better ways to reinforce cognitive strategies.
The skill will then quickly bootstrap itself into your most powerful and general thinking tool.