Worth noting that the reason SquirrelInHell is dead is that they committed suicide after becoming mentally unstable, likely in part due to experimentation with exotic self-modification techniques. This one in particular seems fine AFAICT, but, ya know, caveat utilitor.
This seems reasonable to note; at the same time, I think that a lot of people who end up badly after experimenting with exotic self-modification techniques do so despite rather than because of the techniques.
This technique seems best if your problem is that your thoughts often go down loopy, unproductive, distressing paths, in a way that you can self-diagnose with confidence. Which is totally a real thing! I used to find my brain making up imaginary offenses people had committed against me, and I would feel angry or vindictive for a moment. Fortunately I developed a thought pattern that immediately just notes "… and that NEVER ACTUALLY HAPPENED," and then I move on from the moment. That's a situation where it's really easy to notice a bad thought pattern and change it, cutting out any real-world action. And once I'd done it a couple of times, I started noticing this as an overall cognitive strategy.
Another example is from my work as an engineer. During my first year or so doing research, I noticed several bad patterns of thought and behavior: throwing things out prematurely when I'd made a mistake, doing overly complex mental math, and trying to emergency-correct mistakes rather than going to my desk and working out an actual plan for a solution.
But in these cases, while “noticing my thoughts” was key to the solution, because it interrupted a bad pattern of behavior, it was noting the bad outcome, then working backwards to a specific root cause that got me there. Continuously monitoring my stream of thoughts was not part of this process. It seems like a technique of continuous thought-monitoring would be more important if the problem you were having was with your thoughts themselves. If your problem manifests as behavior, then paying attention to the stream of behavior and figuring out the root cause seems best.
Yeah, I considered explicitly leaving that note at the beginning but felt like this was just sufficiently different from the thing that led to their suicide that adding "WARNING! BUT ALSO I'M NOT THAT WORRIED?" didn't seem overall worth it.
Romeosteven's comment updates me a bit, though my current guess is this is still a fairly different reference class of problem (and the post comes with its own warnings about the thing romeosteven is pointing at, assuming I understand it properly).
Man, it does make me sad that whenever I bring up this technique, there’s an obligatory version of this conversation.
That's understandable. But it does seem like the sort of thing I'd want to hear about before trying such a technique. Hopefully people can take it for what it's worth. (i.e. I don't think we should automatically discount such techniques or anything.)
I think that's somewhat reasonable in this case, but, want to flag that it should be possible at some point to reach an epistemic state where you can say "okay, yeah, it was mostly coincidence, or at least not relevant, that this happened to this person." Like, if someone invented a car, and then used the car to commit suicide by driving over a cliff, you might go "holy shit, maybe I should be worried about cars and suicide?", and if you didn't know much about cars maybe this would be a reasonable thing to worry about at first. But, like, it shouldn't be the case that forever after, whenever someone sells a car, they warn you that the guy who invented cars used them to commit suicide. It's privileging a hypothesis.
I think in this case it's less crazy than in the car case to worry about that, but, I do want to push back against the impulse to always have a disclaimer here.
In cases like this I strongly prefer to be given the facts (or at least pointed toward them) and allowed to make my own judgment as to how relevant they are.
Whether you choose to join the conversation and present the argument for their irrelevance is up to you, but sharing all the facts that your audience might consider important, rather than deciding for them that some apparently-relevant ones are best left unsaid, is IMO more respectful and reduces the risk of doing preventable harm in cases where your judgment is mistaken.
In the car case I think it's obvious that car usage is not causally upstream of suicidality. If the inventor of the car had died in a car accident, I do think that would be a relevant data point about the safety of cars, albeit not one that needs to be brought up every time. And in the real world, we do pretty universally talk about car crashes and how to avoid them when we're teaching people to drive. From that perspective romeosteven's comment is probably better, and mine just got more upvotes because of the lurid details. (Although tail risks are important. And I think there's a way in which the author's personality can get imprinted in a text, which makes the anecdote slightly more relevant than in the car case.)
Is your worry more about "maybe this technique is more dangerous than it looks?" or "maybe people will follow up on this by generally following SquirrelInHell's footsteps, and maybe not all those footsteps are safe?"
More the latter. Or more like, doing things like this technique too much/too hard could be dangerous.
I think that might be true, but, at that level, I think it kinda makes more sense to put the warning over, like, the entirety of rationality techniques, and singling out ones that SquirrelInHell wrote up doesn't actually seem like the right abstraction.
Like, I do generally think there's a failure mode to fall into here. I don't think SquirrelInHell is the only person to have fallen into it.
This post does seem like it warrants some specific warnings (which the original post already included). But I think those warnings are mostly unrelated to what ultimately went wrong.
Source/evidence? I believe you but this seems worth checking.
Worth noting that both this and the fixing-the-motor-cortex skill they advocate are very closely related to traditional Buddhist insight practices, and that without supporting emotional integration (Tune Your Emotional Processing, with Focusing as the particular version that SquirrelInHell advocated, though a variety of self-therapy modalities can work) they can be destabilizing.
I'm interested in more details about the failure modes to watch out for here. i.e. what sort of things might you notice happening to you if you were en route to being destabilized?
The post does explicitly warn about this, but I happened to a) already have some flavor of focusing by the time I started, and b) never actually ran at it that hard, so, I might still be underestimating how worried to be about it despite the warnings.
One possible issue that comes to mind is that if you start paying more attention to the low-level movements of your thoughts, you might start noticing thoughts that parts of you get triggered by, e.g. if they feel like particular kinds of thoughts are shameful to have. One concrete failure mode that I think many rationalists would be susceptible to, would be to notice something like
blank mind -> noticing having a blank mind -> verbal thought "my mind is blank" -> feeling of despair -> blank mind -> ...
and then feeling additional despair and shame over your mind being stuck in an unproductive cycle and feeling that you should be able to do better. That may then create another layer of shame and despair on top of the original one. Although the original instructions say that you shouldn't use this to police your mind, getting triggered in this way may create a compulsion to do so anyway.
Another could be mysterious feelings of dread and feeling bad, if you started noticing various thoughts/emotions that parts of you had been trying to block. Though I would expect that the most natural consequence of that would be you just losing the motivation to use the technique pretty rapidly, with it becoming another of those "that felt really useful but for some reason I don't feel any interest in doing it anymore, shrug" things.
I think the main risk would be if you had used this technique extensively enough to build up an increased introspective awareness that was harmless at first, but then started catching more of whatever blocked trauma you had, and had by that point built up sufficiently that just stopping the practice wasn't enough to bring it down anymore. That kind of scenario would be similar to the cases where people start getting trauma symptoms from doing mindfulness practices; if one has already tried that kind of thing before and hasn't felt bad, that might be an indication (on top of the base rate, which I think is reasonably low) that it's low-risk.
There's also the fact that the thought processes themselves may be protecting you from various traumas or doing other subconscious things for you. Since this tuning process isn't based on introspection but on conscious judging of your subconscious processes, you could accidentally tune yourself away from emotionally load-bearing coping strategies.
I meant that emotional integration (like focusing) is helpful for avoiding destabilization.
I would say the signs are the normal sort you'd see in mental health breakdowns:
Depression, social withdrawal
Hostility or suspiciousness, extreme reaction to criticism
Deterioration of personal hygiene
Flat, expressionless affect
Inability to cry or express joy, or inappropriate laughter or crying
Oversleeping or insomnia; forgetful, unable to concentrate
Odd or irrational statements; seeming difficulty with communicating in a normal way
One of my "responsible use" notes in "How To Observe Abstract Objects" seems directly relevant here:
However, a few people seem to have an overall cognitive strategy that crucially depends on not looking at things too closely (or something like that), and this is actively bad for some of them. If you try this for a minute and hate it, especially in an “I feel like I’m going crazy” kind of way, I do not recommend continuing. Go touch some grass instead. I’ve never seen this cause damage in just a few minutes (or at all, as far as I can tell), but I do think there’s a danger of dismantling somebody’s central coping mechanism if they push past their own red flags about it over and over again, or for a whole hour at once.
The "notice something new" exercise in that post is extremely similar to "pay attention to the delta between thoughts". Seems to me that it's directing attention toward the same psychological event type, just not in the context of attempting to solve a problem.
As of writing, I have spent about four months experimenting with the Tune Your Cognitive Strategies (TYCS) method and I haven't gotten any visible direct benefits out of it.
Some of the indirect benefits I've gotten:
The biggest thing I've learned is that better introspective ability and awareness seems to be the most load-bearing skill underlying TYCS. I'm less enthusiastic about the notion that you can 'notice your cognitive deltas' in real-time almost all the time -- this seems quite costly.
Note that Eliezer has also described that he does something similar. And more interestingly, it seems like Eliezer prefers to invest in what I would call 'incremental optimization of thought' over 'fundamental debugging':
EY: Your annual reminder that you don't need to resolve your issues, you don't need to deal with your emotional baggage, you don't need to process your trauma, you don't need to confront your past, you don't need to figure yourself out, you can just go ahead and do the thing.
On one hand, you could try to use TYCS or Eliezer's method to reduce the cognitive work required to think about something. On the other hand, you could try to use integration-based methods to solve what I would consider 'fundamental issues' or deeper issues. The latter feels like focusing on the cognitive equivalent of crucial considerations; the former feels like incremental improvements.
And well, Eliezer has seemed to be depressed for quite a while now, and Maia Pasek killed herself. Both of these things seem to me like evidence for my hypothesis that, given scarce cognitive resources, investing in incremental optimization of the sort involved in TYCS and Eliezer's method is less valuable than the fundamental debugging involved in integration / parts-work mental techniques.
For the near future, I plan to experiment with and use parts-work mental techniques, and will pause my experimentation and exploration of TYCS and TYCS-like techniques. I expect that there may be a point at which one has a sufficiently integrated mind such that they can switch to mainly investing in TYCS-like techniques, which means I'll resume looking into these techniques in the future.
If you are willing to share, can you say more about what got you into this line of investigation, and what you were hoping to get out of it?
For my part, I don't feel like I have many issues/baggage/trauma, so while some of the "fundamental debugging" techniques discussed around here (like IFS or meditation) seem kind of interesting, I don't feel too compelled to dive in. Whereas, techniques like TYCS or jhana meditation seem more intriguing, as potential "power ups" from a baseline-fine state.
So I'm wondering if your baseline is more like mine, and you ended up finding fundamental debugging valuable anyway.
I'm not mesaoptimizer, but, fyi my case is "I totally didn't find IFS-type stuff very useful for years, and then one day I just suddenly needed it, or at least found myself shaped very differently such that it felt promising." (see My "2.9 trauma limit")
If you are willing to share, can you say more about what got you into this line of investigation, and what you were hoping to get out of it?
Burnt out after almost a year of focusing on alignment research. I wanted to take a break from alignment-ey stuff, and also to systematically fix the root causes behind what I considered burnout.
I don’t feel like I have many issues/baggage/trauma
I felt similar when I began this, and my motivation was not to 'fix issues' in myself but more "hey I have explicitly decided to take a break and have fun and TYCS seems interesting let's experiment with it for a while, I can afford to do so".
I think it's worth sharing here some details about SquirrelInHell's suicide, specifically to point out to new people that Cognitive Tuning was not what killed SquirrelInHell.
This comment is from Slimepriestess, who is a friendly former-Zizian. I wouldn't necessarily trust 100% of everything said by a former Zizian (though they should definitely not be treated as a pariah). But it's pretty well known that SquirrelInHell was doing a ton of over-the-top shit at once (e.g. simultaneously attempting to use dolphin-like sleep deprivation to turn half of their brain into Lawful Evil and the other half into Transgender Good), and was simultaneously hanging around a bunch of violent and dangerous people, and they were all doing hardcore Roko's Basilisk research.
imo, Maia was trans, and the components of her mind (the alter(s) they debucketed into "Shine") saw that the body was physically male and decided that the decision-theoretically correct thing to do was to basically ignore being trans in favor of maximizing influence to save the world. Choosing to transition was pitted against being trans because of the cultural oppression against queers. I've run into this attitude among rationalist queers numerous times, independently of Ziz, and "I can't transition, that will stop me from being a good EA" seems to be a troublingly common sentiment.
Prior to getting involved with Ziz, the "Shine" half of her personality had basically been running her system on an adversarial 'we must act or else' fear response loop around saving the multiverse from evil using timeless decision theory in order to brute force the subjunctive evolution of the multiverse.
So Ziz and Squirrel start interacting, and at that point the "Maia" parts of her had basically been, like, traumatized into submission and dissociation, and Ziz intentionally stirs up all those dissociated pieces and draws the realization that Maia is trans to the surface. This caused a spiraling optimization-priority conflict between two factions whose contradictory validity Ziz had empowered, by helping them reify themselves and define the terms of their conflict in her zero-sum, black-and-white, good-and-evil framework.
But Maia didn't kill them; Shine killed them. I have multiple references that corroborate that. The "beat Maia into submission and then save the world" protocol that they were using cooked up all this low-level suicidality and "i need to escape, please where is the exit, how do i decision-theoretically justify quitting the game?" type feelings of hopelessness and entrapment. The only "exit" that could get them out of their sense of horrifying heroic responsibility was dying, so Shine found a "decision-theoretic justification" to kill them, and did. "Squirrel's doom" isn't just "interhemispheric conflict"; if anything it's much more specific. It's the specific interaction of:
"i must act or the world will burn. There is no room for anything less than full optimization pressure and utilitarian consequentialism"
vs
"i am a creature that exists in a body. I have needs and desires and want to be happy and feel safe"
This is a very common EA brainworm to have and I know lots of EAs who have folded themselves into pretzels around this sort of internal friction. Ziz didn't create Squirrel's internal conflict she just encouraged the "good" Shine half to adversarially bully the evil "Maia" half more and more, escalating the conflict to lethality.
Generally, I think people should be deferring to Raemon on the question of "is Cognitive Tuning safe?" and should, at minimum, message him to get his side of the story. This situation is a really big deal; if Cognitive Tuning works, that's successful human intelligence augmentation, that is world-saving shit. Cognitive Tuning alone could become an entire field of intelligence augmentation, AND something that anyone with average intelligence can contribute heavily towards (since having a more typical mind will yield more insights that can be picked up and worked with by other people with more typical minds).
Another thing I notice after a few years of using this:
The OP says:
- Your brain already has the ability to update its cognitive strategies (this is called "meta-cognitive reinforcement learning"). However, the usual mechanism works with unnecessary levels of indirection, as in:
- Cognitive strategy -> Thought -> Action -> Reward or punishment
- You get rewarded or punished for what you do (as measured by your brain's chemical responses). Good thoughts are more likely to be followed by good actions. Good cognitive strategies are more likely to generate good thoughts. On average, your brain will slowly update its cognitive strategies in the right direction.
- Cognitive strategy -> Thought -> Reward or punishment
- You have learned to be happy or unhappy about having certain ideas, even when you don't yet know how they apply to the real world. Now your brain gets rewarded or punished for thoughts, and on average good thoughts are more likely to be generated by good cognitive strategies. Your brain can update cognitive strategies faster, according to heuristics about what makes ideas "good".
- However, by carefully looking at the "deltas" between conscious thoughts, we can get rid of the last remaining level of indirection (this is the key insight of this whole page!):
- Cognitive strategy -> Reward or punishment
- You have learned to perceive your cognitive strategies as they happen, and developed some heuristics that tell you whether they are good or bad. Now your brain can update cognitive strategies immediately, and do it regardless of the topic of your thoughts.
- Even when you generate a useless idea from another useless idea, you can still track whether the cognitive strategy behind it was sound, and learn from the experience.
I think the author thinks of this as the primary insight here (i.e. getting to: "Cognitive strategy -> reward/punishment"). And... I'll be honest, I think this works and it makes sense to me, but it doesn't work so obviously that I'm like "yes this underlying theory definitely checked out."
But what I think is both more obvious, and still a useful stepping stone, is transitioning more from "Cognitive strategy -> Thought -> Action -> Reward or punishment" to "Cognitive strategy -> Thought -> Reward or punishment". A lot of my thoughts are obviously dumb (or useful) upon first glance. And shifting how much of my feedback loop happened within ~3 seconds vs longer timescales still seems very helpful.
Does anyone who knew SquirrelInHell know the subskills in the skill tree they never got around to writing?
EDIT: To clarify, are there any known skills which are equivalent to the Red subskills in BWT's skill tree? I am very impressed with the exposition on BWT, and would guess the remaining skills were just as high value. Perhaps more than I'd naively guess, if there's some synergy between them. If you think you know them, please speak out so we can get the complete BWT skillset.
I didn't know them and can only speak to how I did the tuning-ontology thing. For about 2 weeks, I noted any time I was chunking reasoning using concepts — many of them familiar LW concepts, lots of others from philosophy, econ, law, and common-sense sayings, and some of my own that I did or didn't have names for. This took a bit of practice but wasn't that hard to train a little 'noticer' for. After a while, the pace of new concepts being added to the list started to slow down a lot; this was when I had around 250 concepts.

I then played around with the ontology of this list, chunking it different ways (temporal, provenance, natural-seeming clusters of related concepts, domain of usefulness, etc.). After doing this for a bit it felt like I was able to get some compressions I didn't have before, and overall my thinking felt cleaner than before.

Separately, I also spent some time explicitly trying to compress concepts into handles as pithy as possible, using visual metaphors and other creativity techniques to help. This also felt like it cleaned things up. Compression helps with memory, because chunking is how we use working memory for anything more complicated than atomic bits of info. Augmenting memory also relied on tracking very closely whether or not a given representation (such as notes, drawings, etc.) was actually making it easier to think, or was just hitting some other easily Goodharted metric, like making me feel more organized.
With regard to 'tracking reality with beliefs' the most important thing I ever noticed afaict is whether or not my beliefs 1. have fewer degrees of freedom than reality and thus have any explanatory power at all and avoid overfitting, 2. vary with reality in a way that is oriented towards causal models/intervention points that can easily be tested (vs abstraction towers).
With regard to 'tracking reality with beliefs' the most important thing I ever noticed afaict is whether or not my beliefs 1. have fewer degrees of freedom than reality and thus have any explanatory power at all and avoid overfitting, 2. vary with reality in a way that is oriented towards causal models/intervention points that can easily be tested (vs abstraction towers).
This seems like a potentially quite helpful concept to me.
I'd be interested in more details of how you go about checking for degrees of freedom.
I think when I do this sort of sanity-checking for myself, things I sometimes do include "wait, why do I believe this in the first place?" and "consider the world where the opposite is true, how would I know?" but those seem like different mental motions.
Easiest is a fictional dialog between a pro- and anti-position person. The anti person brings counterevidence and then gets to see how the pro position responds. If they respond by remapping the moving parts of the model in a different way, that indicates extra degrees of freedom. Then you can have an easier time noticing when you are doing this same move yourself, i.e. backpedaling and trying to 'save' a position when someone gives you pushback on it.
I think that list would be very helpful for me.
Can you form a representative sample of your "list"? Or send the whole thing, if you have it written down.
partially exists here, but very little explanation https://conceptspace.fandom.com/wiki/List_of_Lists_of_Concepts
This is neat.
Did you write all that, or who did?
(EDIT:) This taxonomy seems especially nice. Basically each point there would need examples and exercises and then that would be a pretty cool problem solving toolkit training program.
Thanks, I wrote it and found the process of recording my thoughts and organizing them to be helpful.
My unedited notes while reading this post, including an initial exercise log:
"Your cognition is much more powerful than just the part you have conscious access to, and it's crucial to make good use of it."
heck yeah
"A small tweak to how your brain processes information in general is worth more than a big upgrade to your conscious repository of cognitive tricks."
"More creativity and good ideas just "popping into your head"."
"Once you realize exactly what is and what isn't under your conscious control, you stop beating yourself about not doing the impossible."
What does it mean to "tune" your "cognitive strategies"?
"Having good quality thinking happen effortlessly and automatically is great... unless you are a control freak, in which case you should Tune Your Emotional Processing before even reading this page."
"How to tell if you have it?"
"When you don't like whatever has risen up to the top of the cauldron, the last thing you want is to try to "fix it". You only have access to the topmost layer, so it would be hopelessly ineffective anyway. But it's much worse than that - by attempting to "fix" your cognition, you stop being able to see how it works. How well your cognition works is shown not by what thoughts you have at the moment, but rather by the pattern of how one or more thoughts combine into a new thought ("cognitive strategy"). Instead, you want to learn as much as possible about the differences ("deltas") between each thought and the next, as they occur to you."
meta: i appear to be halfway through the post and part of me is still waiting for the post to start because it's happening in the form of bullet points, which apparently i categorize as "part of an introduction, not the body of a post". but actually i think this just is the post.
"However, by carefully looking at the "deltas" between conscious thoughts, we can get rid of the last remaining level of indirection (this is the key insight of this whole page!): Cognitive strategy -> Reward or punishment You have learned to perceive your cognitive strategies as they happen, and developed some heuristics that tell you whether they are good or bad. Now your brain can update cognitive strategies immediately, and do it regardless of the topic of your thoughts. Even when you generate a useless idea from another useless idea, you can still track whether the cognitive strategy behind it was sound, and learn from the experience."
I just reread this.
Since writing this post I've tried to do this in workshops a few times. People struggled a lot with it. One thing I noticed was that Logan is pretty skilled at the related subskills here, and it still requires a lot of attention and iteration to grok it and get the hang of it.
I’m not sure whether I grokked the skill or not when I first did it. I think I was doing a cruder thing that was still really helpful. I’m honestly still not sure whether the thing with the deltas is helpful over the raw stream of thoughts.
After iterating in workshops a bit, I now start people off with 'load the puzzle up, and then notice the very first thing that pops into your mind, and then stop. And then look at it a bit. And then go back to the puzzle again and notice the first two things that happen in your mind, and stop. And only then go on to observing yourself as you solve the puzzle.'
Previous discussion: https://www.lesswrong.com/posts/hGtBH7SJy6Y2SmAj6/tune-your-cognitive-strategies
Longevity-wise, https://squirrelinhell.blogspot.com/ should be up indefinitely since AFAIK, Blogspot/Blogger has no nasty deletion policies (although I have not checked specifically, they are one of the oldest blog hosts on the Internet, and apparently they are considered safe from Google axing because they are used internally so much for Google official posting). http://bewelltuned.com/ seems to duplicate a lot of the content, and the copyright date suggests most of it has been there for at least several years, and it looks easily crawled, so it should be well-archived.
The way I've personally used this technique/practice is to have a laptop screen with two pages side by side — one as a notebook where I can jot thoughts down, and one with whatever puzzle I'm trying to solve. (I found brilliant.org to be a good source of puzzles.)
I try to jot thoughts down as I have them (often with very rough notes that only make sense to me, since trying to write down too much would slow down the process too much).
The post emphasizes noticing thoughts at the sub-second level. Obviously, writing out a focus-handle for 5 different thoughts in the space of a second isn't practical. But what I do here is often let myself have a few thoughts/impulses in a row, then go back and try to notice/remember them all, and then write them down after the fact in an attempt to crystallize them and reinforce the noticing process.
Do you think having a well-defined puzzle (like a math problem) is a better way to make the usefulness of this technique clear?
A lot of what I work on are more open-ended questions, like trying to remember how techniques work or what concepts are about (e.g. ANOVA). In these cases, the process is more about recalling or reconstructing various insights, definitions, and equations, with no clear stopping point. I'm wondering if I've been trying to apply this cognitive-tuning technique to a problem it's not well suited for?
I think the technique is relevant to basically all cognition, but working on well-defined problems is useful for the "figure out if it's actually helping" and "fine-tune your approach to ensure you're using it usefully" steps.
(When I use this technique for more open-ended problems I think it's still useful to have two screen-pages open, one of which is still more for rough, unstructured notes and one of which is more for "here's my distillation of my current understanding of the problem.")
It seems to me like people here started focusing on the wrong things. People who knew SquirrelInHell know that the suicide was likely caused by SquirrelInHell simply starting out already over the edge, e.g. hardcore obsessive Roko's basilisk research.
The issue at hand with the matter of tuning cognitive strategies is not "does this drive people crazy", it is "does delta reinforcement actually work", because if delta reinforcement actually works, then that is a huge deal. As in, like, comparable in value to the rest of LessWrong put together. If this works, even if it only works on 10-25% of people (which Raemon's testimony indicates), then this is basically the world-saving near-term human intelligence augmentation (which Yudkowsky wants to scale).
Everything we have so far, on alignment and macrostrategy, came from human minds that were not really tuning their cognitive strategies. High-output passive thinking, and fun downhill thinking, have immense potential to set the world up so that someone, somewhere, eventually thinks of a solution to the world's most pressing problems.
This is not something to sleep on.
Everything we have so far, on alignment and macrostrategy, came from human minds that were not really tuning their cognitive strategies
I don't think that's true. I'd independently intuited my way into something like this post, and I suspect that a lot of people successfully doing high-impact cognitive work likewise stumble their way into something like this technique. Perhaps not consciously, nor at the full scale this post describes, but well enough that explicitly adopting it will only lead to marginal further improvements.
Which is the case for a lot of LW-style rationality techniques, I think. Most people who can use them, and would benefit from them, would've developed them on their own eventually. Consuming LW content just speeds this process up.
So this sort of thing is useful at the individual level, but in most cases, you ain't "beating the market" with this — you just do well. And a hypothetical wide-scale adoption would lead to a modest elevation of the "sanity waterline", but not any sort of cognitive revolution (second-order effects aside).
Gahhhh I've been waiting for the rest of BeWellTuned for a while now. I was hoping it was held up for a happy reason, like the author being busy with work they found important. :(
I grabbed a personal copy. You can do the same with:
wget --recursive --level=inf --convert-links --page-requisites --wait=1 "http://bewelltuned.com/"
This will not overload the website, both because the total number of pages is small and because it waits a second between each page. I really want to go through this next year and don't want to lose the ability to.
The LessWrong Review runs every year to select the posts that have most stood the test of time. This post is not yet eligible for review, but will be at the end of 2024. The top fifty or so posts are featured prominently on the site throughout the year.
Hopefully, the review is better than karma at judging enduring value. If we have accurate prediction markets on the review results, maybe we can have better incentives on LessWrong today. Will this post make the top fifty?
The blogpost author (SquirrelInHell on LessWrong) died a while ago. I'm not sure who's currently paying for their website or how long it'll be up. I don't have the rights to this, but it seemed important enough to have on LessWrong that I decided to copy-paste the post and... I dunno, own whatever karmic debt I incur.
This is possibly my single-favorite rationality technique. The first day I tried this I immediately ended up teaching myself a valuable rationality-life-lesson due to the feedback loop it created. When I teach this technique at small workshops, typically ~25% of people go "oh wow that was immediately helpful." I haven't gotten as much value out of it as SquirrelInHell suggests (i.e. it's sometimes effortful to think, and they claim if you're doing it right it basically shouldn't be), but I also haven't really sat and trained it deliberately in-depth, and meanwhile I've gotten value from it each time I try it.
Text of original article:
Tuning Your Cognitive Strategies
What do you get out of it?
How to tell if you have it?
Note: everyone has cognitive strategies, and challenging yourself with intellectual activity tends to improve them (e.g. mathematicians tend to be very good at a certain specific class of strategies). However, it is very unlikely that you have reached your full potential by blind gradient descent.
How does it work?
How to learn it?
looking at the typed word "Example:" -> wanting to know what to type next -> flash of dread at not having anything prepared -> noticing that flash of dread -> noticing that I noticed it -> looking at the whole thought chain so far -> noticing I executed the technique -> realizing I can use this as an example -> picking a grammatical form to describe it -> ...
feeling of not wanting to bother -> checking reasons to do it -> noticing a cached thought that it's good to give examples -> doubting if this makes sense -> what happens if I just stop doing it -> intuition that this would be bad for BWT clarity -> flash of reasons why I care about writing BWT in the first place -> wanting to make a quick decision -> deciding to add an example -> ...
blank mind -> noticing having a blank mind -> verbal thought "my mind is blank" -> feeling of despair -> blank mind -> ...
Further Progress