
A fool learns from their own mistakes.
The wise learn from the mistakes of others.

– Otto von Bismarck

 

A problem as old as time: The youth won't listen to your hard-earned wisdom. 

This post is about learning to listen to, and to communicate, wisdom. It is very long – I considered breaking it up into a sequence, but each piece felt necessary. I recommend reading slowly and taking breaks.

To begin, here are three illustrative vignettes:

The burnt out grad student

You warn the young grad student "pace yourself, or you'll burn out." The grad student hears "pace yourself, or you'll be kinda tired and unproductive for like a week." They're excited about their work, and/or have internalized authority figures yelling at them if they aren't giving their all. 

They don't pace themselves. They burn out.

The oblivious founder

The young startup/nonprofit founder says "We're going to solve problem X!". You say "oh, uh you know X is real difficult? Like, a lot of talented people tried to solve X and it turned out to be messy and intractable. Solving X is important but I think you need to come at it from a pretty different angle, or have a specific story for why your thing is going to work."

They hear "A bunch of people just didn't really try that hard." If you follow it up with "Look man I really want you to succeed here but I think there are some specific reasons Y that X is hard. And I'd love it if you stood on the shoulders of giants instead of adding another corpse to the pile of people who didn't even make it past the first ring of challenges." 

They hear "okay, we need to spend some time thinking specifically about Y, but our plan is still basically right and we can mostly plow ahead." (Also they think your explanation of Y wasn't very good and probably you're actually just an idiot who doesn't really understand Xs or Ys).

A year later they write a blog post saying "We tried to fix X, alas it was hard", which does not contribute particularly interesting new knowledge to the pursuit of solving X.

The Thinking Physics student

You've given a group of workshop participants some Thinking Physics problems. 

As ~always, some people are like "I can't do this because I don't have enough physics knowledge." You explain that the idea is that they figure out physics from first principles and their existing experience. They look at you skeptically but shrug and start dutifully doing the assignment.

An hour later, they think they probably have the answer. They look at the answer. They are wrong. You ask them to ask themselves "how could I have thought that faster?", and they say "Well, it was impossible. Like I said, I didn't have enough physics knowledge." 

You talk about how the idea is to approach this like the original physicists, who didn't have physics knowledge and needed to figure it out anyway. And, while some of these problems are selected for being counterintuitive, they're also selected for "smart non-physicists can often figure them out in a few hours."

They hear: "The instructor wanted me to do something that would burn out my brain and also not work." (In their defense, their instructor probably didn't actually explain the mental motions very well)

They end up spending quite a lot of effort and attention on loudly reiterating why it was impossible, and ~0 effort on figuring out how they could have solved it anyway.

You wish desperately to convey to them an attitude that "impossible problems can be solved." 

Okay, like, not actually impossible problems, obviously those are actually impossible. But, in the moment, when the student is frustrated and the problem is opaque and their working memory is full and they don't have room to distinguish "this feels impossible" from "this is impossible", the thing they actually need to know in their heart is that impossibility can be defeated, sometimes, if you reach beyond your excuses and frames and actually think creatively.


For the past four years (in particular, since overhearing one-too-many oblivious startup founders and seeing their eventual postmortem blogpost), I've been haunted by the question: 

"Man, how do I not be that guy?"

And, more ambitiously: 

"Man, it would be great if somehow humanity unlocked the power to transfer wisdom."

There's been a lot of discussion of how humanity is bottlenecked on transfer of tacit knowledge. Deep expertise has lots of fiddly bits that are hard to compress. Often the only way to transfer it is by pairing individually with a mentor, and watching what they do until you internalize those fiddly bits. 

Sometimes, knowledge is "tacit" because it can be learned visually, but not through writing. There's been a revolution in how people can share this sort of knowledge via YouTube. The military has also explored how you can train tacit knowledge via simulation exercises that force people to quickly discover the pieces for themselves, even though it's hard to operationalize them as a series of deliberate practice steps.[1]

Some of the things called "wisdom" seem to me basically like run-of-the-mill tacit knowledge. But some of the things called "wisdom" have a particular flavor, which I might call "tacit soulful knowledge." It is knowledge whose acquisition requires you to either:

  1. Update some beliefs that are psychologically load-bearing, and/or connected to your meaning-making network.
  2. Or, update some particularly deep beliefs a lot. Either the belief node is tightly intertwined with lots of other beliefs, or maybe it's a single belief but it's just... like, a big one. The magnitude of the update you'll need to make about it is extreme, and you'll keep being tempted to think "okay I got it" when you don't yet got it.

The first scenario is very challenging to update on because it's scary, and you won't necessarily have coping mechanisms to handle the update safely.

The second is very challenging because it is exhausting and effortful and people don't feel like it. Also, major beliefs like this are often psychologically load-bearing, so they have the first problem too.

The other day, I think I got deconfused about all the moving pieces here. 

I think there are ~15 skills you need (in some combination between the tacit-soulful-knowledge Giver and Receiver). They are all difficult skills. Most of the skills individually require a different bit of tacit knowledge or wisdom as a prerequisite.

But they're all useful skills you probably eventually want anyway. And, if you have them all, I think you can gain a skill of "Listening to Wisdom." And maybe, you can become someone who more easily learns from the mistakes of others, not merely in a shallow way, but a pretty deep one.

I'm excited about this because it feels on-the-path towards somehow building an engine that transfers wisdom better, at scale. 

I'm also excited because, while I think I have most of the individual subskills, I haven't personally been nearly as good at listening to wisdom as I'd like, and feel traction on trying harder.


Epistemic Status

I came up with this a couple months ago. It's not very battle tested, but I have found it directionally useful. Part II explores the initial test case. I'll say up-front: I don't think I've done anything like a complete Tacit Soulful Knowledge transfer, but I do think I accomplished a few partial transfers that feel novel and valuable to me. 

If nothing else, thinking in this frame has made it clear how high the "skill ceiling" on actually listening is. And it's motivated me to seriously, for real, attempt to listen to what people are trying to say, moreso than I ever had before. I think I've already gotten significant value from that, even if the specific models here turn out to be wrongly framed or overhyped.

But, having spent a fair amount of time on calibration training on particular flavors of new insights, I do feel fairly confident (>65%) that in a year I'll still think this model was obviously useful in some way, even if not necessarily the way I currently imagine. [2]

 

Be careful 

As much as wisdom is great, I am worried about ideas here being misused. 

Cults work partly via rapid tacit-soul-knowledge transfer, in a way that overwhelms and undermines a new recruit's defenses.[3]

I think even well-intentioned people could hurt themselves, even if they're fairly knowledgeable. The skills required here are nuanced, and they are exactly the kind of skills people tend to have blind spots about, overestimating how good they are at them.

Not all Tacit Soulful Knowledge is "Wisdom." (See Tacit Soulful Trauma)

The ideas in this post are downstream of Tuning your Cognitive Strategies, an essay by SquirrelInHell. SquirrelInHell, notably, eventually killed themselves. When people find that out, they often express a lot of worry that Tuning Your Cognitive Strategies is dangerous. I am on record (and stand by) believing that Tuning Your Cognitive Strategies was not dangerous in a way that was causal in that suicide[4], except that it's kind of a gateway drug into weird metacognitive practices, and then you might find yourself doing weirder shit that either explicitly hurts you or subtly warps you in a way you don't notice or appreciate.

I think the way SquirrelInHell died was essentially (or, at least, analogous to) absorbing some Tacit Soulful Ideas, which collapsed a psychologically load-bearing belief in a fatal way.[5]

I might be wrong about Tuning Your Cognitive Strategies being safe, but the point here is that "Listening to Tacit Soulful Wisdom" seems more risky.

I am publishing this anyway because I don't think it's intrinsically dangerous. And in the 2010s I saw several orgs avoid sharing their cognitive techniques widely for fear of them being dangerous in some way, and I think this didn't really help with anything and maybe caused additional harm because secrecy subtly fucked things up.

Meanwhile, I think "figure out how to listen to and share wisdom" is a very important topic to make progress on, and sharing info about it seems more likely to eventually capture the upsides as well as avoid downsides.

(see Don't ignore bad vibes you get from people for some more thoughts here)

 

PART I

 

An Overview of Skills

There are ~15 skills that seem important (each with their own subskills), but you can roughly group them into four clusters:

  • "Accurate, impactful storytelling"
  • "Accurate, impactful storylistening"
  • "Good judgment about what's healthy for you"
  • "Good judgment about what's healthy for someone else"

Between them, the Giver and Receiver need to share an evocative experience that conveys something important. They need deep clarity about the subtle moving parts, such that they are sharing a true story rather than a fake one. And they need good judgment about whether this is helpful rather than harmful.

Essentially, this whole concept is a reframing of the classic ideas "Humans communicate important truths via storytelling" and "If we don't learn from history we are destined to repeat it." [6] This post's contribution is to break those down into specific subskills, and in particular to introduce "story-listening" as a skill you can explicitly optimize.

So more specifically, the Tacit-Soulful-Knowledge Giver needs:

  1. Excellent introspection, so they understand exactly what the moving parts of the wisdom are, and how you would know if you had it, and what subtle ways you might think you have it, but don't.
  2. Excellent epistemics, and "mechanistic interpretability metacognition." The Giver needs to identify particular moments where the soulful-knowledge really mattered, and how they'd have made different cognitive motions depending on whether they had the wisdom, vs a pale shadow of it.
  3. Modeling of others. The Giver needs to check in with whether the Receiver is tracking subtle distinctions, and what sort of mental-moves will actually work for the Receiver (which might be different for what worked for the Giver).
  4. Pedagogy. Giver must communicate nuanced models clearly.
  5. Communicating importance on an emotional level, such that the listener will really hear it (by this I mean something like "staring into their soul and conveying by tone of voice and eye contact that this thing is important, and is worth listening to", at a vibes-level).
  6. "Storytelling", or, "Conveying the qualia of an experience, and what it means". The Giver can communicate a visceral narrative that conveys an experience empathetically. (By this I mean more like stating clearly: "here is a story of what happened to me, and how it affected me, and why the weight of it is more significant than you might be tracking if you just hear the surface level words.")
  7. (conditionally optional) Creating a safe emotional space. The Giver is tracking the ways the Receiver might end up hurting themselves. (This can be compensated for by the Receiver having the "can handle delicate inner work safely on their own" trait. But I recommend both of them having it, so you have a sanity check on whether something bad is happening)

The Tacit-Soulful-Knowledge Receiver needs:

  1. To care. They need to earnestly believe an important belief needs to change.
  2. Persistent Effort. They have to be ready to put in a lot of cognitive effort, and for there to be several points where they are tempted to say "okay yeah I got it" when they don't in fact got it yet.[7]
  3. The Receiver also needs excellent, gearsy metacognition, so they understand their own inner moving parts and which bits are load-bearing.
  4. Knowing when they aren't ready to absorb something. They have something of "an idea sandbox" in their cognitive toolkit, allowing them to load up parts of an idea, check if it feels right to absorb right now, and in some cases do almost all the work to integrate except for "actually integrate it", to get a sense of whether it would be safe to absorb.
  5. Grieving, or some equivalent (i.e. letting go of beliefs that are important to you)
    1. This includes not only grieving for the psychologically load-bearing belief that the wisdom is "about", but also a meta-level Letting Go of your desire to think "you figured it out", when you haven't yet.
    2. Also: "pre-grieving" (something like "stoic visualization"), where you partially grieve something pre-emptively, to free your mind up to reconsider something psychologically loadbearing (without necessarily updating). This helps with the "check if you're ready to absorb" step.
  6. Deep emotional regulation (i.e. not just shifting your current mood, but, building safe enduring emotional structures that can handle a loadbearing update)
  7. (conditionally optional) Tacit-knowledge interviewing skills, if the Giver isn't good enough at explaining things or understanding the gears of their own skills. Example question: "What are some subtle failure modes you might expect if someone said 'I got this', and described it like <insert Receiver's best summary of understanding>? What else might be missing?"
  8. Empathy / Imagination / "Experience Library." Some ability to simulate an experience they haven't had before. Or, "a library of reference experiences that are close enough to the required tacit-wisdom-experiences."
    • Alternate option: The ability to derive the model from the Giver (via interviewing skills) and then for the Receiver to "do their own storytelling" (checking in with the Giver that they got the story right).

If the Receiver or Giver has high enough skills in one area, they can probably compensate for the other having lower skills, although there's probably some minimum threshold needed for each.

Note that while I feel fairly confident in "you want all these skills somehow in order to transfer tacit soulful knowledge", the exact implementation of the skills might vary wildly from person to person.

I natively run on stories, so the storytelling frame resonates particularly with me. But I know some people who run on math, who seem to do roughly the same things but with quite different processes from what works for me (and definitely different than what would've worked for me a few years ago).

The rest of this post circles around these concepts in meditative detail, with various examples. 

Storytelling as Proof of Concept

One reason I feel optimistic about this is that we already have "stories" as the proof-of-concept of communicating Tacit Soulful Knowledge. Not only that, but doing it at scale, even if with some lossy compression (which is more than most run-of-the-mill tacit knowledge transfer achieves).

For example: If you live through an actual dystopia, you will gain a deep understanding of how important it is not to let your government become a dystopia. But it would suck if this was the only way a society could become wise about dystopias, since dystopias are sticky and you might not be able to recover.

Fortunately you can read 1984, and get at least some of the deep sense of how and why a dystopia would be bad. It's probably shallower than you'd get from living through a real dystopia. But it's notably not strictly worse either. A well written story isn't limited to "communicating the emotional heft." It can also spell out the life lessons more clearly than you'd necessarily acquire when living through a real dystopia (especially if the Ministry of Truth is doing its job well and clouding your judgment).

You can read stories about the Holocaust and learn the depth of evil that has existed. You can watch the videos of the Milgram experiment and know that evil may be more horrifyingly mundane and non-special than you thought. These can be more impactful than merely reading a list of bullet points about what happened in the Holocaust.

This is "tacit soulful knowledge" rather than "explicit non-soulful knowledge" because it gives you more visceral sense of magnitude and mattering than an abstracted set of bullet points can convey.

Motivating Vignette: 
Competitive Deliberate Practice

After my last workshop, one participant (George Wang) and I were discussing where the "deliberate rationality practice" project might go. 

George said:

"You know Ray, you've deliberate-practiced Thinking Physics and Downwell and some coding stuff, but I think if you deliberate practiced a competitive game, and pushed yourself towards at least an intermediate level, you would grok something deep about deliberate practice that you are probably missing."

That sounded important. 

But, one thing about deliberate practice is that it's exhausting and kinda sucks (that's the whole reason I think there's underappreciated "alpha" in it). Also, my AI timelines aren't super long. I only have time to deliberate-practice so many things. 

I found myself saying: "Hey George, uh, what if instead of me doing that, you and I spent an hour trying to invent the transfer of soulful tacit knowledge?"

"Are... you sure that is easier than deliberate practicing a competitive game?"

"I somehow believe in my heart that it is more tractable to spend an hour trying to invent Tacit Soulful Knowledge Transfer via talking, than me spending 40-120 hours practicing chess. Also, Tacit Soulful Transfer seems way bigger-if-true than the Competitive Deliberate Practice thing. Also if it doesn't work I can still go do the chess thing later."

"...okay then."

I tasked George with: "Try to articulate, to yourself, everything subtle and important about deliberate practicing a competitive game."

Meanwhile, I set about asking myself: "What soulful knowledge do I have, which I can use as a case study to flesh out the fledgling idea I have here?". 

I ended up focusing on:

"Okay, if I wanted to give people the tacit soul knowledge that they can defeat impossible problems, how would I do that, and how would I know if it worked?". 

Having the "Impossibility can be defeated" trait

What follows is approximately my inner monologue as I thought about the problem, edited somewhat for clarity.

Okay. I think I've had a deep, visceral experience that "impossibility can be defeated." How would I tell the difference between someone who has the deep version I'm trying to convey, vs a pale shadow of it? 

What would someone who does have the "Impossibility can be defeated" trait do?

Well:

First of all, they don't spend a lot of cognitive effort thinking and saying: "this is impossible", and rehearsing excuses about that. If they notice themselves thinking thoughts like that, they might make some deliberate effort to reframe it as "I don't currently see how to do this, and it might turn out to be impossible in practice."

Second: When presented with something impossible/intractable-seeming, they reliably generate strategies that at least give them a bit of traction (such as making lists of why the problem is difficult, and brainstorming how to deal with each difficulty). And at least sometimes, the strategies work.

Third: When they encounter the "this is impossible" feeling, they don't become demoralized, or traumatized, or think more sluggishly.

Also, perhaps importantly, if they were actually "wise" here as opposed to merely "having the sense that they can defeat impossible things (but with bad judgment about it)", they also wouldn't grind away at problems with plans that aren't actually tractable. They are calibrated about their limitations. They know when to take breaks, or even to give up. They cleverly, straightforwardly attempt to attack impossible-seeming things for a while, but if they have zero traction they don't force themselves to keep going in a zombie-like cargo cult of agency. 

(I notice that I am confused about how exactly to navigate the two alternate failure modes right now, but if I were able to convey the wise version of this thing, I would need to somehow resolve that confusion. My current trailhead is "there is a subtle difference in feeling between 'I'm beating my head against a wall and I should do something else for a while, and maybe come back later' and 'this is impossible and stupid and I should give up'", but I couldn't articulate the difference very well. Yet. Growth mindset.)

Now, when I imagine someone who currently will say to themselves "this is impossible", someone who does not believe me when I tell them "I believe you can do it",

...what things go wrong when I try to tell them?

Well, first is that they just won't listen at all. They'll hear my words, but not believe that it's worth engaging.

How might I address that? 

I suspect if I sat with them, one-on-one, and (depending on the person), asked either in a gentle way, or an intense-tough-coach kinda way:

"Hey, I feel like you aren't really listening to me. I don't know that now is the right time for you to hear what I am trying to say but I feel like there is something you really aren't even trying to hear, and I would really like to try to convey it to you."

And then chat back-and-forth, which depends on how they respond to the first bit.

...that might work if we have established some trust, and I also do a sort of "stare into their soul and speak from the heart" kind of social move.

What fails next?

Well, the next thing that might fail is that they'll internalize a Ray-flavored Inner Critic who glares at them to "do the impossible" all the time, without actually really internalizing the cognitive moves that make it useful to try that. 

For this to be wisdom rather than mere soulful knowledge, it needs to come with some additional tacit skills (like noticing a particular flavor of "grinding pointlessly"), and maybe some concrete heuristics like "brainstorm at least 10 different ways of solving the problem, at least 2 of which you've never tried before; then, if 30 minutes later you still feel the 'grinding pointlessly' sensation, take a break for a while (perhaps days or weeks)."

Mapping out the exact skill tree here would take a while, but I don't feel confused about how to do that. Let's move on for now.

"If it weren't impossible, well, then I'd have to do it, and that would be awful."[8]

I said earlier, "tacit soulful knowledge" often involves making a psychologically load-bearing update. 

If I put on that frame...

...I think a thing going on is that many people have some conception of being a good person/student/rationalist. Good people try hard when things are difficult, but they don't have to try hard when things are literally impossible.

In the 2010s, a few rationality teachers told me that people often seem unable to think about problems they don't see any way to do anything about. Their brain often slides off even accepting the problem as real.

Also, people just really don't like doing effortful cognitive labor (and sometimes it's literally bad for them to try).

Impossible problems, by nature, are where your established tools won't be sufficient. You'll need to be "thinking for real" almost the entire time. Even when it's going well, it can be exhausting. (The two weeks I tried to get zero Thinking Physics questions wrong, I thought for four hours a day and then was completely wrecked in the evenings.)

So, propagating the update probably requires some combo of:

  1. Gaining the tools to actually solve the problem.
  2. Changing their conception of "what it means to be a good person", somewhat.

Example of "Gaining a Tool"

At a recent workshop, one participant expressed the "I couldn't have solved this because I didn't know a particular physics concept" concern.

Afterwards I told them about a time when, doing that particular exercise, I had noticed I was confused about that particular physics concept, and then spent an hour exploring 2-3 hypotheses of how to invent the concept for myself. (Then I went on to get the question wrong for unrelated reasons, but not because I missed that particular step.)

The participant seemed to find that helpful, as a concrete example of "how to actually deal with this class of problem." 

Example of "Changing self-conceptions"

When I imagine a past version of me needing to somehow change my self-conception, what would probably work for me, but not necessarily others, is something like:

"You know, right now you're afraid to look at the possibility that something is merely 'very hard' instead of 'impossible', because if it were possible you'd have to do it.

"But, like, it's totally possible to realize that... and then just not do it. Right now, your morality says 'you have to do things if they are possible and worth doing', and, your psychology says 'you can't look at things that you'd be incapable of dealing with.' But, that's predictably going to lead to a distorted map of the universe, which will get you into all kinds of trouble.

"It's pretty hard to change your psychology here, but your sense of morality actually already has a term for 'if your morality and psychology are tying yourself up in knots, that would be a stupid morality, and you should reflect on it and change it. So your morality actively implies it's the right tradeoff, to realize something is important and possible to work on... and still just not do it. You're still a good person. You're clearly a better person that someone who self-deludes into thinking it's impossible."

(Getting that self-conception update into my bones, such that I really deeply believed it, might still be its own "tacit wisdom transfer" subproblem)

Current Takeaways

So, conveying 'Impossible Problems can be Solved' requires:

  1. Establishing some kind of trust that I have a thing worth listening to.
  2. Showing them one worked example of an impossible-seeming problem, and "here's some tools that solved it" (built entirely out of pieces that they already had at the beginning of the exercise)
  3. Some investigation and debugging of why they, personally, were averse to trying to solve it.
  4. Conveying several specific tacit skills of "how to make progress" and "how to tell if you are dealing subtle psychological damage to yourself by forcing yourself to go forward."
  5. Maybe updating their self-conception (which may require recursing on this whole process)

Note that those are all the prerequisites to gaining the "soulful knowledge." We're not there yet, even if we have all these pieces. If I explain these things, and even if a listening person makes the self-conception update, that merely establishes a framework to consider safely whether to adopt the soulful knowledge. 

What is the soulful knowledge itself, in this case? What's the difference between someone who intellectually understands all the above, and someone who feels it in their bones?

Well, if they had gained the knowledge "the normal way", they would have lived through an experience where something had felt viscerally impossible, so obviously so that their thoughts had automatically veered away from even bothering. But then, they personally made the move of "well, okay but what if I did tackle it some way?", came up with a strategy that seemed like it might work, and then it did work. 

And then, they would have lived through a moment where it became self-evident that that visceral impossible-feeling wasn't a reliable signal.

Given all that, the final move here might be:

"After gaining an intellectual understanding of what is necessary, fully imagine yourself experiencing that narrative. 

Visualize yourself seeing the brute impossibility. 

Visualize yourself not being able to disentangle from that brute visceral feeling. 

Then, visualize yourself overcoming it."


While I was thinking about all this, George was thinking about how to articulate the wisdom of Competitive Deliberate Practice. I'll get to that in a moment, but first, I wanted to touch on...

Fictional Evidence

Spoilers for Harry Potter and the Methods of Rationality, and the Steerswoman novels. The spoilers are important in some sense, but I claim they are minor in this context, and I think they are worthwhile for making the point.

 

I first learned about defeating impossibility through fictional evidence. 

I read Harry Potter and the Methods of Rationality, and learned that perhaps death can be defeated. I didn't actually care much about defeating death at the time, but what struck me to my core was the notion that you were allowed to set your ambitions on the level of "we are going to eradicate death." It had never occurred to me. 

The notion that I, and humanity, are allowed to just set our sights on tackling any problem we want, and to carry that fight through the centuries if we are not yet strong enough today, was something beyond me.

HP:MoR also taught me various pieces of tacit knowledge about rationality techniques (although much of it didn't fully crystallize until years later).[9]

Also, HP:MoR taught me, implicitly, that rationality gives you superpowers.

And, sure, the story is an exaggeration. And sure, the story explicitly warned me that things that seem hard are sometimes really hard and you're not going to solve them overnight and it might really be 30 or 100 years before a project pays off. Still, I ended up with a vague belief that rationality would give people superpowers. 

Maybe not overnight, but, like, surely we'd see notable progress of some kind with 7 years of effort.

Later I read the Steerswoman novels, and learned that actually, maybe rationality mostly doesn't give you superpowers. It just... helps you a little bit to find the right answer in confusing situations, which is useful some of the time but not always.

You can read the Bible, or listen to a sermon, and come away with a deep visceral sense of the depth of God's glory.

Now, this gets at the fraughtness of Fictional Evidence, and uncritically accepting Tacit-Soul-Knowledge transfer. 

Rationality doesn't give you superpowers within months or even years (given our current level of pedagogy).

And it sure looks like God is not real. 

There is still clearly something powerful going on in world religions, which I think should be respected in some fashion. But also, if you update directly on it the way they've framed things, you've failed an epistemic check.

In 2012, as CFAR was being founded, the vibe of "rationality can give you superpowers" was in the air. There was some evidence: sometimes you'd get 300% returns from a simple hack. But amortized over the time you spent hack-searching, rationality gives you more like a 10-20% annual return on investment if you're putting in persistent work (which most people don't).

I'm grateful to the Steerswoman novels for giving me a different framing on rationality, and for a visceral sense of what it's like to realize how an impactful story oversold its promise. 

But, if I had to pick one, I would still basically take the visceral update from HPMOR over Steerswoman. 

Rationality doesn't give you superpowers overnight. But, more is possible. Seven years into my rationality journey, I wrote Sunset at Noon, where I observed "oh, huh, I never noticed the change while it was happening, but I observe I have my shit together now."

Seven years after that? Well, I currently believe that a few skills have clicked together for me, in a way that I expect to become greater than the sum of their parts. I have not yet done anything demonstrably awesome with them (though I think I demonstrably have my shit even more together than I did in 2017).

We'll see in another seven years if I'm standing on a big heap of utility. But I feel like I can see the end of the tunnel now. 

I think it will take a while longer for rationality to demonstrably give people superpowers, the way that science demonstrably increased humanity's power a thousandfold. 

But, I do bet that it will. (Happy to operationalize bets in the comments.)

Yes, fictional evidence can lie. Even fictional insights can lie. But, they can also transmit truth. And yep, navigating that is rightly scary and fraught. But for communicating a visceral, impactful truth, it is sometimes quite important for the story to be larger than life, and to reach through with a thousand sprawling tendrils, to feel the weight of it in your soul.

 

“Fairy tales do not tell children dragons exist. Children already know the dragons exist. Fairy tales tell children the dragons can be killed.”

– G.K. Chesterton, Terry Pratchett, and/or Neil Gaiman

 

 

PART II

 

Competitive Deliberate Practice

Okay, that was a lot of talk and models. What's the payoff?

The evidence I present is not as satisfying as I'd like – what I did with George was mostly trace the outlines of a hole-in-my-heart where wisdom should go. But I will spell it out here as best I can. And maybe, someone in the comments will tell a compelling and complete enough story that I come to believe it more fully.

I chatted with George for 2 hours, about Competitive Deliberate Practice.

Step 1: Listening, actually

A cup is most useful when it's empty
– Lao Tzu

As the conversation began, I had just spent 20 minutes brainstorming on how one might share the 'Impossibility can be Defeated' exploration. I intended to cross-reference that with George's notes. I expected George to know all kinds of things I didn't know about Competitive Deliberate Practice, but not to have as much scaffolding for how we were going to transmit the visceral soulful qualia.

I was, at that moment, feeling exceedingly clever. 

I had a very clear story in my head of how I was going to operationalize Tacit Soulful Knowledge Transfer which was going to be a big fucking deal if it worked.

And then I turned to George, and I mechanically looked over my notes from the "Defeat Impossibility" thing and...

...a horrible thought hit me.

My model explicitly said, for this to work, I would have to listen. 

Really listen. 

Deeply listen. 

Get over my ego, get over my sense that "yeah I get it". 

And here I was, with my mind mostly full of thoughts of how clever I was.

I was... going to have to let go of that self-impressed feeling of cleverness, and get ready to hear whatever George had to say.

Shit. *Gut punch*

Then I settled in, and took a deep breath. I brought to mind half-formed memories of times I had really listened, or someone had really listened to me. I remembered the difference between dutifully Trying to Listen, and Actually Listening.

And...

shit. gut punch

It occurred to me: 

My model said that the thing I was about to attempt was maybe possible, but exceedingly hard if so. It involved imagining experiences I hadn't had yet. Fiction suggested it was possible. But, it wouldn't be enough to merely listen as well as I had ever listened before. I would have to do better than that.

My intellectual model said: There would be multiple moments where I would think that I had gotten it, and I would feel excited to start talking about my ideas about it, but I in fact still wouldn't have gotten it, because the whole point here was that the nuances are subtle, and you have to hear each individual one. And I should expect that to happen multiple times. 

And then probably, it would only half-work, at best.

And while my explicit considered belief was "I think I understand all the pieces of this skill, but I do expect the skills to be legitimately hard even if it's theoretically possible"... 

...I still did kinda basically expect it to work reasonably well on my first try. 

And that was silly. And I felt ashamed.

And then I told myself to get over myself and start listening.

George started saying some things. I started asking questions. The conversation went back and forth, with me constantly asking: "but, why is this unique to competitive deliberate practice? If I were playing single player chess against a computer in a world with no competitive chess scene, what would be missing?". And whenever either I felt confused, or he felt I had misunderstood something, we slowed down and teased out the subtle distinctions that might have been lost. I feel proud of how that part went.

I don't yet know how to convey the art of tacit interviewing in this post. I'd have to see a few people fail-to-understand it before I knew what words to say to get the real point across.

But, at the end of the day, here are the slivers of wisdom that George and I articulated together:

The scale of humanity, and beyond

If you beat a single player videogame, was that hard? What did you achieve?

If you get 1000 points, or 10,000, what does that mean? If you get a million points, you might say "well that seemed like enough I guess?" and move on to something else. But what did that say about your competence?

Who knows?

If you play chess against people, you can watch your Elo score rise through the percentiles. If you look at the range of Elo scores, all the way up to the top humans, you'll see how your improvement compares against that scale. 

You began at some percentile. You saw your dozens or hundreds of hours of effort, and you saw how that effort brought you to some slightly higher percentile. 

You will have some sense of how small a hundred hours of effort is, and how wide humanity's range is.

This random PDF says that 605 million adults play chess. If you move a few percentage points along the scale, millions of people pass behind you, and hundreds of millions more might still be ahead.
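
To make that concrete, here's a rough back-of-the-envelope sketch, using the ~605 million figure above (treating "a few percentage points" as 3 is my own hypothetical choice):

```python
# Rough arithmetic for the "human scale" point above.
players = 605_000_000          # adults who play chess, per the PDF cited above
percentile_points_gained = 3   # hypothetical: "a few" percentage points of improvement

people_passed = players * (percentile_points_gained / 100)
print(f"~{people_passed:,.0f} players passed")  # ~18,000,000
```

Even a modest-sounding climb corresponds to moving past tens of millions of people, which is part of why the scale is hard to intuit.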

Today, there are AIs that play chess at a higher level than any human; you will see how far the range of possibility extends beyond humanity, at least in this one domain.

I spent ~40 hours practicing Thinking Physics puzzles. I got better at it. I care more about Thinking Physics than chess. But how good am I at solving Thinking Physics puzzles now? I dunno, man. There's no competitive Thinking Physics community to compare myself against.[10]

George brought this up initially in the context of "appreciating how much your deliberate practice efforts are worth," but it seems like something importantly general: "How big is human competence" and "how much bigger is AI competence" seem like they should seep into a bunch of different subtle judgments. They should start to give you an inkling of how much competence matters.

Since the conversation, I've made some attempt to meditate on this, similar to Nate Soares's meditation on large numbers. I don't think it's super sunk in yet.

But, I'd guess I'm not strictly worse here than everyone who's lived the experience. I bet some people level up at chess, and see themselves move along an Elo percentile, but don't particularly track how many thousands or millions of people they just moved past. I expect truly grokking the scale to require deliberate effort beyond what happens automatically through winning games.

Competitive Spirit

This wasn't one of George's main points, but it came up while I was trying to brainstorm for myself why competitive games might be different from non-competitive games.

When I get into a competitive mindset, I tap into a kind of... primal power that I don't always have access to.[11] The visceral sense of that primal power is probably hard to explain to someone who's never experienced it.

It so happened I already had a library of experiences about competition that felt soulful in some way. Here are some of them.

Is your cleverness going to help more than Whatever That Other Guy Is Doing?

At my first cognitive bootcamp, I had people solve an online puzzle without a clear win condition, hosted on a website. I saw one person cleverly editing the website code to change the parameters of the puzzle to do experiments.

I told them "Yep, you're very clever. Yes, that is allowed in this exercise. Do you think that is helping?"

"Yes", they said.

"Do you think it's going to help you solve the exercise faster than John Wentworth over there?" (John was sitting across the room)

They said "Oh. Uh, ...maybe?".

I don't actually know how useful their clever meta-gaming was, but the exchange seemed to get them asking "is this actually going to differentially help?" It made it very salient that reality was capable of judging whether or not it was an effective strategy. Either their tactics would get to the answer faster than John Wentworth, or they wouldn't. And this ignited a bit of a fire in them that wasn't there before, when they only had to compare themselves against their imaginary alternate self who followed some other less-clever-feeling strategy.

Comparing themselves to a particular other person also feels relevant to the aforementioned "human scale" thing. 

(In the end, I think the two of them solved it at roughly the same time.)

Distaste for the Competitive Aesthetic

My default playstyle for most games is "be the silly guy trying to do weird things with the rules (regardless of whether that turns out to be useful)" or "the guy trying to tell a story, more than win a game."

Competitive players often ruined the parts of the game that I liked. Storytelling and silly rulebending depend on some slack in the game, where the play is not so cutthroat, and you can afford to do goofy things.

When I play Minecraft, I prefer the early game, when the world is unspoiled and beautiful and Miyazaki-esque, and also there are natural resource limitations that you have to work around in a Survivor-esque way. There is something beautiful about it, and I feel horror and disgust when people show up who want to Race Race Race to get to the end of the tech tree and bulldoze villages and build digital factory farms.

When I got into feedback-loops and deliberate practice, it started to dawn on me that my aesthetic distaste for competition might meaningfully get in the way of me developing important practice skills. I might want to acquire the "Competitive Spirit Aesthetic."

I was scared about this. My distaste was, in fact, psychologically load-bearing for me.

I don't have a crisp story of how this played out – I made some bargains with myself that I would try to hold on to my capacity for silly or storyful play. I think it's helped that I only adopt "competitive mindset" in some kinds of games. I still play Minecraft the same way as always.

But what ultimately allowed me to tap more into Competitive Mindset was leaning into other things I did care about (including, the deliberate cultivation of new aesthetics in light of new facts).

Building your own feedback-loop, when the feedback-loop is "can you beat Ruby?"

Last year my colleague Ruby took me Go Karting. Ruby does a ton of Autocross driving, and enjoys the thrill, and has a bunch of experience.

I decided to treat the day as an opportunity to Purposefully Practice Purposeful Practice, and try to figure out the skill tree of a new activity from first principles. One issue was that I needed to evaluate if I was getting better. But on the Go Kart track, there was no indicator of how long a given lap around the track took. 

Eventually I figured out that one feedback loop did exist: "do I seem to be keeping up with Ruby?"

The connection between my obsession with feedback loops and my dream of a new spirit of rationality was enough to bridge some kind of gap, and I leaned into "okay, I'm now trying to beat or at least lose less badly to Ruby." And this did seem to help me figure out what exactly it meant to take turns wide, and why sometimes it was appropriate to accelerate but sometimes to brake in pursuit of speed.

...back to George

All of those previous sections occurred to me and I mentioned some of them to George.

We talked a bit about the qualia of competitiveness. We had discussed the Scale of Humanity, but many of the ideas we'd subsequently talked about hadn't seemed especially about "competition" in particular. 

I was excited about the Qualia of Competitive Spirit because it felt like a real example of something felt, which couldn't easily be transferred in words. 

But... eventually I noticed the niggling feeling that George was seeming less engaged. And then I realized... I had been the one to suggest "competitive spirit", not George. Maybe it would turn out to be important, but it wasn't what he was particularly trying to talk about.

Oh shit I'm still supposed to be listening.

Christ. Listening sucks.

shit. gut punch

"Mature Games" as "Excellent Deliberate Practice Venue."

We went back and forth about a bunch of topics where George would articulate something important about deliberate practice, and I would ask "but why is that specific to competitive practice?" and his answer wouldn't feel satisfying to me.

Eventually, something crystallized, and we agreed:

It's not just that Competitive Deliberate Practice teaches something unique through the process of competition. Sometimes, it's simply that competitive games – especially mature games like Chess that have endured for centuries – are dramatically better avenues to Deliberate Practice (and, to Deliberate Practice Deliberate Practice), than many other arenas.

If you play an arcade game by yourself, in a vacuum with nobody around to compare yourself to, not only is there no human scale, you also have to invent all your practice techniques for yourself. 

Mature competitive games, with millions of fans, develop entire ecosystems of people developing techniques, and people who teach those techniques. It is known what techniques need to be taught in what order. It is known what mindsets make sense to cultivate in what order.

You can innovate and beat the system with cleverness, but there may be bitter lessons about how much cleverness actually helps, compared to sucking up your ego and learning from the established wisdom first. Maybe your cleverness will help, maybe not. Reality will arbitrate whether you were correct. 

"Deliberate Practice qua Deliberate Practice"

You know what they call Purposeful Practice that *works*? Deliberate Practice.[12]

– ~~Tim Minchin~~ me

Most of what I've been doing since I started my journey is not "Deliberate Practice." Deliberate Practice is a technical term which means "a practice regimen that has been time tested and reliably works."  

Most of the time, the thing I do is use Noticing skills to identify little micro feedback loops, and use reasoning to identify the hard parts of what I'm doing, and try to keep the practice focused on pieces that push my current limits. Those are all important ingredients of Deliberate Practice.

But ultimately, I am guessing about what will work, and trying a bunch of inefficient things before I find a given thing that seems like it actually works, and I lack a human scale to tell me very clearly how I'm doing on the micro level. There might be some particular subskills where someone somewhere might know how I'm doing on an absolute scale, but I don't think they'd be able to track overall how I'm doing in "the general domain of solving confusing problems."

There is something very fun and innovative and clever-feeling about inventing my own practice as I go.

But, a thing I suspect (although George Wang didn't explicitly say this) is that working in a truly "Deliberate Practice qua Deliberate Practice" domain would give me some visceral appreciation for the usual set of skills that feed into deliberate practice, which probably include spending a greater percentage of time finding established exercises, teachers, etc.

Feedback loops at the second-to-second level

Normally, if you play a game of chess, or a sport, you go off and do a bunch of stuff, and eventually you win or lose. The process of credit assignment is hard. Was a particular play good, or was that you falling into the opponent's trap, but then getting lucky?

With proper deliberate practice, you sometimes have instructors watching you, critiquing your form at the second to second level. Because they are experts who can win games and have taught other experts before, you can actually just trust what they say. Your brain learns better when it gets to update immediately, rather than "some time later."

Living the experience of second-to-second feedback loops might help drive home how noisy and slow the usual feedback loops are, when you're trying to practice in a domain without such instruction. It might give you some sense of what to be aiming for when you construct your clumsy-by-comparison Purposeful Practice. Or, give you more motivation to actually find expert advice on the subskills where that's even possible.

Today, in some domains (Chess, Go), we now have AIs whose knowledge so surpasses humans that you can get a basically perfect correction to every move, as soon as you make it. (And perhaps, not too long from now,[13] you'll even get an integrated LLM that can explain why it's the correct one)
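
To give a flavor of how concrete that feedback can be, here's a minimal sketch, assuming the python-chess library and a local Stockfish binary (the path, the move, and the time limits are placeholders, not a recommendation):

```python
import chess
import chess.engine

# Assumes a local Stockfish binary; the path is a placeholder.
engine = chess.engine.SimpleEngine.popen_uci("/usr/bin/stockfish")

board = chess.Board()
my_move = chess.Move.from_uci("g2g4")  # a (probably bad) move I'm considering

# What would the engine have played from this exact position?
preferred = engine.play(board, chess.engine.Limit(time=0.5)).move

# How good is the position after my move, from White's point of view?
board.push(my_move)
evaluation = engine.analyse(board, chess.engine.Limit(time=0.5))["score"].pov(chess.WHITE)

print(f"Engine preferred {preferred}; eval after my move: {evaluation}")
engine.quit()
```

The specific tool doesn't matter; the point is that in a mature domain you can get a trusted verdict on every single move, the moment you make it.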

Oracles, and Fully Taking The Update

Eventually, George articulated a different lens on the previous point.

It's not just that the feedback is so rapid.

It's that when you can fully trust a source of knowledge about your practice, you don't have to do any second guessing about what the correct move was. You will instantly know that you have made a wrong choice, and learn what would have made a better choice. You'll at least learn that at the pattern-matching level. 

Eventually, you may get a visceral sense of what it's like to be able to fully make an update, all the way, with no second guessing.

This seemed important, beyond the domain of Deliberate Practice. A visceral sense of when and how to fully take an update seems like something that would come up in a lot of important areas.

But what do you do differently?

That all sounds vaguely profound, but one thing our conversation wrapped up without answering was: "what would I actually do differently, in the worlds where I had truly integrated all of that?"

We didn't actually get to that part. Here are my current guesses:

Re: "The magnitude of the human scale"

  • Recognize how high skill ceilings can be, and what effort is required to improve your competence.
  • When making plans, have an accurate model, rather than wishful thinking, of how quickly you can gain mastery over things.
  • As you navigate the world, have a better sense of how competent various people are likely to be at various things. Have a sense of what masters, intermediates and beginners tend to think.

Re: "fully taking the update"

  • When I'm considering how much to update on something, I'd have a clearer sense of the specific mental moves I would make if I were to update fully, and less hesitation to do so. (But, hopefully, also a calibrated sense of when to do that)

Magnitude, Depth, and Fully Taking the Update

That last point actually connects back to a central thing about Tacit Soulful Knowledge.

Let's recap some proposed wisdoms:

  • Burnout is real, and you must account for it when optimizing your productivity.
  • Projects are often hard, in a way that requires you to actually pivot your plans, and switch from an "executing" mode to a "gathering info" mode.
  • Sometimes, impossible-feeling problems can be solved.
  • There is untapped benefit in rationality, if only the art were to be developed more.
  • The scale of competence is vast, and moving along a small slice of it can still take dozens, hundreds or thousands of hours.
  • There is a mental motion of being able to "fully take an update, all the way", and in some circumstances it's the right motion.

(btw, if you don't buy some of these, that's quite reasonable! For the most part I haven't argued for them, just tried to convey why I, or someone else, believes in them. For now, my main goal is to have them as illustrative examples, so you can at least see how some mental motions might fit together around them.)

In most cases, someone without said wisdom will nod along to the sentence. It doesn't sound wrong. But, when the time comes to act on it, they'll only pay superficial heed to it. A commonality so far between the Possible Tacit Soulful Knowledges I've examined is that there is an update worth making, but it is much deeper than you're intuitively expecting, with more far-reaching consequences. 

The grad student vaguely knows that sometimes you work too hard and exhaust yourself. But they haven't been deeply fucked up by this in a way that was fairly enduring and required changing their priorities in a deep way.

The startup founder vaguely knows they'll need to learn more things, but doesn't have the visceral expectation that they might really try hard for a year, fail to learn anything new, and that right now they really need to slow down, switch to "gather information" mode, and actually propagate the update that they need a specific story for why their plan is special.

Is there a simple, general skill of 'appreciating magnitude?'

It's possible that a lot of "listening to wisdom" might look less like thoughtfully, emotionally propagating an update... and more like "just, like, actually do math?"

Or, rather: there might be a skill to propagating math through your bones – feeling different orders of magnitude and intuitively rearranging your beliefs and actions accordingly.

Appreciating numerical magnitude is its own kind of difficult wisdom. People are known to be bad at it. It requires not merely viscerally feeling that 100 is 10x bigger than 10. It requires having some sort of schema such that your priorities naturally weight 10x things appropriately (without following naive consequentialism off a cliff and missing a lot of wholesome value in things that are hard to measure).

Various people have tried communicating this. Nate Soares wrote Respect for Large Numbers, where he attempted to meditate on one hundred objects, and hoped that that would transfer to other domains. 

Andrew Critch once suggested imagining a 10x10x10 cube, which simultaneously can tell you how much bigger 1, 10, 100, and 1000 are than each other... and yet also fit them all in your visual field at once. He's also suggested looking at scaleofuniverse.com, which lets you scroll your zoom from the Planck length to human scale to the entirety of the universe, and notes that there are in fact only 62 orders of magnitude across the whole thing! You only need to pinch-zoom the universe 62 times to get from the smallest thing to the biggest. 
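
(As a sanity check of that "62 orders of magnitude" figure, here's the one-line arithmetic, using standard approximate values for the Planck length and the diameter of the observable universe:)

```python
import math

planck_length_m = 1.6e-35        # Planck length, in meters
observable_universe_m = 8.8e26   # approximate diameter of the observable universe, in meters

print(round(math.log10(observable_universe_m / planck_length_m)))  # ~62
```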

If you could develop some intuitive sense for how to relate to each level-of-scale, maybe you could just look at a phenomenon, see "what scale" it's on, and then adjust your behavior accordingly?

Unfortunately, these sorts of things don't seem to actually land for most people. Critch has noted he's tried tons of ways to convey numeracy to people in a way that lets them see how it fits into their lives. When I asked him what was surprising about his time at CFAR, he noted:

People are profoundly non-numerate. And, people who are not profoundly non-numerate still fail to connect numbers to life. 

I'm still trying to find a way to teach people to apply numbers for their life. For example: "This thing is annoying you. How many minutes is it annoying you today? how many days will it annoy you?". I compulsively do this. There aren't things lying around in my life that bother me because I always notice and deal with it.

People are very scale-insensitive. Common loci of scale-insensitivity include jobs, relationships, personal hygiene habits, eating habits, and private things people do in their private homes for thousands of hours.

I thought it'd be easier to use numbers to not suck.
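To make that "count the minutes" move concrete, here's a minimal made-up example (the numbers are invented for illustration, not anything Critch actually computed):

```python
# Made-up numbers for the "put a number on the annoyance" move.
minutes_per_day = 3        # e.g. digging for keys in a cluttered drawer (hypothetical)
days_per_year = 365
years_in_this_apartment = 5

total_minutes = minutes_per_day * days_per_year * years_in_this_apartment
print(f"~{total_minutes / 60:.0f} hours of low-grade annoyance")   # ~91 hours

# If the fix (a key hook by the door) takes 20 minutes, it pays for itself
# a couple hundred times over. That ratio is the thing to actually feel.
fix_minutes = 20
print(f"payoff ratio: ~{total_minutes / fix_minutes:.0f}x")        # ~274x
```

The specific numbers don't matter much; the point is that converting a vague "this is annoying" into hours makes the difference between a 10x thing and a 100x thing hard to ignore.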

I currently feel only partway through my own journey of "appreciating large (or small) numbers." When I look at my past, and see why I would have flinched at or rejected "appreciate magnitude", I see glimpses of the tacit skills I'm now just barely good at, and of the deeper tacit skills I might yet need to acquire. 

I don't feel ready to write the post on that yet. But I can see the glimmers of a general skill which, if you managed to learn it, might let you shortcut a lot of the emotionally effortful process of internalizing wisdom from evocative stories.

 

PART III

One might think the trick is to just listen to people as if they have wisdom all the time, even if you don't understand it yet. Alas, there are pitfalls on the other side of the equation. Beware.

Tacit Soulful Trauma

Sometimes, friends warn me about failure modes they think I'm about to walk into.

And sometimes it's difficult, because while I can see there is a real important thing they want to convey to me, they also are clearly triggered about it. I can't tell from the outside whether the problem is that I need to update harder, or that they need to process something and let go of a clinging-orientation.

Once, I failed to lock the house door a few times. A roommate said to me "I want you to understand that crime actually exists. We've actually had people have houses broken into in this neighborhood." Another friend added "And, like, getting broken into doesn't just mean you lose some stuff. There is something violating and scary about it, that at least rattled me and I think would rattle you too."[14]

That interaction made sense and seemed fine. I didn't actually really update that hard about crime, but I think I probably should have. 

But, I've also known people who seemed worried about things (such as crime) in a way that seems more like paranoia, taking more precautions than are worth it, making everyone around them take more precautions than seem worth it.

It's hard to tell the difference from the outside. 

I think part of the skill here is to ask what they might know, that is true and helpful, even if parts of it are wrong or overcompensating. Try to build a complete model of what they believe, and why they believe it, without necessarily adopting that model as your own.

Cults, Manipulation and/or Lying

One thing that gives me a bit of pause on all this is that I've known a few cults (which, by selection effects, were mostly rationalist cults). I have heard of others.

I don't like people overreacting to accusations of cultishness. People complain about the rationality community being cultish. I think this is somewhat true, but mostly not. Whenever someone tries a high-effort, intense community and it fails, I think people blame vague culty pattern-matching without really grappling with the gears of what actually went wrong.

But, just like crime, cults are real, and bad. 

(Signs of bad cults: encouraging you to give up ties with outsiders; activities that result in you being chronically underslept; demanding you conform to a totalizing worldview and making you feel bad if you object to even part of it; asking you to do things that make your stomach squirm a little; and feeling like if you were to actually question the leadership or the ideology, it wouldn't go well for some reason.)

One of the things going on in some of the cults I've seen, from the sidelines, is something like "telling a bunch of stories, which convey a lot of frames, which make you question your old frames all at once, overloading you with soulful knowledge faster than you can really judge it." (This isn't automatically a sign of a bad cult IMO; sometimes you're just discovering a new worldview and that's fine. But if it's combined with the other things, it exacerbates them.)

Also, sometimes the impactful soulful stories they tell are basically made up. People lie.

If you're at a rationalist cult, they'll... probably say words not too dissimilar to "we have a model of how to transfer soulful tacit knowledge." Which, uh, has me feeling a bit antsy, and led me to add a lot more disclaimers to this post. Right now I am basically planning to build an intense rationality training community, and it will (eventually) probably end up somehow integrating the ideas in this post.

I have seen at least 3 rationality-adjacent subcultures that got really into various flavors of "mental tech", which fucked people up in a few ways. (Shortly after coming up with the ideas here, I ended up chatting with a group about it that included a younger person, who started saying words like "mental tech" and "I think I'm pretty good at not being manipulated or hurt by this sort of thing", which rang serious alarm bells for me, because it's the exact sort of thing that several people who later hurt themselves or others had said.)

Doing serious justice to how to navigate this is its own post. But I wanted to repeat it here, in its own section for emphasis. I hope this conveys why I'm somewhat worried, and also gives me a bit of accountability for what I might do next with the ideas in these posts (i.e. having more people who are vaguely tracking that something might go wrong)

Sandboxing: Safely Importing Beliefs

A while ago, a colleague said "You know Ray, it seems like you directly import other people's beliefs and frames straight into your worldview without, like, type checking them or keeping track of where they came from. And that kinda sketches me out." 

"...go on?"

"Like, when I'm getting a new belief from someone, in the metaphorical codebase of my mind, I don't just do a global import frame from Alice. I... like, import the import alice_worldview and before I run alice_worldview.frame I inspect the code and make sure it makes sense."

That person had a much more structured mind than I did at the time. Their cognition runs on math and logic. Mine ran mostly on narrative[15]. I didn't seem to actually be running into the problems they were warning about, and I didn't update much on it. 

But, last week (a few years later), as I mulled over importing tacit soulful knowledge that might secretly be confused trauma from other people, I was like "you know, I think I should go back to Bob and ask him to explain that better."

I don't think I got his tacit skills here very deeply, but one straightforward skill he mentioned was "noticing when you're bouncing off something, and making conscious choices about that." If something feels off when you look at a new idea, or you feel like you're bouncing off it, check in with yourself about whether it seems right to keep thinking about it right now. 

A few things to consider:

  • An idea might be false, or mislead you to believe other false things or take worse actions.
  • An idea might mess with important coping mechanisms. You might be ready for it later when you've figured out better coping mechanisms, or grieved something.
  • An idea might be too complicated for you to grasp. (Consider writing things down or drawing rough diagrams, to help expand your working memory)
  • You might have recently absorbed too many other ideas, and you may just need some time for your brain to compile them down into your longterm, integrated memory. (Or, you might need to think through and actively consolidate them first)
  • An idea might not be fitting into your usual frames/ontology. You'll eventually need to look at it a different way. You might try asking your partner to explain it differently.

For all of these reasons, for the immediate future, you may want to consciously check "do I want to think about this right now?". You can come back to it later. Or, maybe just never come back to it.

Asking "what does Alice believe, and why?" or "what is this model claiming?" rather than "what seems true to me?"

If it seems safe to at least consider an idea, but you're not sure you want to fully take it in, how can you do that?

My current guess is something like: Instead of asking "Is this true?", ask "what does Alice believe?" or "what's a consistent model that Alice is pointing at, even if Alice can't articulate it very well herself yet?". Try to put that model in a staging area in your mind, where you're deliberately not plugging it into your meaningmaking center, or integrating it into your own map.
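To make the earlier codebase metaphor a bit more literal, here's a minimal sketch (the function name and the inspect trick are my own illustration, not something my colleague actually described):

```python
import inspect

# Stand-in for a frame picked up from "Alice". In reality this is a belief or
# heuristic, not literal code -- the names here are made up for illustration.
def alice_frame(observation: str) -> str:
    """Alice's heuristic: an unlocked door means you're being careless about crime."""
    return f"'{observation}' is evidence you aren't taking crime seriously."

# "Inspect the code" before running it: what does this frame actually claim?
print(inspect.getsource(alice_frame))

# Run it in a staging area: keep the output tagged with where it came from,
# rather than merging it straight into your own meaningmaking.
staged = {
    "source": "Alice",
    "claim": alice_frame("left the front door unlocked twice this week"),
    "integrated": False,   # revisit later, once you've decided whether it holds up
}
print(staged)
```

The code is just the metaphor made literal – the load-bearing part is keeping "what Alice's frame claims, and where it came from" separate from "what I believe" until you've actually inspected it.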

Pre-Grieving (or "leaving a line of retreat")

A further step you can take is "pre-grieving." Suppose a new idea would mean that something sacred and important to you was wrong, or gone. You aren't sure if the new idea is true, or if your sacred idea is false. A thing I've done sometimes is "grieve in advance" (a variant of stoic negative visualization), where I imagine what the world would look like, and what I'd have to do, and what emotional processing I'd need to do, if my sacred thing was false or gone. Do some of the emotional processing, but without "the final steps" where you really accept and internalize it.

Then, I'm able to think more clearly about whether or not my sacred thing is false.

(Even better, see if you can identify multiple different things you might want to pre-grieve, so that you don't overanchor on "do I need to grieve this specific thing?").

 

EPILOGUE 

So, that was a lot. 

You're probably not going to remember all of it. 

For now, I'd like to distill out:

  • Some guesses at practical advice
  • My guesses of a sort of "longterm research direction" to take these models. 

The Practical

Learning to listen

Much of this post deals with deep, subtle skills. I think grieving, noticing, articulating tacit knowledge, and emotional awareness are all important, and worth investing in, and necessary for the upper limits of what's possible here.

But if you aren't that strong in the above skills yet, I think the main thing to take away is: "Listening is a skill, and you can learn to do it better, and it has a pretty high skill ceiling." Most people, even pretty good listeners, could learn to listen more deeply.

When to listen deeply

A major difficulty with "deep listening" is that it's expensive, and you can't do it literally all the time. (If you get good at it, it gets cheaper, but there are also tons of other skills you could be leveling up so that they get cheaper.) What do? 

Two cases come to mind:

Trigger #1: 
Your conversation partner keeps repeating the same points, in a somewhat agitated way.

Maybe this means there is something you aren't getting. Or maybe it means you really do get the point, and they are agitated for unrelated reasons, or aren't listening to you. 

Trigger #2:
*You* keep repeating the same points, in a somewhat agitated way.

Maybe you are picking up that your partner hasn't seemed to grasp the significance of what you're saying, and you're trying to "say it harder." (this rarely works AFAICT). Or, maybe you're sort of tunnel-visioned and not really realizing that yeah, they get it, but there is something else they are trying to convey to you.

In both cases, I think the thing to do is some kind of check of "are either of us actually missing anything here, or can we just chill out and move on?". Sometimes, it's immediately obvious when you think about it for two seconds. Sometimes, it's helpful to ask out loud "hey, I notice you keep repeating this point. Can you say more about what you think I don't get, or haven't really listened to yet?"[16]

"To Listen Well, Get Curious"

There's lots of advice elsewhere on how to listen. But I think an important one is Ben Kuhn's "To listen well, get curious."

Frequently [advice to] “just listen” comes with tactical tips, like “reflect what people said back to you to prove that you’re listening.”

Recently, I realized why people keep giving this weird-seeming advice. Good listeners do often reflect words back—but not because they read it in a book somewhere. Rather, it’s cargo cult advice: it teaches you to imitate the surface appearance of good listening, but misses what’s actually important, the thing that’s generating that surface appearance.

The generator is curiosity.

When I’ve listened the most effectively to people, it’s because I was intensely curious—I was trying to build a detailed, precise understanding of what was going on in their head.

Ben's post is in the context of giving people advice. Here, I think it's relevant both to an aspiring Knowledge Giver (to make sure they understand what's going on in the Listener) and to the Listener (to really soak in the worldmodel of the Giver).

I think the curiosity is important, but I do nonetheless think "focus a lot on paraphrasing" is a pretty useful tactic, if at least one person in a conversation thinks the other is missing something. 

This has an obvious practical benefit of "you get to check if you in fact understand each other." It also tends to change the style of conversation in a way that short-circuits some failure modes (where people keep actively talking over each other, tunnel-visioning into their own particular worldview)

Listening is (slightly) contagious.

Another important point – say you wish someone else would listen to you. Many people have a naive tendency to get more insistent. But this often backfires. People tend to (somewhat) mirror the behaviors of people around them. If you get more insistent and locked into your thing, they'll get more insistent and locked into their thing. 

Counterintuitively, shifting to do more listening, yourself, tends to open up the conversation such that they are more willing to listen themselves. (Both because of general mirroring, and because they might be anxious about you not hearing whatever they have to say, and they tend to chillax more after you've demonstrated you heard it)

Grieving and Listening

I think the hardest part about "deeply listening" is that I have a self-image of someone who already is at least pretty good at listening, especially when I'm actively trying. Or, there are things I really want to do that aren't listening. 

So for me, the next level above "just try to get curious and listen, at all", is to notice when I am somehow averse to putting more work into listening, or telling myself I'm already doing it "enough", and (usually) do some kind of microgrieving about it.

The Longterm Direction

Partly, I'm just hopeful about me, and a few more marginal people around me, being better at transmitting complex, soulful ideas.

But, what feels particularly exciting to me is that this feels like a plausibly-complete map of what's necessary to transfer deep tacit knowledge. This post is phrased in terms of "skills", but the actual important thing here is more like "properties a two-person-system needs to have somehow." If you can accomplish it without anyone having to learn "skills", all the better. Skills are expensive.

My dream is that someday, a while from now, humanity will have some kind of effective soulful knowledge transmission engine that works really well, at scale.

So, followup work I'd like to see, or do myself:

  • Check if other people feel they have given or received "soulful knowledge" somehow, and see if they seemed to do it in a way that maps onto these ideas, or if it was a totally different way. Are there major pieces missing? Have I only really focused on one type of "wisdom" or "soulful knowledge", and are there other types that require other stuff?
  • Just have some people try out the approaches in this post, and see how it goes, and write down the results.
  • Just practice it myself for a while, until I feel like this has shifted from "a cool idea that I've tried a few times that seems promising" to "a deep integrated skill", and see if there are more conceptual updates to be had about it.
  • Think about ways to shortcut some of the work here. What's the simplest possible version of each skill, that works for the combined engine? Can those skills be converted into simple English sentences I could say, that most people would be able to do without learning new nuanced skills? If not, what are the minimum prerequisites?
  • In particular, think about the "appreciating magnitude" skill, and how to convey it at all to people, and see if it's able to generalize somehow.
  • What are hypothetical magical tools that would obviate the need for those skills?

Happy listening.

  1. ^

    Notably, the "Solve a random Thinking Physics" exercise with 95% confidence is essentially trying to be a simulation that conveys tacit knowledge in this way, with the forcing function "until you can do this reliably, calibratedly, you aren't done." It also, sometimes, conveys something like wisdom, which is hard-earned in the sense that it's pretty expensive to do 10 Thinking Physics problems and get them all right.

  2. ^

    My calibration on the feeling "ooh, this insight feels promising" is something lowish (10% – 30%). But this feels more like I just deconfused myself on a Thinking Physics problem and am ~75% likely to get it right.

  3. ^

    (More on that later). 

  4. ^

    I do think there are people for whom Tuning Your Cognitive Algorithms is overwhelming, and people for whom it disrupts a coping mechanism that depends on not noticing things. If anything feels off while you try it, definitely stop. I think my post Scaffolding for "Noticing Metacognition" presents it in a way that probably helps the people who get overwhelmed but not the people who had a coping mechanism depending on not-noticing-things.

    I also think neither of these would result in suicide in the way that happened to SquirrelInHell.

  5. ^

    I am not very confident in this interpretation, but, I think the interpretation is a useful intuition pump for why you should be careful.

  6. ^

    See also Eliezer's Making History Available.

  7. ^

    This benefits from previous experience where "you thought you got it" but you didn't actually get it, such that you have some taste about the difference.

  8. ^

    Recently, I had to do some bureaucratic paperwork, on behalf of someone else. The actual task ultimately involved skimming some notes and making 6 checkmarks. I failed to do it for a day, then another day. On the third day, I said "You know Ray, if you busted out your metacognition tools you'd probably come up with a plan to get this done", and I felt the fury of a thousand suns inside me scream "Yes but then I'd have to do the paperwork."

    I grudgingly dialogued with myself, and then got a colleague to check in on me periodically, and I slowly bootstrapped into a being who could make progress checking the six checkmarks over the course of the day (still slowly and painfully dragging my feet about it).

    So, when you don't want to do my stupid metacognition exercises, I get it.

  9. ^

    People periodically complain about Eliezer's LessWrong Sequences or HPMOR being long and meandering, and say "we should rewrite those things to be simple and just get the ideas across." And, man, I agree they are hella long and we should get shorter intro materials. But a crucial element is that Eliezer is not just trying to give you explicit knowledge, he is not even merely trying to convey tacit knowledge. He is trying to convey a visceral, deep awareness in your meaningmaking-center, about living your life in the world where truthseeking is the central dance. This is a much more difficult project.

    Not everyone vibes with Eliezer's vibe. And I think it's probably worth having a "list of explicit knowledge" version of the sequences that's more bare bones. But, if you actually want to replace the sequences with something shorter, you have a much more complex job.

  10. ^

    Notably there are some competitions that might be more worthwhile. If I ended up deciding to put 100 hours into competitive deliberate practice in hopes of gaining a deeper appreciation of The Human Scale, I'd probably try something like Math Olympiad.

  11. ^

    A woman I know points out that this may be a fairly sex-differentiated experience. 

  12. ^

    A friend told me of a time he was talking excitedly with a woman – a concert violinist – about Deliberate Practice, and how it was verified and had important properties that seemed to generalize across domains and it was an interesting scientific achievement. The woman was really excited. My friend spelled out how it required tight feedback loops, practicing at the edge of your capability, and how people typically could do ~four hours of it per day.

    Eventually the concert violinist said "wait, so, like, you mean, like 'practice'?"

  13. ^

    Here I am making an explicit bet about how fast LLM progress will be, I realize they can't do this well right now.

  14. ^

    This is actually maybe the earliest motivating case for this blogpost.

  15. ^

    I'm in the process of refactoring my thought process to be more structured, while preserving what's important to me about narrative.

  16. ^

    Epistemic status: I tried it like 2-3 times and it seemed to basically work.

  17. ^

    And I think it's also the reason Eliezer wrote Shut up and do the "Impossible".



COMMENTS

I want to add two more thoughts to the competitive deliberate practice bit:

Another analogy for the scale of humanity point:

If you try to get better at something but don't have the measuring sticks of competitive games, you end up not really knowing how good you objectively are. But most people don't even try to get better at things. So you can easily find yourself feeling like whatever local optimum you've ended up in is better than it is. 

I don't know anything about martial arts, but suppose you wanted to get really good at fighting people. Then an analogy here is that you discover that, at least for everyone you've tried fighting, you can win pretty easily just by sucker punching them really hard. You might conclude that to get better at fighting, you should just practice sucker punching really well. One day you go to an MMA gym and get your ass kicked. 

I suspect this happens in tons of places, except there's not always an MMA gym to keep you honest. For example, my model of lots of researchers is that they learn a few tools really well (their sucker punches) and then just publish a bunch of research that they can successfully "sucker punch". But this is a kind of streetlight effect, and tons of critical research might not be susceptible to sucker punching. Nonetheless, there is no gym of competitive researchers that shows you just how much better you could be.

Identifying cruxiness:

I don't have a counterfactual George who hasn't messed around in competitive games, but I strongly suspect that there is some tacit knowledge around figuring out the cruxiness of different moving parts of a system or of a situation that I picked up from these games. 

For example, most games have core fundamentals, and picking up a variety of games means you learn what it generally feels like for something to be fundamental to an activity (e.g. usually just doing the fundamentals better than the other player is enough to win; like in Starcraft it doesn't really matter how good you are at microing your units if you get wildly out-macroed and steamrolled). But sometimes it's also not the fundamentals that matter, because you occasionally get into idiosyncratic situations where some weird / specific thing decides the game instead. Sometimes a game is decided by whoever figures that out first.

This feels related to skills of playing to your outs or finding the surest paths to victory? This doesn't feel like something that's easy to practice outside of some crisply defined system with sharp feedback loops, but it does feel transferrable. 

(FYI this is George from the essay, in case people were confused)

I like the point about the need for some type of external competitive measure, but as you say, there might not be an MMA gym where you need one.

Shifting the metaphor, I think your observation about the sucker punch fits well with the insight that for those with only a hammer, all problems look like nails. The gym would be someone with a screwdriver or riveter as well as the hammer. But even lacking the external check, we should always ask ourselves "Is this really a nail?" I might only have a hammer, but if this isn't a nail, then while the results might be better than nothing (maybe?), I'm not likely to achieve the best that could be done some other way. And, of course, by watching just what happens when the other hammer-only people solve the problem, and comparing results to cases where we know the problem is a nail, we might learn something from their mistakes.

I think there are qualia that you can't transmit with words alone. Some ideas are psychologically load-bearing and to some degree, those are the ones that you can't create with just meta-cognition because they're strongly tied to deeply emotional beliefs.

Concretely, with the example of learning to play chess, there's a point where you get good enough to beat/draw everyone in an average high school. That point is about 1500-1800 ELO. Then you'll join a chess tournament and get your ass handed to you. There's a confidence you pick up at that point of knowing that you can win against almost every random stranger on the street, but also the humility that you can't actually beat any good chess players.

At that point, if you look at a beginner's book of chess puzzles, all of them will look really easy. But a book targeted towards grand masters of the game will leave you with the feeling that it's a book of practical jokes with no solutions. You learn that there are levels to "doing the impossible".

After hundreds of hours of study, you understand at a visceral level that effort can get you really, really far. If only other people put in this effort too, they can be pretty good. And then one day you play a ten-year-old prodigy. And you finally understand that one-in-a-million talent is truly oppressive. You're pretty good, but you'll never be a FIDE-recognized grandmaster. That kid will be one one day. Maybe. But even he will never beat Magnus Carlsen.

I don't know if we even have a word for this feeling. The simultaneous confidence of likely being able to beat every stranger you walk past, the humility in understanding that you're actually trash, and the compassion of understanding that if you don't have Magnus Carlsen's talent, maybe not everyone has your talent and they're not any lesser for it.

I'm sure you can read these words. I'm just not sure how well it can transfer from my head to yours if this feeling isn't already in your "experience library." And I can't imagine how to get this feeling into that library without first doing extremely well in some other analogous competitive deliberate practice game.

So, as this post describes, I think there's basically a skill of "being good at imagination", that makes it easier to (at least) extrapolate harder from your existing experience library to new things. A skilled imagineer that hasn't advanced in a competitive game, but has gained some other kind of confidence, or suffered some kind of "reality kick to the face", can probably extrapolate to the domain of competitive games.

But, also, part of the idea here is to ask "what do you actually need in order for the wisdom to be useful."

So, my challenge for you: what are some things you think someone will (correctly) do differently, once they have this combination of special qualia?

Curated! I was hesitant about this one for a while because Ray is staff, but I do think it ultimately is doing a quite impressive thing where it is at least a bit successfully communicating a whole worldview and perspective on the art of rationality, and that kind of content I think makes up a large fraction of the most impactful writing on LessWrong. 

It does have the problem of being long, and not necessarily keeping its pace all-throughout. I do recommend that readers feel free to skip sections and start skimming as they get bored. Maybe that's wrong, but it helped me.

I like the post a lot for being honest in a way that little other writing is. I think this in some ways contributes to its somewhat meandering nature, but I do feel like while reading the post my mind learns a bit more to conform itself to the shape of Ray's thoughts, which at the very least is helpful for modeling Ray, but also helpful in as much as Ray has learned at least some things that you the reader have not yet learned, or in as much as you are facing problems for which Ray's mind is better suited than your own, which I do think is probably true.

This is a fantastic post, immediately leaping into the top 25 of my favorite LessWrong posts all-time, at least. 

I have a concrete suggestion for this issue:

They end up spending quite a lot of effort and attention on loudly reiterating why it was impossible, and ~0 effort on figuring how they could have solved it anyway.

I propose switching gears at this point to make "Why is the problem impossible?" the actual focus of their efforts for the remainder of the time period. I predict this will consistently yield partial progress among at least a chunk of the participants.

I suggest thinking about the question of why it is impossible deliberately because I experienced great progress on an idea I had through exactly that mechanism, in a similar condition of not having the relevant physics knowledge. The short version of the story is that I had the idea, almost immediately hit upon a problem that seemed impossible, and then concluded it would never work. Walking down the stairs right after having concluded it was impossible, I thought to myself "But why is it impossible?" and spent a lot of time following up on that thread. The whole investigation was iterations of that theme – an impossible blocker would appear, I would insist on understanding the impossibility, and every time it would eventually yield (in the sense of a new path forward at least; rarely was it just directly possible instead). As it stands I now have definite concrete angles of attack to make it work, which is the current phase.

My core intuition for why this worked:

  • Impossibility requires grappling with fundamentals; there is no alternative.
  • It naturally distinguishes between the problem and the approach to the problem.
  • I gesture in the direction of things like the speed of light, the 2nd law of thermodynamics, and the halting problem to make the claim that fundamental limits are good practice to think about.

Yeah, a lot of my work recently has gone into figuring out how to teach this specific skill. I have another blogpost about it in the works. "Recursively asking 'Why exactly is this impossible?'"

One of the triggers for getting agitated and repeating oneself more forcefully IME is an underlying fear that they will never get it.

Shortly before finishing this post, I reread Views on when AGI comes and on strategy to reduce existential risk. @TsviBT notes that there are some difficult confrontation + empathy skills that might help communicating with people doing capabilities research. But, this "goes above what is normally called empathy."

It may go beyond what's normally called empathy, understanding, gentleness, wisdom, trustworthiness, neutrality, justness, relatedness, and so on. It may have to incorporate a lot of different, almost contradictory properties; for example, the intervener might have to at the same time be present and active in the most oppositional way (e.g., saying: I'm here, and when all is said and done you're threatening the lives of everyone I love, and they have a right to exist) while also being almost totally diaphanous (e.g., in fact not interfering with the intervened's own reflective processes)

He also noted that there are people who are working on similar abilities, but they aren't pushing themselves enough:

Some people are working on related abilities. E.g. Circlers, authentic relaters, therapists. As far as I know (at least having some substantial experience with Circlers), these groups aren't challenging themselves enough. Mathematicians constantly challenge themselves: when they answer one sort of question, that sort of question becomes less interesting, and they move on to thinking about more difficult questions. In that way, they encounter each fundamental difficulty eventually, and thus have likely already grappled with the mathematical aspect of a fundamental difficulty that another science encounters.

This isn't quite the same thing I was looking at in this post, but something about it feels conceptually related. I may have more to say after thinking about it more.

This post is very evocative, it touches on a lot of very relatable anxieties and hopes and "things most rationalists are frustrated they can't do better" type of stuff.

But its "useful or actionable content to personal anecdote" ratio seems very low, extremely low for a post that made the curated list. To me it reads as a collection of mini-insights, but I don't really see any unifying vision to them, anything giving better handles on why pedagogy fails or why people fail to learn from other people's wisdom.

It's too bad, because the list of examples you give at the start is fairly compelling. It just doesn't feel like the rest of the article delivers.

Hmm, I think I intended the post in a somewhat different way than what you were naively expecting to get out of it. Here are some notes, we'll see if they help:

1. The central claim of the post is "This is a complete list of the (deep) skills necessary to transfer wisdom from one person to another. If it seems incomplete or otherwise wrong, please say so." I think this is pretty big iff true! The primary goal of the post is not to convey "1-1 wisdom transfer" in one read, it's to say "here is the blueprint for how we could build a wisdom engine – either it needs all these pieces, or you need to make additional innovations that streamline part of the process." 

(i.e. the most important goal of the post is to convey conceptual progress, rather than a practical blueprint)

2. That said, the post is trying to do its best to convey the wisdom of "the sort of mindset I think you need to do 1-1 wisdom transfer." The entire problem with wisdom is that it's not easily compressed into a bitesize chunk one can easily digest – if you could, it wouldn't be the sort of thing that ends up getting called wisdom. 

Several of the skills the post references have accompanying, existing blogposts. Many of the links in this post go to another post that attempts to convey one of the deep skills that are (I claim/hope) prerequisites for wisdom transfer. (Some of the skills don't have a full post associated with them, because I don't know of a good one. Those are TODO items for future me or other thinkers/writers trying to complete the work here)

The novel claim being made in this post is "If you were to fully absorb the skills listed here, you don't merely gain each individual skill. If you weave them together the way this post illustrates, I claim you can get an advanced skill." If you want that skill, you need to go pursue each of those individual subskills, and then maybe come back and re-read this post. Hopefully, you think each of the individual skills are worth gaining anyway, so it's not like you're betting on a moonshot that only pays off if my claim is true. (which it's reasonable to be skeptical of)

3. Each of the personal anecdotes are not just meant to be "here's a thing that happened to me." They are the instructions, and, (attempted) mechanism of wisdom transfer. The way you normally gain wisdom is by experiencing a long, meandering series of setbacks and achievements. The central thrust of this post is that it is possible to gain wisdom without literally experiencing that long meandering road, via evocative storytelling and storylistening (if you have all the requisite subskills). 

The anecdotes are meant to convey "These are the mental moves that Ray made, in order to make the progress (he thinks) he made. And, here are various little qualia that happened along the way, and the mechanisms he thinks allowed them to weave together, such that you can maybe absorb some of the generator here."

(that is: the post is meant to be an example of itself. The post is long, but, like, I think it's potentially a lot shorter than the ~14 years it took to get me to the point of generating the post myself)

...

I will probably at some point write a shorter post that's like "here's the central claim of Subskills of Listening to Wisdom, without the meandering story", for people who aren't bought in enough to read this long thing. But, that post wouldn't convey the generators, which are the important part. 

I don't really expect that this comment helped that much, but I am curious: if you (PoignardAzur) set about to say "okay, is there a wisdom I want to listen to, or convey to a particular person? How would I do that?", where precisely do you get stuck, or where does the post feel like it fails you? I'm down to try to articulate some more specific bits that might help you, which I didn't succeed in conveying through the post itself.

I don't really know what to tell you. My mindset basically boils down to "epistemic learned helplessness", I guess?

It's like, if you see a dozen different inventors try to elaborate ways to go to the moon based on Aristotelian physics, and you know the last dozen attempts failed, you're going to expect them to fail as well, even if you don't have the tools to articulate why. The precise answer is "because you guys haven't invented Newtonian physics and you don't know what you're doing", but the only answer you can give is "Your proposal for how to get to the moon uses a lot of very convincing words but the last twelve attempts used a lot of very convincing words too, and you're not giving evidence of useful work that you did and these guys didn't (or at least, not work that's meaningfully different from the work these guys did and the guys before them didn't)."

And overall, the general posture of your article gives me a lot of "Aristotelian rocket" vibes. The scattershot approach of making many claims, supporting them with a collage of stories, having a skill tree where you need ~15 (fifteen!) skills to supposedly unlock the final skill, strikes me as the kind of model you build when you're trying to build redundancy into your claims because you're not extremely confident in any one part. In other words, too many epicycles.

I especially notice that the one empirical experiment you ran, trying to invent tacit knowledge transfer with George in one hour, seems to have failed a lot more than it succeeded, and you basically didn't update on that. The start of the post says:

"I somehow believe in my heart that it is more tractable to spend an hour trying to invent Tacit Soulful Knowledge Transfer via talking, than me spending 40-120 hours practicing chess. Also, Tacit Soulful Transfer seems way bigger-if-true than the Competitive Deliberate Practice thing. Also if it doesn't work I can still go do the chess thing later."

The end says:

That all sounds vaguely profound, but one thing our conversation wrapped up without is "what actions would I do differently, in the worlds where I had truly integrated all of that?"

We didn't actually get to that part.

And yet (correct me if I'm wrong) you didn't go do the chess thing.

Here are my current guesses:

No! Don't!

I'm actually angry at you there. Imagine me saying rude things.

You can't say "I didn't get far enough to learn the actual lesson, but here's the lesson I think I would have learned"! Emotionally honest people don't do that! You don't get to say "Well this is speculative, buuuuuut"! No "but"! Everything after the "but" is basically Yudkowsky's proverbial bottom line.

if you (PoignardAzur) set about to say "okay, is there a wisdom I want to listen to, or convey to a particular person? How would I do that?"

Through failure. My big theory of learning is that you learn through trying to do something, and failing.

So if you want to teach someone something, you set up a frame for them where they try to do it, and fail, and then you iterate from there. You can surround them with other people who are trying the same thing so they can compare notes, do post-mortems with them, etc (that's how my software engineering school worked), but in every case the secret ingredient is failure.

I agree with ‘failure being an important part of learning’. If you end up trying to do something somewhere in this vicinity and failing I am happy to take a stab at helping. 

I think it’s quite reasonable to have the epistemic state of “seems like I should treat this as bogus until I get more evidence”, esp. if you don’t have anywhere close to the prerequisite skills such that ‘solidify 1-2 more skills and then try actually doing the final Thing’ doesn’t feel within spitting distance.

To be clear, I think you should treat this as bogus until you have evidence better than what you listed.

You're trying to do a thing where, historically, a lot of people have had clever ideas that they were sincerely persuaded were groundbreaking, and have been able to find examples of their grand theory working, even though it didn't amount to anything. So you should treat your own sincere enthusiasm and your own "subjectively it feels like it's working" vibes with suspicion. You should actively be looking for ways to falsify your theory, which I'm not seeing anywhere in your post.

Again, I note that you haven't tried the chess thing.

Okay while I disagree with a bunch of your framing here, I do think "look for ways to falsify" is indeed important and not really how I was orienting to it, and should be at least a part of how I'm orienting to it.

The way I was orienting to it was "try it, see how well it works, iterate, report honestly how it went." (Fwiw this is more like a side project for me, and my mainline rationality development work is much more oriented to "design the process so it's easier to evaluate and falsify.")

I will mull over "explicitly aim to falsify" here, although I think this is the sort of question where that's actually pretty hard. I think most forms of self-improvement are hard to justify scientifically, take a long time to pay off, and effects are subtle and intertwined enough that it's hard to tell what works. 

I don't see offhand a better approach than "try it a bunch and see if it seems to work, and eventually give up if it doesn't."

(FYI the practical place I'm most focused on testing this out is in teaching junior programmers "debugging" and "code taste". I don't think that can result in data that should persuade you, or even that should particularly change your mind that it should have persuaded me)

...

I do think I have deep disagreements about orientation here, where I think a lot of early stage development of rationality ideas requires going out on a limb, it'll be years before it's really clear whether it works or not. You can do RCTs, but they are very expensive and don't make sense until you basically already know it works because you have large effect sizes and you want to justify it to skeptics.

I know some rationality training developers who have been extremely careful about not posting things until they're confident they're real, and honestly I think that attitude had more downsides than upsides (I think it basically killed LessWrong-as-a-place-where-people-do-rationality-development). I think it is much better to post your updates as you do them, with appropriate epistemic caveats.

...

I do think the claim here is... actually just not very unreasonable? 

The claim here is:

You can deliberately try to learn why a person has made a subtle update. 

If they are good at explaining or you are good at interviewing, you can learn the details about what you'd do differently if you yourself made the update, and ask yourself whether the update actually makes sense. 

It may help to imagine more viscerally the situations that caused the person to make the update.

I would be extremely surprised if this didn't help at all. I wouldn't be too surprised if it turns out to never be as good as actually burning out / playing chess for 200 hours / etc. 

If they are good at explaining or you are good at interviewing, you can learn the details about what you'd do differently if you yourself made the update, and ask yourself whether the update actually makes sense.

I would be extremely surprised if this didn't help at all.

I wouldn't be very surprised. One, it seems coherent with what the world looks like.

Two, I suspect for the kind of wisdom / tacit knowledge you want, you need to register the information in types of memory that are never activated by verbal discussion or visceral imagining, by design.

Otherwise, yeah, I agree that it's worth posting ideas even if you're not sure of them, and I do appreciate the epistemic warning at the top of the post.

I'm a 20-year-old who perceives myself as the kind of young founder you're probably talking to in this post. And I've noticed a lot of older guys have similar sentiments to you about younger guys, and the perspective often annoys me. I do everything I can to learn from other people, but in the context of giving and receiving advice I believe that a lot of information is typically not considered. For example, you talk about a lot of mistakes younger people make that could be easily avoided if they had the older generation's wisdom, but as conveyed by this post, your knowledge/skillset for communicating complex ideas in a way that can be understood by someone with different knowledge/experiences appears limited to me. Additionally, I've written over a thousand documents and am thinking every hour of every day from the perspectives that I value and want to improve in. Someone who doesn't have any context with which to understand me or even what I want can seldom give good advice aimed at observed circumstances, in my opinion. The best advice I've received tends to be advice not directed at me, because of reasons like this.


I don't believe burnout is real. I have theories on why people think it's real, but I think the phenomenon people label as burnout is more complex than people understand, and advice about burnout is consequently not very helpful. I know you've said you don't typically try to argue/explain yourself around this belief, but if I'm wrong and you have some insight that can only be gained with experience I'm not privy to, I would be thankful if you would correct my false belief. 
 

I agree with most of what I've read in this post, I mainly disagree with some of the perspectives you take.
 


I read through most of the beginning of this post. Then started skimming by my perception of header relevance to the original idea of the post. I'm not really sure what goal you were trying to achieve by branching off into so many different topics in a single post instead of creating separate posts, but I'm still pretty green to the LessWrong community and the norms here, so maybe you can enlighten me. I'm also very biased towards practical texts, so I believe I just am not the target audience for most of this post. I liked the statements about trigger #1 and trigger #2 in the practical section, and it's given me some insightful tips I'd never thought of on how to be a better listener. I recently made a note to myself that people say what they mean, so take them literally instead of translating their words into something you already understand. Admittedly, I have not been following my own advice. And this post has served as a valuable reminder for me. I'm really interested in communicating and learning about complex, soulful ideas, so maybe you could direct me to some good practical posts on the topic. I've had to navigate this skillset entirely alone, so I'd rather not reinvent the wheel if someone has already publicized their work around this.
 

I don’t believe burnout is real. I have theories on why people think it’s real

More interesting would be to hear why you don’t think it’s real. (“Why do people think it’s real” is the easiest thing in the world to answer: “Because they have experienced it”, of course. Additional theorizing is then needed to explain why the obvious conclusion should not be drawn from those experiences.)

I will say I think there are a few different things people mean by burnout, but, they are each individually pretty real. Three examples that come to mind easily:

"Overworked" burnout

If I've been working 60 hour weeks for months on end, eventually I'm just like "I can't do this anymore." My brain gets foggy. I feel exhausted. My body/mind start to rebel at the prospect of doing more of that type of work.

In my experience, this lasts 1-3 weeks (if I am able to notice and stop and switch to a more relaxed mode). When I do major projects, I have a decent sense of when Overworked Burnout is coming, and I time the projects such that I work up until my limit, then take a couple weeks to recover.

"Overworked + Trapped" burnout. 

As above, except for some reason I don't have the ability to stop – people are depending on me, or future me is depending on me, and if I were to take a break a whole bunch of projects or relationships would come crashing down and destroy a lot of stuff I care about.

Something about this has a horrible coercive feeling that is qualitatively different from being tired/overworked. Some kind of "sick to my stomach", want to curl up and hide but you can't curl up and hide. This can happen because your boss is making excessive demands on you (or firing you), or simply because I volunteered myself into the position. Each of those feels differently bad. The former because you maybe really can't escape without losing resources that you need. The latter because if I've put myself in this situation, then something about my self-image and how others will relate to me will have to change if I were to escape.

"Things are deeply fucked burnout." 

This feels similar to the Overworked+Trapped but it's some other kind of trapped other than just "needing to put in a lot of hours." Like, maybe there's conflict at work, or in a close relationship, and there are parts of it you can't talk about with anyone, and the people you can easily talk about it with have some perspective that feels wrong to you and it's hard to hold onto your own sense of sanity. 

In some (many?) cases the right move here is to walk away, but that might be hard either because you need money/resources from the group, or you've invested so much of your identity into it that letting go requires reorganizing how you conceptualize yourself and your goals and your social scene.

This can cause a number of things other than burnout, i.e. various trauma responses. But I think a "burnout"-flavored version of it can come when you have to live in this state for months or years. I haven't had this quite happen to me, but "conflict-based burnout", or "no longer really believing in your job/mission/relationship" flavored burnout, can leave people struggling to do much of anything on purpose for months.

I'm not really sure what goal you were trying to achieve by branching off into so many different topics in a single post instead of creating separate post

 

I think in my ideal world this would have been a series of blogposts that I actually expected people to read all of. Part of the reason it's all one post is that I didn't expect people to reliably get to all of them.

Partly, I think each individual piece is necessary. Also, kind of the point of pieces like this is for them to be sort of guided meditations on a topic, that let you sit with it long enough, and approach it from enough different angles, that a foreign way of thinking has time to seep into your brain and get digested.

I expected people would mostly not believe me without the concrete practical examples, but the concrete examples are (necessarily) meandering because that's what the process was actually like (you should expect the process of transmitting soulful knowledge to feel some-kind-of-meandering, at least a fair amount of the time).

I wanted to make sure people got the warnings at the same time that they got the "how to" manual – if I separated the warnings into a separate post, people might only read the more memetically successful "how to" posts.

I do suspect I could write a much shorter version that gets across the basic idea, but I don't expect the basic idea to actually be very useful because each of the 20 skills is pretty deep, and conveying what it's like to use them all at once is just necessarily complicated.

Still working my way through this post. But this section gets me excited!

If the Receiver or Giver has high enough skills in one area, they can probably compensate for the other having lower skills, although there's probably some minimum threshold needed for each.

It conjures the image of a future occupation. A conduit. Someone skilled at giving and receiving. Brought in specifically to speed up this type of knowledge transfer between two people.

Well, to be honest, in the future there will probably mostly just be an AI tool that beams wisdom directly into your brain or something.

I really enjoyed this post, and thought it was one of the better ones that I have read on LessWrong. A lot of good material to consider. Thanks.

I'm also excited because, while I think I have most of the individual subskills, I haven't personally been nearly as good at listening to wisdom as I'd like, and feel traction on trying harder.

 

Great post! I personally have a tendency to disregard wisdom because it feels "too easy": if I am given some advice and it works, I think it was just luck or correlation. Then I have to go and try "the other way" (my way...), get a punch in the face from the universe, and then be like "ohhh, so that's why I should have stuck to the advice".

Now that I think about it, it might also be because of intellectual arrogance – that I think I am smarter than the advice, or the person giving the advice.

But I have lately started to think a lot about why we think that successful outcomes require overreaching and burnout. Why do we have to fight so hard for everything, and feel kind of guilty if it comes to us without much effort? So maybe my failure to heed wise advice is based in a need to achieve (overdo, modify, add, reduce, optimize, etc.) rather than to just be.

First and foremost, it was quite an interesting post, and the goal of my comment is to try to connect my own frame of thinking with the one presented here. My main question is about the relationship between emotions/implicit thoughts and explicit thinking.

My first thought was on the frame of thinking versus feeling and how these flow together. If we think of emotions as probability clouds that tell us whether to go in one direction or another, we can see them as systems for making decisions in highly complex environments, such as when working on impossible problems.

I think something like research taste is exactly this - highly trained implicit thoughts and emotions. Continuing from something like tuning your cognitive systems, I notice that this is mostly done with System 2 and I can't help but feel that it's missing some System 1 stuff here.

I will give an analogy similar to a meditation analogy as this is the general direction I'm pointing in:

If we imagine that we're faced with a wall of rock, it looks like a very big problem. You're thinking to yourself, "fuck, how in the hell are we ever going to get past that thing?"

So first you just approach it and start using a pickaxe to hack away at it. You make some local progress, yet it is hard to reflect on where to go. You think hard: what are the properties of this rock that allow me to go through it faster?

You continue, yet you're starting to feel discouraged as you're not making any progress. You think to yourself, "Fuck this goddamn rock man, this shit is stupid."

You're not getting any feedback since it is an almost impossible problem.

Above is the base analogy; following are two points on the post drawn from it:

1.
Let's start with a continuation of the analogy: imagine that your goal, the thing behind the huge piece of rock, is a source of gravity, and you're water.

You're continuously striving towards it, yet the way you do it is by flowing over the surface. You're probing for holes in the rock, crevices that run deep, structural instabilities – yet you're not thinking, you're feeling it out. You're flowing in the problem space, allowing implicit thoughts and emotions to guide you, and from time to time you make a cut. Yet your evaluation loop is a lot longer than your improvement loop. It doesn't matter if you haven't found anything yet, because gravity is pulling you in that direction, and whether you succeed is a question of finding the crevice rather than of your individual successes with your pickaxe.

You apply all the rules of local gradient search and the like – you're not a stupid fluid – yet you're fine with failing, because you know each failure gives you information about where the crevice might be, and it isn't until you find it that you will make major progress.
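
(To make the "evaluation loop much longer than the improvement loop" idea concrete, here's a minimal toy sketch in Python. It's purely my own illustration, not anything from the post: the hidden crevice, the noisy "softness" probe, the `probe`/`broke_through` functions, and the loop lengths are all made-up stand-ins.)

```python
import random

# Toy sketch of the "water flowing over rock" strategy: lots of cheap local
# probes guide where to flow next, while the expensive "did we actually break
# through?" check runs far less often than the probing itself.

random.seed(0)

CREVICE = random.uniform(0.0, 1.0)   # hidden weak point in the rock face
TOLERANCE = 0.01                     # how close a probe must land to break through

def probe(x: float) -> float:
    """Cheap local signal: noisy 'softness' that increases near the crevice."""
    return -abs(x - CREVICE) + random.gauss(0.0, 0.05)

def broke_through(x: float) -> bool:
    """Expensive evaluation: did this spot actually give way?"""
    return abs(x - CREVICE) < TOLERANCE

best_x, best_signal = 0.5, probe(0.5)
for step in range(1, 2001):
    # Improvement loop: wander locally and compare softness signals.
    candidate = min(1.0, max(0.0, best_x + random.gauss(0.0, 0.1)))
    signal = probe(candidate)
    if signal > best_signal:
        best_x, best_signal = candidate, signal   # flow toward the softer spot
    # else: a cheap "failure" – this direction felt harder, so we stay put

    # Evaluation loop: only occasionally ask the expensive question.
    if step % 200 == 0 and broke_through(best_x):
        print(f"step {step}: broke through near x={best_x:.3f}")
        break
else:
    print(f"no breakthrough yet; best guess x={best_x:.3f}, crevice at {CREVICE:.3f}")
```

The point of the sketch isn't the specific search rule – it's that the inner loop is judged by "did I learn something about where to flow next?" rather than "did I break through yet?".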

2.
If you have other people with you then you can see what others are doing and check whether your strategies are stupid or not. They give you an appropriate measuring stick for working on an impossible problem. You may not know how well you're doing in solving the problem but you know your relative rating and so you can get feedback through that (as long as it is causally related to the problem you're solving).

 

What are your thoughts on the trade-off between emotional understanding and more hardcore System 2 thinking? If one applies the process above, do you think something gets missed?


 

My overall frame is that it's best to have emotional understanding and System 2 deeply integrated. How to handle local tradeoffs unfortunately depends a lot on your current state, and where your bottlenecks are.

Could you provide a specific, real-world example where the tradeoff comes up and you're either unsure of how to navigate it, or you think I might suggest navigating it differently?

Yeah sure!

So, I've had this research agenda into agent foundations for a while which essentially mirrors developmental interpretability a bit in that it wants to say things about what a robust development process is rather than something about post-training sampling. 

The idea is to be able to predict "optimisation daemons" or inner optimisers as they arise in a system.

The problem I've had is that it's very non-obvious to me what a good mathematical basis for this is. I've read through a lot of the existing agent foundations literature, but I'm not satisfied with finite factored sets nor with the existing boundaries definitions, since they don't tell you about the dynamics.

What I would want is a dynamical-systems-inspired theory of the formation of inner misalignment. It's been in my head in the background for almost 2 years now, and it feels really difficult to make any progress. From time to time I have a thought that brings me closer, but I don't usually get closer by just thinking about it.

I guess something I'm questioning in my head is the deliberate practice versus exploration part of this. For me this is probably the hardest problem I'm working on, and whilst I could think more deliberately about what I should be doing here, I generally follow my curiosity, which I think has worked better than deliberate practice in this area?

I'm currently following a strategy where this theoretical foundation is on the side whilst I build real-world skills of running organisations, fundraising, product-building and networking. Then from time to time I find some gems – such as applied category theory, or Michael Levin's work on Boundaries in cells and Active Inference – that can really help elucidate some of the deeper foundations of this problem.

I do feel like I'm floating more here, going with the interest and coming back to the problems over time in order to see if I've unlocked any new insights. This feels more like flow than it does deliberate practice? Like I'm building up my skills of having loose probability clouds and seeing where they guide me?

I'm not sure if you agree that this is the right strategy but I guess that there's this frame difference between a focus on the emotional, intuition or research taste side of things versus the deliberate practice side of things?

Nod. So, first of all: I don't know. My own guesses would depend on a lot of details of the individual person (even those in a similar situation to you).

(This feels somewhat outside the scope of the main thrust of this post, but, definitely related to my broader agenda of 'figure out a training paradigm conveying the skills and tools necessary to solve very difficult, confusing problems')

But, riffing anyway. And first, summarizing what seemed like the key points:

  • You want to predict optimization daemons as they arise in a system, and want a good mathematical basis for that, and don't feel satisfied with the existing tools.
  • You're currently exploring this on-the-side while working on some more tractable problems.
  • You've identified two broad strategies, which are:
    • somehow "deliberate practice" this,
    • somehow explore and follow your curiosity intermittently

Three things I'd note:

  • "Deliberate practice" is very openended. You can deliberate practice noticing and cultivating veins of curiosity, for example.
  • You can strategize about how to pursue curiosity, or explore (without routing through the practice angle)
  • There might be action-spaces other than "deliberate practice" or "explore/curiosity" that will turn out to be useful.

My current angle for deliberate practice is to find problem sets that feel somehow-analogous to the one you're trying to tackle, but simpler/shorter. They should be difficult enough that they feel sort of impossible while you're working on them, but, also actually solvable. They should be varied enough that you aren't overfitting to one particular sort of puzzle.

After the exercise, apply the Think It Faster meta-exercise to it.

Part of the point here is to notice strategies like "apply explicit systematic thinking" and strategies like "take a break, come back to it when you feel more inspired", and start to develop your own sense of which strategies work best for you.