PeterBorah


This reply is extremely late, but I'm annoyed at myself for not having responded at the time, so I'll do it now in case anyone runs across this at some point in the future:

I guess I feel a little trepidation or edge-of-my-seat feeling when I first run a test (I have surprisingly often ended up crossing my fingers), but I try to write tests in a nice modular way, so that I'm never writing more than ~5-10 lines of code before I can test again. I feel a lot more trepidation when I break this pattern, and have a big chunk of new code that hasn't been tested at all yet.

I've had a similar experience. IDC was by far my favorite technique at CFAR, and I've maybe done it twice since then? I think some of it is that the formal technique fell away pretty quickly for me: once I learned to pay attention to other internal voices, I found it pretty natural to do that all the time in the flow of my normal thinking, and setting aside structured time for it felt less necessary. (And when I do set aside larger chunks of time, I usually end up just inhabiting the part that gets less "airtime" for a while, rather than having an explicit dialogue between it and another part.)

As a separate comment since it feels like a pretty different thread:

I do have a vague hypothesis that the very first part of the Looking skill might be a prerequisite for IDC and, frankly, a lot of CFAR techniques. I don't think you need a lot of it, but it feels like there's a first insight that makes further conversations about things downstream from it a million times easier. (For programmers: it feels similar to whatever insight separates people who just can't get the concept of a function from people who can.) It annoys me a lot that I don't yet have a consistent tool for helping people quickly get the first skillpoint in Looking, and fixing that is one of my top pedagogical priorities at the moment.

For me at least, the multiple agents framework isn't the natural, obvious one, but rather a really useful theoretical frame that helps me solve problems that used to seem insoluble. Something like how it becomes much easier to precisely deal with change over time once you learn calculus. (As I use it more, it becomes more intuitive, again like calculus, but it's still not my default frame.)

Before I did my first CFAR workshop, I had a lot of issues that felt like, "I'm really confused about this thing" or "I'm overwhelmed when I try to think about this thing" or "I know the right thing to do but I mysteriously don't actually do it". The CFAR IDC class recommended I model these situations as "I have precise and detailed beliefs and desires, I just happen to have many of them and they sometimes contradict each other." When I tried out this framework, I found that a lot of previously unsolvable problems became surprisingly easy to solve. For example, "I'm really torn about my job" became, "I am really excited about precisely this aspect of my job, and really unhappy about precisely this aspect". Then it's possible to adjudicate between those two perspectives, find compromises or collaborations, etc.

It would be rude of me to assume that your mind works the same as mine, so take the following strictly as a hypothesis. But I would guess that what's going on for you is that you identify really strongly with one set of preferences/desires/beliefs in your mind, and experience other preferences/desires/beliefs as "pain, pleasure, stupidity, and ignorance". The experiment this suggests is to try spending a few minutes pretending those things are the "real you", and the "agenty" part is the annoying external interloper caused by corrupted hardware. If I'm right, the sign would be that you find there is some detail and coherence to the "identity" of those things that feel like flaws, even if you're not sure it's an identity you approve of.

Note that I don't think the multiple agents thing is the one true ontology. I find that as I learn to integrate the parts better, they start feeling more like a single working system. But it's a really helpful theoretical tool for me.

It definitely doesn't take years of practicing meditation. Though I'm hesitant to speculate on how long it takes on average, because how prepared people are for the idea varies a lot. The hardest step is the first one: realizing that people are talking about things you don't yet understand.

Hmm, maybe this is part of the motivation for test-first programming? Since I was originally trained to do test-first, I don't have this problem, because there are always already tests before I write any code. And I pretty much always know my code works, because it wouldn't be done if the tests weren't passing yet.
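To make the rhythm concrete: here's a minimal sketch of the test-first cycle described above, using a hypothetical `slugify` helper (the function name and behavior are invented for illustration). The point is the ordering: the test exists, and fails, before the function does, so "done" just means "the tests pass."

```python
# Test-first sketch: write the test before the implementation.
# (slugify is a made-up example function, not a real library call.)

def test_slugify():
    assert slugify("Hello World") == "hello-world"
    assert slugify("  Already-Clean  ") == "already-clean"

# Only after the test is written do we add the few lines that satisfy it.
def slugify(text: str) -> str:
    # Lowercase, trim, and join words with hyphens.
    return "-".join(text.strip().lower().split())

test_slugify()  # passes silently; the code is done when this stops raising
```

In practice a test runner like pytest would discover and run `test_slugify` automatically; the explicit call at the bottom just keeps the sketch self-contained.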

I've stuck to no fiction. (I unthinkingly read a few paragraphs of a short story that came across my Twitter, but otherwise have been consistent.)

It's mostly been fairly easy. It's really obvious now that it's a social pica. I think some of the time I would have spent on it has been going to increased use of LessWrong and Facebook, which are also social picas, but those are both more genuinely social, and harder to lose 8 hours at a time to.

There was at least one night where I was pretty unhappy, didn't have access to any actual friends to spend time with, and really wanted to lose myself in a book. I think that ordinarily it probably would have been an okay thing to do as a coping mechanism, but it was useful to observe how badly I needed the coping mechanism. That makes it obvious how much I need the real thing.

There are also a couple things I'm genuinely looking forward to reading when Lent is over. (Murphy's Quest, for one.) But I'd say those things are probably ~1/4 or less the amount of fiction I would have read this month without Lent.

This has been an especially exciting/productive/momentum-filled month for me. This probably makes it easier than normal to not read fiction. Though maybe there's some causality the other direction as well?

I'm still not 100% sure I understand Val's definition of Looking, so I'm not quite willing to commit to the claim that it's the same as Kaj's definition. But I do think it's not that hard to square Kaj's definition with those quotes, so I'll try to do that.

Kaj's definition is:

being able to develop the necessary mental sharpness to notice slightly lower-level processing stages in your cognitive processes, and study the raw concepts which then get turned into higher-level cognitive content, rather than only seeing the high-level cognitive content.

Everything you experience, no matter the object, is experienced via your own cognitive processes. When you're doing math, or talking to a friend, or examining the world, that is an experience you are having, which is being filtered by your cognitive processes, and therefore to which the structure of your mind is relevant.

As Kaj describes, the part of your thought processes you normally have conscious access to is a tiny fragment of what is actually happening. When you practice the skill of making more of it conscious and making finer and finer discriminations in mental experience, you find that there is a lot of information that your conscious mind would normally skip over. This includes plenty of information about "the world".

So consider the last quote as an example:

A while back I was interacting with a friend of a friend (distant from this community). His demeanor was very forceful as he pushed on wanting feedback about how to make himself more productive. I felt funny about the situation and a little disoriented, so I Looked at him. My sense of him as an experiencing being deepened, and I started noticing sensations in my own body/emotion system that were tagged as "resonant" (which is something I've picked up mostly from Circling). I also could clearly see the social dynamics he was playing at. When my mind put the pieces together, I got an impression of a person whose social strategies had his inner emotional world hurting a lot but also suppressed below his own conscious awareness. This gave me some things to test out that panned out pretty on-the-nose.

A fictionalized expansion of that, based on my experiences, might be:

"I was running my usual algorithms for helping someone, but I felt funny about the situation and a little disoriented. In the past I would have just kept trying, or maybe just jumped over to a coping mechanism like trying to get out of the situation. However, I had enough mental sharpness to notice the feeling as it arose, so instead I decided to study my experience of the situation. Specifically, I tried to pay attention to how my mind was constructing the concept of "him". (Though since my moment-to-moment experience doesn't distinguish between "him" and "my concept of him", and since I have no unmediated access to the "him" that is presumably a complex quantum wavefunction, that mental motion might better be described as just "paying attention to my experience of him", or even "paying attention to him".) When I did that, I was able to see past the slightly dehumanizing category I was subconsciously putting him in, and was able to pick up on the parts of my mind that were interacting with him on a more human, agent-to-agent level. I was able to notice somatic markers in my body that were part of a process of modeling and empathizing with him, from which I derived both more emotional investment in him and also more information about the social dynamics of the situation, as processed by my system 1, which my conscious mind had been mostly ignoring. I was able to use all of this information to put together an intuitively appealing story about why he was acting this way, and what was going on beneath the surface. This hypothesis immediately suggested some experiments to try, which panned out as the hypothesis predicted."
