by Ziz

Schelling place for comments is here on LessWrong.

4 comments

I wish there were an example here. I think the algorithm you're pointing to is something like:

  1. Find areas where your endorsed beliefs and aliefs diverge.
  2. Mentally contrast the two and feel the structural tension and dissonance this creates. (I'm not sure whether you're bouncing between them here, or overlaying them on top of one another and viewing both simultaneously.)
  3. Check what each of them would have predicted in the past, and notice which one fits the record better. Grok this so that whichever one is wrong updates. (A rough sketch of this scoring step follows the list.)
  4. Follow the beliefs along their belief chains/regulator chains, find further beliefs, and repeat steps 1-3.
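To make step 3 concrete, here's a loose, purely illustrative sketch of the scoring move, treating the endorsed belief and the alief as predictors checked against remembered outcomes. None of this comes from the post: `Belief`, `score`, `contrast`, and the `margin` threshold are all hypothetical names, and the actual technique is introspective rather than programmatic.

```python
# Illustrative sketch only: models "beliefs" as predictors that can be
# scored against a remembered track record. All names are hypothetical.

from dataclasses import dataclass
from typing import Callable, List, Tuple

Situation = dict   # a remembered past situation (hypothetical encoding)
Outcome = bool     # what actually happened

@dataclass
class Belief:
    label: str
    predict: Callable[[Situation], Outcome]

def score(belief: Belief, history: List[Tuple[Situation, Outcome]]) -> float:
    """Fraction of remembered situations this belief called correctly."""
    hits = sum(belief.predict(s) == o for s, o in history)
    return hits / len(history)

def contrast(endorsed: Belief, alief: Belief,
             history: List[Tuple[Situation, Outcome]],
             margin: float = 0.2) -> str:
    """Step 3: score both models against the past record.

    The asymmetry: a belief is only discarded when the record shows it
    predicting worse, not merely because it could be imagined wrong.
    """
    s_endorsed, s_alief = score(endorsed, history), score(alief, history)
    if abs(s_endorsed - s_alief) < margin:
        # Near-tie: neither is simply wrong; they may be tracking
        # separate domains rather than contradicting each other.
        return "both predictive; likely separate rather than contradictory"
    loser = endorsed if s_endorsed < s_alief else alief
    return f"update candidate: {loser.label}"
```

The near-tie branch is deliberate: when both models predict well, the output is "these are separate models," not "one must be discarded," which matches Ziz's reply below.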

Is that roughly what you're trying to describe? Am I emphasizing the proper parts?

I'll note that one thing I love about step #3 is that it's asymmetric in favor of true beliefs. Other belief-change techniques I know, like the Lefkoe belief process or reframing, instead ask you to imagine how your beliefs could be wrong, which is very effective for getting rid of them but says nothing about their validity.

Ziz:

Nope. That's just a process this thing is calling into. See this for more info on the context for this technique. (And the primary use case for this is where neither of the beliefs/aliefs is wrong, and you end up grokking that they are separate instead.)

Ahh, I see. So the important thing I was missing is something like "This is about disentangling social reality from predictive reality?"

I'd go a step further and say "this is about disentangling how to make useful predictions about social reality from how to make useful predictions about non-social reality."