To make this clearer, imagine that something happens (event Y) that changes your opinion of X. Maybe you realise that Y 'shouldn't' be much evidence at all, and decide it will only slightly affect your opinion. Looking at the old evidence plus Y (which you have actively decided to weight only a small amount), you reach a certain conclusion. Then something else happens that makes you really believe Y is weak evidence. Suddenly you are looking at the same evidence and the same Y, weighted the same amount you tried to weight it before, but you reach a different conclusion. It's as if you are looking at the evidence through a different 'lens'. It seems you can exploit uncertainty around evidence to reach differing conclusions depending on this 'lens'. This whole thing can leave you feeling a large internal distrust.
Can you give an illustrative example?
A related example is my "On insecurity as a friend": I'd on some level bought into messages saying that confidence is always good and insecurity is always bad, so when I had feelings of insecurity, I treated them as a sign of irrationality to override.
What I didn't realize was that I had those feelings because of having screwed up socially in the past, and that they were now correctly warning me about things which might again have bad consequences. Just trying to walk over them meant that I was ignoring important warnings, sometimes causing things to blow up in my face. What made the feelings easier to deal with was when I started actually taking them seriously as hypotheses to consider. After that, the relevant part could tone down the intensity of the social anxieties, as shouting so loudly as to force me to withdraw socially wasn't the only way that it could make itself heard.
Along the same lines as TurnTrout, I was wondering about the abstraction versus the specific situation. I am not asking that anyone share anything they would not be comfortable with. However, I do think that abstracting away from oneself in the analysis can just be another of the protection mechanisms that let us appear to be making progress while still avoiding the underlying truth driving our behaviours.
That said, I think Sara offers some very good items to consider.
Okay, this next bit is not directly related but seems implicit in the posting, and other posts I've read here. Does the LW community tend to see the human mind and "person" as a collection of entities/personalities/agents/thinking processes? Or am I jumping to some completely absurd conclusion on that?
There are some LWers who think that way, and others who don't. (Among the people who find it a useful model, AFAICT it's usually treated more as a hypothesis to consider and/or a fake framework that is sometimes useful. This sequence is a fairly comprehensive introduction.)
There are also plenty of LWers who don't buy it.
Thinking in terms of internal parts is a mental model used by a good portion of the LW community that's interested in self-improvement techniques. You need it for the Internal Double Crux technique that CFAR teaches.
Yet it's not the only model out there. I personally prefer a version of Leverage's belief reporting, which assumes that I as a whole either hold a belief or don't, and only do parts work if I believe a specific, identifiable belief is the issue.
As far as abstraction goes, I think it's a key feature of introspection. If you are mentally entangled with the part that you are introspecting, you won't see it clearly.
A lot of meditation is about reaching a mental state where you can look at your thoughts without being associated with them.
Does the LW community tend to see the human mind and "person" as a collection of entities/personalities/agents/thinking processes?
In the extreme form you asked about:
I'm skeptical of it, and unclear on how it would be empirically tested.
Not aiming for IIT, but seeing how it could be true:
Rather than holding all the relevant information in our minds at once, the benefit of taking time to think may come not from explicit focus and consideration, but from unconscious processing: thinking about other things and then making connections, because it takes time to reload the mental workspace and visit all the relevant areas.
The thing that changed and allowed me to actually start updating more efficiently was that I actually started believing that all parts of me are pretty smart. I started believing this because I started actually listening to myself and realised that these parts of me weren’t saying the ‘obviously wrong’ things I thought they were saying.
Yeah this is huge. I've had some similar insights myself the last few months and I now think it's one of the most important things that people can do. Which of course requires listening to the parts of you that think the other parts are stupid or silly, as well! And the parts of you that think that thinking about yourself as having parts is weird. Etc.
My new mantra for this is: May I integrate everything that wants to be integrated, as it wants to be integrated.
This rings really true with my own experiences; glad to see it written up so clearly!
I think that lots of meditation stuff (in particular The Mind Illuminated) is pointing at something like this. One of the goals is to train all of your subminds to pay attention to the same thing, which leads to increasing your ability to have an intention shared across subminds (which feels related to Romeo's post). Anyway, I think it's really great to have multiple different frames for approaching this kind of goal!
Actually updating can be harder than it seems. Hearing the same advice from other people and only really understanding it the third time (even though you felt like you really understood it the first time) seems inefficient. Having to give yourself the same advice, or have the same conversation with yourself, over and over again also seems pretty inefficient. Recently, I've made significant progress with actually causing internal shifts, and the advice... well, you've probably heard it before. But hopefully, this time you'll really get it.
Signs you might not be actually updating
Plausible hypotheses
Plausible hypothesis 1: Some things take longer to digest than others. Maybe you just need time to actually update your models.
Plausible hypothesis 2: If you change a fundamental node in your 'belief network', it can be hard to change your patterns of behaviour and reaction. You might not believe thing X but still behave as if you believe thing X, because you are mostly running on auto-pilot and habits are hard to break out of. This is especially salient when a piece of actual behaviour is 'far away' from the node that has been changed (so that the two seem unrelated at a glance).
Plausible hypothesis 3: A lot of the things people are trying to teach you are 'purple knowledge'. This may mean you just need lots of gesturing at the thing, or to develop a certain intuition, before it actually makes sense.
I think it’s likely these hypotheses play at least some role in what is happening. However, in my case something else was playing a larger role.
What was going wrong for me
The hypothesis that seems right for my situation: I was not really listening to some parts of me. In an attempt to listen to all parts of me, I was doing a few things that would cause the process to fail:
The thing that changed and allowed me to actually start updating more efficiently was that I actually started believing that all parts of me are pretty smart. I started believing this because I started actually listening to myself and realised that these parts of me weren't saying the 'obviously wrong' things I thought they were saying. I stopped just listening to experts and going 'what they are saying makes sense', and started having conversations where I entirely let the part of me that disagreed say all the reasons it disagreed and 'fight' the expert. I allowed that part of me to have contact with the world, which meant that part of me could learn. And it worked.
This whole post is something you’ve probably heard before - “listen to all parts of you”, “don’t write the final line”, etc. None of this stuff was new to me, and yet, it feels like a lesson I’ve just learnt. I hope you let the part of you that might think this is all wrong ‘fight’ with me. And hopefully that will cause one of us to actually update towards the truth.