In a nutshell, Applied Rationality is figuring out good actions to take towards a goal. Going meta, by questioning whether the goal itself is good, is necessary and useful. But navigating the existential pitfalls that come with this questioning can feel like a waste of time.
How do you balance going meta with actually doing the work?
Location
Enter the Mars Atrium via University Avenue entrance. We'll meet in front of the CIBC Live Lounge (see picture), which is in the atrium on the ground floor. I'll be wearing a bright neon windbreaker. We'll loiter there until 14:30 and then head somewhere comfier depending on how many people show up.
Reading
An abridged post in which David Chapman frames the problem, its importance, and its common causes of frustration, but offers no solutions.
Please recommend/bring other readings.
Would it be okay to start some discussion about the David Chapman reading in the comments here?
Here are some thoughts I had while reading.
When Einstein produced general relativity, the success criterion was "it reproduces Newton's laws of gravity as a special-case approximation". That is, it had to produce the same predictions as the models that had already been verified as accurate to a certain level of precision.
If more rationality knowledge produces depression and otherwise less stable equilibria within you, then that's not a problem with rationality. Quoting from a LessWrong post: "We need the word 'rational' in order to talk about cognitive algorithms or mental processes with the property 'systematically increases map-territory correspondence' (epistemic rationality) or 'systematically finds a better path to goals' (instrumental rationality)."
A happy, stable, productive you (or at least the previous stable version of you) is a necessary condition of applying "more rationality". If it comes out otherwise, then it isn't rationality; it's some other confused phenomenon, like a crisis of self-consistency. If that crisis happens, and it understandably feels painful, it should eventually produce a better you at the end. If it doesn't, then it wasn't worth starting on the entire adventure, or stressing much about it.
Just to make sure I am not miscommunicating, "a little rationality can actually be worse for you" is totally a real phenomenon. I wouldn't deny it.
I definitely agree that the goal should be to be emotionally healthy while accepting reality as it is, but my point is that those two goals may not always come together.
I suspect that truths that could cause bad mental health/instability probably have the following properties:
Non-local belief changes must be made. That is, you can't compartmentalize the changes to a specific area.
Extreme implications, that is, the truth implies far more drastic consequences than your previous beliefs did.
Contradicts what you deeply believe or value.
These are the properties that I expect make a truth likely to cause mental health problems.