This post is roughly analogous to the “before” photos that you see people use to demonstrate the effects of dieting, except I want to use it as a sort of time capsule to compare my conception of myself as a thinker now to that conception in [some] years’ time, when I am further along in my studies of rationality and metarationality.

To contextualise the picture of a flabby, sad-looking brain that I take today, I should probably describe my exploration of those concepts so far.

Rationality first: I study in a field that is somewhat based on principles of rationality, but in terms of day-to-day thinking it relies much more on pattern recognition and intuition built from experience than on genuine reasoning from first principles. I’d like to start integrating Bayesian thinking into my study, but I haven’t yet found a really good layman’s introduction.
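
For concreteness, here’s a minimal sketch of the kind of Bayesian update I mean- the diagnostic-test scenario and all of its numbers below are invented purely for illustration:

```python
# Bayes' rule: P(H|E) = P(E|H) * P(H) / P(E)
# Hypothetical numbers: a condition with a 1% base rate,
# tested for with 90% sensitivity and a 5% false-positive rate.
prior = 0.01              # P(H): probability of the condition before testing
p_pos_given_h = 0.90      # P(E|H): chance of a positive test if the condition is present
p_pos_given_not_h = 0.05  # P(E|~H): chance of a positive test if it is absent

# Total probability of a positive test, P(E), by the law of total probability
p_pos = p_pos_given_h * prior + p_pos_given_not_h * (1 - prior)

# Posterior: probability of the condition given a positive test
posterior = p_pos_given_h * prior / p_pos
print(f"P(condition | positive test) = {posterior:.3f}")  # ~0.154
```

The punchline is that even after a positive test, the low base rate keeps the posterior around 15%- exactly the kind of correction that intuition and pattern recognition tend to miss.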

In my own time, I’m involved in effective altruism, a community that highly prizes rational thinking, and as part of that I attend a weekly discussion group which explores, amongst other things, the philosophical basis of altruism and reason. In the last few months, I’ve also got really into Scott Alexander; his ability to imagine alternatives to systems and norms so widely accepted that we don’t even realise we’re using them is incredibly impressive, and he’s also convinced me of the utility of extreme hypothetical scenarios for solving problems in the actual, real world. More and more of my sentences these days start with “I can/can’t imagine a world in which…”, which is kind of a fun way to think about problems or values.

In terms of metarationality, I don’t think I’ve even scratched the most superficial surface, but I also think I’ve progressed significantly since this time a few years ago. I had to study some very basic epistemology at school, but I really didn’t enjoy it- I couldn’t see how mulling over whether it was possible to know anything made any difference to anything I could practically achieve or change. I think it did affect me, though: compared to my peers, I am somewhat more likely to ask how we know the things we’re taught.

It’s kind of weird that we can think and know without knowing how. I don’t consciously think about walking anymore, but I guess at some point I did; thinking feels like the opposite, in that I’ve just always been doing it and have only now started to think about how. I get the sense that, as with walking, thinking about it too much increases the likelihood of tripping up, but I hope it will also take me in some interesting new directions.

From a cognitive point of view, I guess I see myself as basically a kind of information-processing machine, taking sensory input from the world and using it to create new models of the world and adapt existing ones, which I can then use to make predictions and guide my behaviour. One of the valuable things we explored in this epistemology class was the idea that we all think and know in models, using differently detailed models at different times. Even this incredibly basic idea, in my experience, is under-utilised in science teaching- we get taught in a way that suggests “this is the way things are”, rather than “this is one way of thinking about the world, which has a) a historical and social context and b) obvious restrictions or areas of deviation from observed experience”.

Given that lots of the discrepancies between our models and the real, observed world probably come from applying inappropriate models rather than from wrong ones, this seems like a worthwhile idea. A clearer way to put it: a 1:250,000 map is a correct representation of a certain area of land, but it might not be the right one to use when navigating in a snowstorm- in this case it’s not that the model itself is bad, wrong, or internally inconsistent, it’s just that it’s being misapplied.

The machine which makes and uses these models needs to be maintained, with things like sleep, food, exercise, and (especially) other people. It’s quite an unreliable machine; its processing power and its ability to make good predictions about the world (and good decisions about its own maintenance) are highly variable, which is aggravating. On the other hand, it has a fairly un-machinelike ability to fix and, in some ways, improve itself.

To this end, I have a rough plan for my continued exploration of metarationality, and in particular of the weaknesses and limitations of the highly positivist mechanisms that my field and culture rely on so thoroughly that we don’t even know we’re using them.

I want to continue to explore the LessWrong sequences and the Slate Star Codex archive, and eventually meaningness.com; but I also want to learn about these ideas from a more traditional perspective as well, such as through a university-level philosophy course. A continuing theme of my metacognitive growth in the past few years has been finding that basically none of my ideas are original and that, in almost all cases, some bearded guy in Ancient Greece has already completely destroyed my position. I think a weakness of the “New Rationalist” community, as embodied by places like LessWrong, is probably a lack of traditional, structured education, which means they end up re-inventing the wheel or, less seriously, using their own idiosyncratic terms for ideas that are more widely known by other names. This limits the ability of outsiders to debate and critique their ideas and creates a nearly incestuous culture of in-jokes and knowing nods. (I’m not saying that this is how things are, or indeed that it would be a bad thing- cliques, in-jokes and incredibly niche references are all incredibly fun- but such a setup wouldn’t lend itself particularly well to actual intellectual progress.)

By approaching the problem of what, how, and why we think from both traditional and more autodidactic angles (using my actually-not-that-useful superpower of being able to read really fast), I’m hoping to give myself a broader and more stable knowledge base to work with, and also the ability to sound impressive to people who aren’t members of tiny internet communities.

The other risk is of course completely losing my own grip on reality and sanity; different people (or machines) seem to have different tolerances for uncertainty, and I know that historically mine is relatively low. Losing my ability to feel certainty about the most foundational aspects of my existence, and even about certainty itself, might have a highly deleterious impact on my emotional and psychological wellbeing- I’m sure that, like me, lots of people have experienced reading things that make their heads spin and force them to step away from the screen for a few minutes just to feel stable again. My mitigation of this risk relies largely on that stepping-away policy, on engaging with other people, and on activities like exercise that don’t care what your epistemic confidence in them is- running really far hurts whether you think you exist or not. I’ve also found the “Replacing Guilt” series on mindingourway.com to be grounding, calming, and encouraging in a way that no other “self-help”-y resource has really managed before, so I’d encourage you to check that out.

Of course, there’s a non-zero probability that this is the last thing I ever write on the matter and none of this is ever worth anything, but I’ve enjoyed writing it so that’s probably enough in and of itself.

NB this piece was crossposted from my personal blog, which can be found at jospus.com.

"This post is roughly analogous to the “before” photos that you see people use to demonstrate the effects of dieting, except I want to use it a sort of time capsule to compare my conception of myself as a thinker now to that conception in [some] years time, when I am further along in my studies of rationality and metarationality."

I like this, and I think it's a helpful exercise to do! It seems like you've been around the community and exposed to its ideas for at least some amount of time prior to this post; if you can recall a snapshot of yourself from before experiencing this community and its ideas, that could be a good comparison or reference point to explore as well.

What explicit (specific) methodology(ies) will you use to assess and compare yourself across snapshots? And how will said methodology(ies) compare to or build off of extant ones produced by other Rationalists, philosophers, people you think have good judgement, etc.?

I haven't written a snapshot like the one you did / what you're proposing, but whenever I reflect on my current state of being and conceptualise what "level" I'm at with different skills, I have a really hard time coming up with any precise or quantitative comparisons between my state of being at one time versus another... it's all very fuzzy and qualitative and not very concrete. Hence the above two questions, and I'd be interested to hear what you and others think about how one might build methodologies for comparing one's "level" at a certain state of being to another at a different time. I have the same issue when I reflect on my university experience and how it changed me: I know it changed me thoroughly and in significant ways, but I couldn't really tell you exactly how or why. Ditto with Rationality and engaging with this community and its ideas since 2015 / 2016.

I know fuzzily that I'm "more better" at quite a few different things: my writing and analytical skills have improved, I have better control over my emotions and mental states (and thus a greater capability to regulate myself generally), better social skills, more friends, a stronger sense of self (what I like, dislike, strive for, have to protect, etc.) despite keeping my identity small all this time, keener intellectual ability, and more. I'm even better at coping with long-standing struggles that have challenged me often and significantly throughout my life, such as ADHD and chronic depression.

Huh. Maybe some of the things in the above paragraph can be used as comparative reference points. Probably! (had this thought right after writing that paragraph and wanted to leave it and this sentence in to show some "thinkery" in progress; that feels important for some reason)

"One of the valuable things we explored in this epistemology class was the idea that we all think and know in models, using differently detailed models at different times."

Map versus territory (i.e. differing models and other such things) is a great concept and is explored further here on LessWrong; here's the page with all such tagged posts: https://www.lesswrong.com/tag/map-and-territory

"The other risk is of course completely losing my own grip on reality and sanity; different people (or machines) seem to have different tolerances of uncertainty, and I know that historically mine is relatively very low. Losing my ability to feel certainty about the most foundational aspects of my existence, and even about certainty itself, might have a highly deleterious impact on my emotional and psychological wellbeing- I’m sure that, like me, lots of people have experienced reading things that literally make their head spin and forced them to step away from the screen for a few minutes just to feel stable again."

Undergoing major shifts in identity, philosophical thought, spiritual thought, and/or other significant things humans think about can quickly lead to that foundationless feeling of terrible uncertainty. Such an experience is something that many individuals have gone through and survived, but I'm glad you've explicitly pointed it out as something you're worried about, because you do want to be cognizant of that kind of experience occurring while rewriting your own self's software :) In spiritual circles such an experience tends to be called a "Dark night of the soul", but I'm not certain what the more secular term is, though I have seen the concept / experience discussed here on LessWrong several times; maybe another individual can provide links to those or other helpful posts on the topic.

Other than reality itself and the laws that govern it, I don't think there is actually a "true" foundation for any thoughts, intellectual edifices, paradigms, etc. I've been operating under this belief, and under the belief that there is no inherent meaning to anything (the universe does not care), for a few years now. What's helped me is thinking about how the world might or might not look if certain ideas were true, reflecting on what Wittgenstein's "Language Games" (https://en.wikipedia.org/wiki/Language_game_(philosophy)) mean / entail / imply, and remembering that if I'm floundering in the dark, I can choose an extant intellectual / ontological foundation to use as a life raft temporarily (though only plausible ones). Also helpful is the more-than-reasonable certainty that tomorrow morning the sun will have risen, people will go about their days, life will continue for another 24 hours, and so on. I find that reassuring. Sometimes it's better not to think too much about some particular thing, for at least some of the time, and just "be" instead.

"Of course, there’s a non-zero probability that this is the last thing I ever write on the matter and none of this is ever worth anything, but I’ve enjoyed writing it so that’s probably enough in and of itself."

You enjoyed writing it and have also described what seems like a good exercise for people to do; given either of those things independently, or both together, I believe your post was worth something rather than nothing, yes? I think so :) After reading your post I now want to do the same exercise and see what I find / come up with.

Happy writing! Cheers, Willa

Thanks for your kind, encouraging, and thought-provoking comment, Willa :)

Definitely the ideal would have been to write this earlier on- the post itself has been on my list to write for a long time, which probably didn't help. I like the idea of having some kind of objective comparison method- the ideal would be some kind of Rationality score, but I don't know whether such a thing does or could ever exist, or whether it would reflect enough of the breadth of change one is likely to experience to be even vaguely useful.

I think "dark night of the soul" works pretty well as a descriptor of the experience we're talking about, although to me it conjures some images of being either guilty or having to make a difficult choice, rather than necessarily specifically having your cognitive foundations shaken. Whatever we call it, I would- as you suggested- be interested to hear how other's dealt with it, and how they managed to fulfil their responsibilities in other areas of life when at times everything else can suddenly seem quite unimportant.

Your ideas for avoiding our Dark Night sound reasonable, and it comforts me that you seem to be a lot further on in your "thought journey" and still find solace somewhere; I guess my worry would be that life rafts might not be enough to keep me functioning for the rest of my life, and I would like a reasonably solid intellectual terra firma to act from. I think your idea of grounding yourself in very concrete, predictable things is likely to help, and I have heard of similar techniques being used for anxiety attacks and the like.

I'm interested to hear what your "snapshot" looks like- even if you're not at the beginning of your journey, it's probably worth doing both for yourself and for other travellers. And thanks again for your encouraging and thoughtful reply- lots to consider!

Cheers :)

I think we may not ever reach some sort of objective comparison method, but I suspect a list of achievements, habits / actions that become habitual for someone, and so on could be created that would map decently onto some notion of what Rationalist Self-Improvement ought to look like. For example, with few exceptions, a person who does not exercise regularly nor eat a decent diet, but manages to change to where they regularly exercise and eat a decent diet, would be a great example of Rationalist Self-Improvement (to be a Rationalist and not do those things is definitely a failure mode, and I'm still trying to overcome that failure mode myself). I think mental habits, modes of thinking, identity things, and so on related to what someone ought to be like / be capable of as a "Rationalist" would be waaaaaay more difficult to add to such a list without being too alienating or too arbitrary, but I suspect there are things that could be added.

Why does "dark night of the soul" conjure images of being either guilty or having to make a difficult choice in your mind? Also, I believe the term applies to even more situations than just cognitive foundations being kicked out from under oneself, it is very applicable to similar situations occurring around identity, the self, and more, I think.

I've been told that Daoism is a particularly good tradition to look into for living foundationlessly. I don't know much about it, but I've bought a few books and will begin learning more about it. Thus far what's helped me the most, intellectually anyways, is probably postmodernism and continental philosophy more generally. All those esoteric French philosophers, basically. I've done pretty okay with functioning while not "having a center", or while not having a foundation for self nor identity nor ideas. Those philosophers and ideas have been helpful, though the praxis of meditation and mindfulness has probably been more helpful.

I hope your last month has gone well! How have your considerations gone?

I've decided that I do want to do a snapshot; however, my plan is to form a group to do the Hammertime sequences. Upon forming a group, and prior to doing those sequences, I'll write a snapshot of my present self (with what I can remember of how I was at previous times), so that after doing Hammertime I can write a new snapshot and find out how much doing the sequences was or wasn't beneficial, and in what directions if any. Want to join that group?