I'm writing a book about epistemology. It's about The Problem of the Criterion, why it's important, and what it has to tell us about how we approach knowing the truth.
I've also written a lot about AI safety. Some of the more interesting stuff can be found at the site of my currently-dormant AI safety org, PAISRI.
To close off this ramble, I'm a bit disheartened. My fun, easy writing was more popular than my attempts at contributing to substantive intellectual progress. It's okay that it flopped, and it makes sense that that kind of writing is harder; it's just a disappointing update about the incentive landscape for writing.
This has been my experience also. Only rarely do things I put significant effort into perform well in terms of votes. It's often the quick, casually written pieces that I barely edited that people love.
I think it's in part due to selection effects. The things I can write quickly in an hour or two and be happy to publish are the things I can explain easily because they're accessible to and understood by everyone. The things I have to write slowly cover more complicated topics by nature, so they interest fewer people and it's less likely that my presentation will connect with a large audience.
(The would-be counterexample is that my most highly upvoted post of all time took a lot of effort to produce, but it was about an extremely accessible topic: the death of my mother.)
What I can tell you, though, is that having a catalog of deep posts that I put a lot of effort into builds on itself over time. Those posts don't connect with everyone, but then I'll meet someone and find out that what I wrote was life-changing for them.
It's hard for me to know if the effort I put into them was worth it, in the sense of whether a less polished version of the same post would have connected equally well. I'm not sure; it's hard to test, especially since I often spend a long time writing a post to get clear on the ideas myself, not just to massage the language. But I can say that the posts I labored over are among my most impactful, even if they were not popular on initial publication.
- This is a category argument that I explicitly avoid making and don't think is meaningful: the word itself does not mean anything and arguing over it is meaningless. You seem to really want to do that anyway because you can support your argument better on basically definitional grounds than factual ones.
This is an extremely weird position to take, given that the central claim of this entire post seems to be that rationality fits the definition of a religion. What do you think you're arguing for in this post, then, if not that?
Choosing Islam, Judaism, etc. as comparison points instead of Scientology, Mormonism, or any other smaller, newer, or more modern movement is just assuming your conclusion.
I don't know what point you think you're making here, but these are also obviously religions.
How many non-religions have had people write thousand-word posts denying they are religions?
Many, actually. I've read things arguing variously that:
You know what I've read zero words about? Arguments that:
This, to my reading, is evidence that the things people argue are or aren't religions are in fact not religions, or at least don't fit within the category of religion as traditionally understood. Your meme doesn't really prove anything. It simply asserts that, no, actually, the existence of arguments that something isn't a religion is evidence in favor of it being a religion, but it does no work to establish such a claim, and it seems contrary to the evidence I listed above.
Now it would be much better to argue on the facts, but unfortunately it's notoriously hard to define the category of religion accurately, so I'm not sure how we can really do that. Assuming you haven't cracked the central question of religious studies (a question about which there is only limited consensus), it's going to be quite hard to look at the features of rationality and say whether or not it belongs in the category of religion. That means we're left to ask whether it looks like a central example of a religion, which it clearly doesn't, since its status is a matter of contention. So perhaps the only interesting question is not whether it's a religion but what features of religion it shares (which you do get into, to your credit, though that's different from arguing that rationality is a religion).
Pretty much any other major feature of religion you can name is present in rationalism. Rationalism's resemblance to traditional religion is so extreme that even if rationalism is not, technically, a religion, this seems like it is a pedantic distinction. It certainly has very distinct beliefs and rituals of its own, and only narrowly misses those points of comparison on what seem to be technicalities.
You have come to the wrong place if you want to make an argument that's not going to be refuted by pedantic distinctions!
But I find your arguments unconvincing on the whole, no pedantry required. My reading is that you've successfully argued that rationality is a movement, not too different from many other movements like veganism or communism. The content is different, but the structures are similar. To the extent that movements look like religions, it's because both are for humans, and both work or don't work based on how successfully they coordinate humans to take particular actions.
As I see it, the fact that you have to argue that it's a religion is a bad sign: it means rationality isn't passing the smell test as a religion. You needed to write a post to argue that rationality is a religion, which I view as evidence against it being one, since most religions are clearly religions and no one writes posts arguing that Islam or Hinduism is a religion (if anything, people sometimes write the opposite for various reasons!). If it's in the category of religions, it's a marginal case at best.
(And just to lay my cards on the table, I say this as a religious rationalist, in that I'm religious and also part of the rationalist movement. In fact, I've put some effort into trying to convince rationalists to be religious because I think it would be good for them!)
I'm lucky to live in a city (San Francisco) with multiple art museums. I have memberships. I enjoy going back over and over because I discover something new every time. Sometimes it's something in a beloved, favorite piece I never noticed before. Other times something catches my eye about a piece I had previously ignored, not even noticing it was there. Sometimes I go and spend an hour in a single room, really drinking in the pieces there. The depth of art is not necessarily endless, but it's deep enough that I doubt I will ever explore it all.
Do you ever find yourself slipping into a state (possibly gradually) of over-reliance on the LLM?
I don't really buy the premise of over-reliance. I can only rely on a technology too much with respect to some goal. For example, if I need to be able to survive in the wilderness, then using a microwave causes my skill at starting fires to dwindle, because I'm no longer practiced at starting fires.
I instead think in terms of what I'm trying to accomplish. If all I want is a picture, not to learn how to draw, and the alternative was no picture, then I'm pretty happy to get a diffusion model to draw me a picture. Similarly, when writing, if the choice is fail to finish a blog post because I'm stuck or get unstuck with Claude's help, I choose Claude, even if that means I get marginally worse at solving writer's block on my own. As long as I have Claude around, this isn't an issue, and if I lose it, I can go back to doing things the hard way.
For another example, I've been doing ML research lately, in that I'm trying to train new models from scratch. I don't have much background in ML, but I am a professional programmer and have some adjacent experience with data analytics. So I'm using Claude Code to vibe code a lot of this stuff for me, but I'm always setting the direction. As a result, I'm not really learning PyTorch, but then if I had to learn PyTorch I probably wouldn't even bother to invest the time to train new models because it would be more than I had time to take on. So instead I vibe code to see if I can get results, and then learn as I go, filling in my knowledge by looking at the code, fixing problems, and trying to understand how it all works after it's already working.
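To give a flavor of what I mean by vibe coding this stuff, here's a rough, illustrative sketch of the kind of PyTorch scaffolding Claude Code produces and that I then read after the fact to fill in my knowledge. This is a toy example made up for this comment, not my actual research code:

```python
# Illustrative toy example: the sort of training scaffolding Claude Code
# tends to write when asked to "train a small model from scratch".
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

# Synthetic data standing in for whatever I'm actually training on.
X = torch.randn(1024, 16)
y = (X.sum(dim=1, keepdim=True) > 0).float()
loader = DataLoader(TensorDataset(X, y), batch_size=64, shuffle=True)

# A tiny model: a two-layer MLP.
model = nn.Sequential(
    nn.Linear(16, 32),
    nn.ReLU(),
    nn.Linear(32, 1),
)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

# The basic training loop I end up studying after it's already working.
for epoch in range(10):
    total = 0.0
    for xb, yb in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(xb), yb)
        loss.backward()
        optimizer.step()
        total += loss.item() * xb.size(0)
    print(f"epoch {epoch}: loss {total / len(loader.dataset):.4f}")
```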
As Claude and other LLMs become (expectedly) more capable, do you have any preliminary thoughts on how this workflow style may be updated?
The big change would be if Claude got better at writing in my style. The most painful and time-consuming part is still editing. Claude can only sort of approximate good writing, much less good writing of the type I would personally produce. It would be a big speed-up if I could just dump thoughts and have Claude sort them out and turn them into easily readable text. Even just being able to write in the maximally Gordon way that's not fit for other people to read (which is a little bit of what you're getting in this comment; it's stream-of-thought and not polished for concision or readability) and have that turned into nice text would be a big deal.
I admit I didn't read the whole thing, but I'm pretty sure from skimming that you're simply describing memory reconsolidation in Jungian language.
ETA (10:51): So, uh, not to be too pointed here, but why not just say you're doing memory reconsolidation? Why invent a new word?
Yes, exactly. For what it's worth, what you're getting at in this post is roughly why I wrote Fundamental Uncertainty (or am still writing it, as the final version is still under revision), where I try to argue that epistemic uncertainty matters a lot, is pervasive and unavoidable, and therefore causes problems when you try to build AI that's aligned. In the book I don't spend much time on AI, but I wrote it because when I was working on AI alignment I saw how much this issue mattered, so I set out to convince others it's important. My hope is that once the book is published I'll have time to focus more on the AI side of things, using the book as a useful referent for loading up the worldview where uncertainty is foundational (which seems surprisingly hard to do, for a bunch of reasons).
I forget what's in the deliberate grieving post, but based on what you say here, I'll note that what I have in mind is largely about identity, not plans. As in, the root of emotional processing is attachment not to an idea about plans but to an idea about the self. When one thinks "this is a great plan," the second thought is often "and I'm a great person for coming up with such a great plan." If the plan isn't great, then the person might not be either, and that's way more painful than the plan not being great.
Based on a lot of observations, I see rationalists sometimes manage to get around this because they are far enough on the autism spectrum to just not form a strong sense of identity. More often, though, they LARP at not having a strong sense of identity, and actually have to first get in touch with who they are (as opposed to who they wish they were) to begin to develop the skills to do actual emotional processing instead of bypassing it (and suffering all the usual consequences of suppressing a part of one's being).
Oh, and I should also note, since you mention incentives, that I largely see it as my job as a writer to ignore what my audience wants. This is dangerous advice if taken too far, but what I mean is that people will reward me for writing slop, and I have to decide: am I here to write slop, or am I here to write something else, even if people don't like it as much? I choose to optimize for something other than upvotes, though I do care whether people can make sense of what I'm saying and whether it's worthwhile to them that I said it.