There’s this trap people fall into when writing, especially for a place like LessWrong where the bar for epistemic rigor is pretty high. They have a good idea, or an interesting belief, or a cool model. They write it out, but they’re not really sure if it’s true. So they go looking for evidence (not necessarily confirmation bias, just checking the evidence in either direction) and soon end up down a research rabbit hole. Eventually, they give up and never actually publish the piece.
This post is about how to avoid that, without sacrificing good epistemics.
There’s one trick, and it’s simple: stop trying to justify your beliefs. Don’t go looking for citations to back your claim. Instead, think about why you currently believe this thing, and try to accurately describe what led you to believe it.
I claim that this promotes better epistemics overall than always researching everything in depth.
Why?
It’s About The Process, Not The Conclusion
Suppose I have a box, and I want to guess whether there’s a cat in it. I do some tests - maybe shake the box and see if it meows, or look for air holes. I write down my observations and models, record my thinking, and on the bottom line of the paper I write “there is a cat in this box”.
Now, it could be that my reasoning was completely flawed, but I happen to get lucky and there is in fact a cat in the box. That’s not really what I’m aiming for; luck isn’t reproducible. I want my process to robustly produce correct predictions. So when I write up a LessWrong post predicting that there is a cat in the box, I don’t just want to give my bottom-line conclusion with some strong-sounding argument. As much as possible, I want to show the actual process by which I reached that conclusion. If my process is good, this will better enable others to copy the best parts of it. If my process is bad, I can get feedback on it directly.
Correctly Conveying Uncertainty
Another angle: describing my own process is a particularly good way to accurately communicate my actual uncertainty.
An example: a few years back, I wondered if there were limiting factors on the expansion of premodern empires. I looked up the peak size of various empires, and found that the big ones mostly peaked at around the same size: ~60-80M people. Then, I wondered when the US had hit that size, and if anything remarkable had happened then which might suggest why earlier empires broke down. Turns out, the US crossed the 60M threshold in the 1890 census. If you know a little bit about the history of computers, that may ring a bell: when the time came for the 1890 census, it was estimated that tabulating the data would be so much work that it wouldn’t even be done before the next census in 1900. It had to be automated. That sure does suggest a potential limiting factor for premodern empires: managing more than ~60-80M people runs into computational constraints.
Now, let’s zoom out. How much confidence should I put in this theory? Obviously not very much - we apparently have enough evidence to distinguish the hypothesis from entropy, but not much more.
On the other hand… what if I had started with the hypothesis that computational constraints limited premodern empires? What if, before looking at the data, I had hypothesized that modern nations had to start automating bureaucratic functions precisely when they hit the same size at which premodern nations collapsed? Then this data would be quite an impressive piece of confirmation! It’s a pretty specific prediction, and the data fits it surprisingly well. But this only works if I already had enough evidence to put forward the hypothesis, before seeing the data.
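To make that intuition a bit more concrete, here's a toy Bayesian sketch (my own framing and made-up numbers, not anything from the empire data): in odds form, the posterior is just the prior odds times the likelihood ratio, so the same piece of evidence moves you very different distances depending on how much prior support the hypothesis honestly had before you saw the data.

```python
def posterior_odds(prior_odds, likelihood_ratio):
    """Bayes' rule in odds form: posterior odds = prior odds * likelihood ratio."""
    return prior_odds * likelihood_ratio

# Pretend the 1890-census coincidence is 20x more likely if the
# computational-constraint theory is true than if it's false.
# (This number is invented purely for illustration.)
likelihood_ratio = 20.0

# Case 1: the hypothesis was stated in advance, with real prior support
# (say, even odds before seeing the data).
advance = posterior_odds(prior_odds=1.0, likelihood_ratio=likelihood_ratio)
# -> 20:1 in favor; the data is genuinely impressive confirmation.

# Case 2: the hypothesis was generated *from* the data. It was one of
# many patterns we might have noticed, so its honest prior odds are tiny.
post_hoc = posterior_odds(prior_odds=0.01, likelihood_ratio=likelihood_ratio)
# -> roughly 0.2, i.e. still more likely false than true.

print(advance, post_hoc)
```

Same data, same likelihood ratio; the difference is entirely in the prior odds, which is exactly the part of the story that the *process* (hypothesis first, or data first?) determines.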
Point is: the amount of uncertainty I should assign depends on the details of my process. It depends on the path by which I reached the conclusion.
This carries over to my writing: if I want to accurately convey my uncertainty, then I need to accurately convey my process. Those details are relevant to how much certainty my readers should put in the conclusion.
So Should I Stop Researching My Claims?
No. Obviously researching claims still has lots of value. But you should not let uncertainty stop you from writing things up and sharing them. Just try to accurately convey your uncertainty, by communicating the process.
Bad Habits
It’s been pointed out before that most high schools teach a writing style in which the main goal is persuasion or debate. Arguing only one side of a case is encouraged. It’s an absolutely terrible habit, and breaking it is a major step on the road to writing the sort of things we want on LessWrong.
There’s a closely related sub-habit in which people try to only claim things with very high certainty. This makes sense in a persuasion/debate frame - any potential loophole could be exploited by “the other side”. Arguments are soldiers; we must show no weakness.
Good epistemic habits include living with uncertainty. Good epistemic discourse includes making uncertain statements, and accurately conveying our uncertainty in them. Trying to always research things to high confidence, and never sharing anything without high confidence, is a bad habit.
Takeaway
So you have some ideas which might make cool LessWrong posts, or something similar, but you’re not really confident enough that they’re right to put them out there. My advice is: don’t try to persuade people that the idea is true/good. Persuasion is a bad habit from high school. Instead, try to accurately describe where the idea came from, the path which led you to think it’s true/plausible/worth a look. In the process, you’ll probably convey your own actual level of uncertainty, which is exactly the right thing to do.
… and of course don’t stop researching interesting claims. Just don’t let that be a bottleneck to sharing your ideas.
Addendum: I'm worried that people will read this post, think "ah, so that's the magic bullet for a LW post", then try it, and be heartbroken when their post gets like one upvote. Accurately conveying one's thought process and uncertainty is not a sufficient condition for a great post; clear explanation and novelty and interesting ideas all still matter (though you certainly don't need all of those in every post). Especially clear explanation - if you find something interesting, and can clearly explain why you find it interesting, then (at least some) other people will probably find it interesting too.