I liked Duncan Sabien's Basics of Rationalist Discourse, but it felt somewhat different from what my brain thinks of as "the basics of rationalist discourse". So I decided to write down my own version (which overlaps some with Duncan's).
Probably this new version also won't match "the basics" as other people perceive them. People may not even agree that these are all good ideas! Partly I'm posting these just out of curiosity about what the delta is between my perspective on rationalist discourse and y'all's perspectives.
The basics of rationalist discourse, as I understand them:
1. Truth-Seeking. Try to contribute to a social environment that encourages belief accuracy and good epistemic processes. Try not to “win” arguments using symmetric weapons (tools that work similarly well whether you're right or wrong). Indeed, try not to treat arguments like soldiers at all.
2. Non-Violence. Argument gets counter-argument. Argument does not get bullet. Argument does not get doxxing, death threats, or coercion.[1]
3. Non-Deception. Never try to steer your conversation partners (or onlookers) toward having falser models. Where possible, avoid saying stuff that you expect to lower the net belief accuracy of the average reader; or failing that, at least flag that you're worried about this happening.
As a corollary:
3.1. Meta-Honesty. Make it easy for others to tell how honest, literal, PR-y, etc. you are (in general, or in particular contexts). This can include everything from "prominently publicly discussing the sorts of situations in which you'd lie" to "tweaking your image/persona/tone/etc. to make it likelier that people will have the right priors about your honesty".
4. Localizability. Give people a social affordance to decouple / evaluate the local validity of claims. Decoupling is not required, and indeed context is often important and extremely worth talking about! But it should almost always be OK to locally address a specific point or subpoint, without necessarily weighing in on the larger context or suggesting you’ll engage further.
5. Alternative-Minding. Consider alternative hypotheses, and ask yourself what Bayesian evidence you have that you're not in those alternative worlds. This mostly involves asking what models retrodict.
Cultivate the skills of original seeing and of seeing from new vantage points.
As a special case, try to understand and evaluate the alternative hypotheses that other people are advocating. Paraphrase stuff back to people to see if you understood, and see if they think you pass their Ideological Turing Test on the relevant ideas.
Be a fair bit more willing to consider nonstandard beliefs, frames/lenses, and methodologies, compared to (e.g.) the average academic. Keep in mind that inferential gaps can be large, most life-experience is hard to transmit in a small number of words (or in words at all), and converging on the truth can require a long process of cultivating the right mental motions, doing exercises, gathering and interpreting new data, etc.
Make it a habit to explicitly distinguish "what this person literally said" from "what I think this person means". Make it a habit to explicitly distinguish "what I think this person means" from "what I infer about this person as a result".
6. Reality-Minding. Keep your eye on the ball, hug the query, and don’t lose sight of object-level reality.
Make it a habit to flag when you notice ways to test an assertion. Make it a habit to actually test claims, when the value-of-information is high enough.
Reward scholarship, inquiry, betting, pre-registered predictions, and sticking your neck out, especially where this is time-consuming, effortful, or socially risky.
7. Reducibility. Err on the side of using simple, concrete, literal, and precise language. Make it a habit to taboo your words, do reductionism, explain what you mean, define your terms, etc.
As a corollary, applying precision and naturalism to your own cognition:
7.1. Probabilism. Try to quantify your uncertainty to some degree.
8. Purpose-Minding. Try not to lose purpose (unless you're deliberately creating a sandbox for a more free-form and undirected stream of consciousness, based on some meta-purpose or impulse or hunch you want to follow).
Ask yourself why you're having a conversation, and whether you want to do something differently. Ask others what their goals are. Keep the Void in view.
As a corollary:
8.1. Cruxiness. Insofar as you have a sense of what the topic/goal of the conversation is, focus on cruxes, or (if your goal shifts) consider explicitly flagging that you're tangenting or switching to a new conversational topic/goal.[2]
9. Goodwill. Reward others' good epistemic conduct (e.g., updating) more than most people naturally do. Err on the side of carrots over sticks, forgiveness over punishment, and civility over incivility, unless someone has explicitly set aside a weirder or more rough-and-tumble space.[3]
10. Experience-Owning. Err on the side of explicitly owning your experiences, mental states, beliefs, and impressions. Flag your inferences as inferences, and beware the Mind Projection Fallacy and Typical Mind Fallacy.
As a corollary:
10.1. Valence-Owning. Err on the side of explicitly owning your shoulds and desires. Err on the side of stating your wants and beliefs (and why you want or believe them) instead of (or in addition to) saying what you think people ought to do.
Try to phrase things in ways that make space for disagreement, and try to avoid socially pressuring people into doing things. Instead, as a strong default, approach people with an attitude of informing and empowering them to do what they want.
Favor language with fewer and milder connotations, and make your arguments explicitly where possible, rather than relying excessively on the connotations, feel, fnords, or vibes of your words.
A longer, less jargony version of this post is available on the EA Forum.
[1] ^
Counter-arguments aren't the only OK response to an argument. You can choose not to reply. You can even ban someone because they keep making off-topic arguments, as long as you do this in a non-deceptive way. But some responses to arguments are explicitly off the table.
[2] ^
Note that "the topic/goal of the conversation" is an abstraction. "Goals" don't exist in a vacuum. You have goals (though these may not be perfectly stable, coherent, etc.), and other individuals have goals too. Conversations can be mutually beneficial when some of my goals are the same as some of yours, or when we have disjoint goals but some actions are useful for my goals as well as yours.
Be wary of abstractions and unargued premises in this very list! Try to taboo these prescriptions and claims, paraphrase them back, figure out why I might be saying all this stuff, and explicitly ask yourself whether these norms serve your goals too.
Part of why I've phrased this list as a bunch of noun phrases ("purpose-minding", etc.) rather than verb phrases ("mind your purpose", etc.) is that I suspect conversations will go better (on the dimension of goodwill and cheer) if people make a habit of saying "hm, I think you violated the principle of experience-owning there" or "hm, your comment isn't doing the experience-owning thing as much as I'd have liked", as opposed to "own your experience!!".
But another part of why I used nouns is that commands aren't experience-owning, and can make it harder for people to mind their purposes. I do have imperatives in the post (mostly because the prose flowed better that way), but I want to encourage people to engage with the ideas and consider whether they make sense, rather than just blindly obey them. So I want people to come into this post engaging with these first as ideas to consider, rather than as commands to obey.
[3] ^
Note that this doesn't require assuming everyone you talk to is honest or has good intentions.
It does have some overlap with the rule of thumb "as a very strong but defeasible default, carry on object-level discourse as if you were role-playing being on the same side as the people who disagree with you".
I don't think treating brevity, for its own sake, as an important rationalist virtue is likely to help maintain or raise the quality of rationalist discourse. It's a poorly defined goal that could easily be misinterpreted as encouraging undesirable tradeoffs at the expense of, for example, clarity of communication, laying out examples to aid in understanding of a point, or making explicit potentially dry details such as the epistemic status of a belief, or the cruxes upon which a position hinges.
There is truth to the points you've brought up, though, and thinking about how brevity could be incorporated into a list of rationalist virtues has brought two ideas to mind:
1. It seems to me that this could be considered an aspect of purpose-minding. If you know your purpose, and keep clearly in mind why you're having a conversation, then an appropriate level of brevity should be the natural result. The costs of brevity, or lack thereof, can be paid as needed according to what best fits your purpose. A good example of this is this post here on LessWrong, and the longer, but less jargony, version of it that exists on the EA Forum.
2. The idea of epistemic legibility feels like it includes the importance of brevity, while also making the tradeoffs that brevity (or its lack) involves more explicit than directly stating brevity as a rationalist virtue. For example, a shorter piece of writing that cites fewer sources is more likely to be read in full rather than skimmed, and more likely to have its sources checked rather than having readers simply hope that they provide the support the author claims. This is in contrast to a longer piece of writing that cites more sources, which allows an author to more thoroughly explain their position, or demonstrate greater support for the claims they make. No matter how long or short a piece of writing is, there are always benefits and costs to be considered.
While writing this out I noticed a specific point you made that did not sit well with me, and which both of the ideas above address: the suggestion that, all else equal, we'd prefer the number of words to be as small as possible.
To me this feels like focusing on the theoretical ideal of brevity at the expense of the practical reality of brevity. All other things are never equal, and I believe the preference should be for having precisely as many words as necessary, for whatever specific purpose and context a piece of writing is intended for.
I realize that "we'd prefer this number to be as small as possible" could be interpreted as equivalent to "the preference should be for having precisely as many words as necessary", but the difference in implications between these phrases, and the difference in their potential for unfortunate interpretations, does not seem at all trivial to me.
As an example, something I've seen discussed both on here and on the EA Forum is the struggle to get new writers to participate in posting and commenting. I feel this struggle very keenly: I started reading LessWrong many years ago, but have (to my own great misfortune) avoided posting and commenting for various reasons. If I think about a hypothetical new poster who wants to embody the ideals and virtues of rationalist discourse, asking them to have their writing use as small a number of words as possible feels like a relatively intimidating request, compared to asking that they consider the purpose and context of their writing and try to find an appropriate length with that in mind. The latter framing also feels much more conducive to experimenting, failing, and learning to do better.