If you don't just learn what someone's opinion is, but also how they arrived at it and how confidently they hold it, that can be much stronger evidence that they're stupid and bad. Arguably over half the arguments one encounters in the wild could never be made in good faith.
One note: the framing here looks potentially useful for addressing internalized shame over being wrong, but not anxiety over being socially punished (where none of the math applies; what matters is only what other people think the math is, or whatever process they use to allocate beliefs about wrong-bad-ness).
This post relies on several assumptions that I believe are false:
1. The rationalist community has managed to avoid bringing in any outside cultural baggage, so when someone admits they were wrong about something important (and isn't making a strategic disclosure), people will only raise their estimate of incompetence by a Bayesian 0.42%.
2. The base rate of being "stupid and bad" by rationalist standards is 5% or lower. (The sample has been selected for being better than average, but the implicit standards are much higher.)
3. When people say they are worried about being "wrong" and therefore "stupid" and "bad", they are referring to things with standard definitions that are precise enough to do math with.
4. The individuals you're attempting to reassure with this post get enough of a spotlight that their 1 instance of publicly being wrong is balanced by a *salient* memory of the 9 other times they were right.
5. Not being seen as "stupid and bad" in this community is sufficient for someone to get the things they want/avoid the things they don't want.
6. In situations where judgements must be made with limited information (e.g., job interviews), using a small sample of data is worse than defaulting to base rates. (Thought experiment: you're at a tech conference looking for interesting people to talk to; do you bother approaching anyone wearing a suit on the chance that a few hackers like dressing up?)
> The rationalist community [...] rationalist standards [...] in this community
Uh, remind me why I'm supposed to care what some Bay Area robot cult thinks? (Although I heard there was an offshoot in Manchester that might be performing better!) The scare quotes around "rationalist" "community" in the second paragraph are there for a reason.
The OP is a very narrowly focused post, trying to establish a single point (Being Wrong Doesn't Mean You're Stupid and Bad, Probably) by appealing to probability theory as normative reasoning (and some plausible assumptions). If you're worried about someone thinking you're stupid and bad because you were wrong, you should just show them this post, and if they care about probability theory as normative reasoning, then they'll realize that they were wrong and stop mistakenly thinking that you're stupid and bad. On the other hand, if the person you're trying to impress doesn't care about probability theory as normative reasoning, then they're stupid and bad, and you shouldn't care about impressing them.
> outside cultural baggage
Was there ever an "inside", really? I thought there was. I think I was wrong.
> people will only raise their estimate of incompetence by a Bayesian 0.42%.
But that's the correct update! People who update more or less than the Bayesian 0.42% are wrong! (Although that doesn't mean they're stupid or bad, obviously.)
> they are referring to things with standard definitions that are precise enough to do math with.
This is an isolated demand for rigor and I'm not going to fall for it. I shouldn't need to have a reduction of what brain computations correspond to people's concept of "stupid and bad" in order to write a post like this.
> using a small sample of data is worse than defaulting to base rates
What does this mean? If you have a small sample of data and you update on it the correct amount, you don't do worse than you would have without the data.
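To illustrate with a quick Monte Carlo sketch (my own made-up numbers, not anything from the post or the parent comment): when predicting a rare trait from one noisy binary signal, correctly-updated posteriors score better on average than ignoring the signal and always quoting the base rate.

```python
import random

random.seed(0)

BASE_RATE = 0.05            # P(trait): prior probability of the trait we care about
P_SIGNAL_GIVEN_TRAIT = 0.9  # P(signal | trait)
P_SIGNAL_GIVEN_NONE = 0.2   # P(signal | no trait)

def posterior(signal):
    """P(trait | observed signal), by Bayes' theorem."""
    p_sig_t = P_SIGNAL_GIVEN_TRAIT if signal else 1 - P_SIGNAL_GIVEN_TRAIT
    p_sig_n = P_SIGNAL_GIVEN_NONE if signal else 1 - P_SIGNAL_GIVEN_NONE
    return p_sig_t * BASE_RATE / (p_sig_t * BASE_RATE + p_sig_n * (1 - BASE_RATE))

def brier(prob, outcome):
    """Squared error of a probabilistic prediction (lower is better)."""
    return (prob - outcome) ** 2

trials = 100_000
loss_base = loss_post = 0.0
for _ in range(trials):
    trait = random.random() < BASE_RATE
    signal = random.random() < (P_SIGNAL_GIVEN_TRAIT if trait else P_SIGNAL_GIVEN_NONE)
    loss_base += brier(BASE_RATE, trait)          # ignore the sample; quote the base rate
    loss_post += brier(posterior(signal), trait)  # update on the sample

# The posterior's average loss comes out lower than the base rate's.
print(loss_base / trials, loss_post / trials)
```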
> you're at a tech conference and looking for interesting people to talk to, do you bother approaching anyone wearing a suit on the chance that a few hackers like dressing up?
Analyzing the signaling game governing how people choose to dress at tech conferences does look like a fun game-theory exercise; thanks for the suggestion! I don't have time for that now, though.
> 3. When people say they are worried about being "wrong" and therefore "stupid" and "bad", they are referring to things with standard definitions that are precise enough to do math with.
I'd highlight the likelihood of conflicting definitions, precision or no precision.
Sometimes, people are reluctant to admit that they were wrong about something, because they're afraid that "You are wrong about this" carries inextricable connotations of "You are stupid and bad." But this behavior is, itself, wrong, for at least two reasons.
First, because it's evidential decision theory. The so-called "rationalist" "community" has a lot of cached clichés about this! A blank map does not correspond to a blank territory. What's true is already so; owning up to it doesn't make it worse. Refusing to go to the doctor (thereby avoiding encountering evidence that you're sick) doesn't keep you healthy.
If being wrong means that you're stupid and bad, then preventing yourself from knowing that you were wrong doesn't stop you from being stupid and bad in reality. It just prevents you from knowing that you're stupid and bad—which is an important fact to know (if it's true), because if you don't know that you're stupid and bad, then it probably won't occur to you to even look for possible interventions to make yourself less stupid and less bad.
Second, while "You are wrong about this" is evidence for the "You are stupid and bad" hypothesis if stupid and bad people are more likely to be wrong, I claim that it's very weak evidence. (Although it's possible that I'm wrong about this—and if I'm wrong, it's furthermore possible that the reason I'm wrong is because I'm stupid and bad.)
Exactly how weak evidence is it? It's hard to guess directly, but fortunately, we can use probability theory to reduce the claim into more "atomic" conditional and prior probabilities that might be easier to estimate!
Let W represent the proposition "You are wrong about something", S represent the proposition "You are stupid", and B represent the proposition "You are bad."
By Bayes's theorem, the probability that you are stupid and bad given that you're wrong about something is given by—
$$P(S,B|W) = \frac{P(W|S,B)\,P(S,B)}{P(W|S,B)\,P(S,B) + P(W|S,\neg B)\,P(S,\neg B) + P(W|\neg S,B)\,P(\neg S,B) + P(W|\neg S,\neg B)\,P(\neg S,\neg B)}$$
For the purposes of this calculation, let's assume that badness and stupidity are statistically independent. I doubt this is true in the real world, but because I'm stupid and bad (at math), I want that simplifying assumption to make the algebra easier for me. That lets us unpack the conjunctions, giving us—
$$P(S,B|W) = \frac{P(W|S,B)\,P(S)\,P(B)}{P(W|S,B)\,P(S)\,P(B) + P(W|S,\neg B)\,P(S)\,P(\neg B) + P(W|\neg S,B)\,P(\neg S)\,P(B) + P(W|\neg S,\neg B)\,P(\neg S)\,P(\neg B)}$$
This expression has six degrees of freedom: P(S), P(B), P(W|S,B), P(W|S,¬B), P(W|¬S,B), P(W|¬S,¬B). Arguing about the values of these six individual parameters is probably more productive than arguing about the value of P(S,B|W) directly!
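For concreteness, here's a minimal Python sketch of that decomposition (the function and argument names are just illustrative labels for the six parameters above, not anything from an existing library):

```python
def posterior_stupid_and_bad(p_s, p_b, p_w_sb, p_w_s_nb, p_w_ns_b, p_w_ns_nb):
    """Compute P(S,B|W), assuming S and B are independent a priori.

    p_s       -- prior P(S)
    p_b       -- prior P(B)
    p_w_sb    -- likelihood P(W|S,B)
    p_w_s_nb  -- likelihood P(W|S,not-B)
    p_w_ns_b  -- likelihood P(W|not-S,B)
    p_w_ns_nb -- likelihood P(W|not-S,not-B)
    """
    numerator = p_w_sb * p_s * p_b
    denominator = (
        p_w_sb * p_s * p_b
        + p_w_s_nb * p_s * (1 - p_b)
        + p_w_ns_b * (1 - p_s) * p_b
        + p_w_ns_nb * (1 - p_s) * (1 - p_b)
    )
    return numerator / denominator
```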
Suppose half of all people are stupid (P(S)=0.5), one-tenth of people are bad (P(B)=0.1), and that most people are wrong, but that being stupid or being bad each makes you somewhat more likely to be wrong, to the tune of P(W|¬S,¬B)=0.8, P(W|S,¬B)=P(W|¬S,B)=0.85, and P(W|S,B)=0.9. So our posterior probability that someone is stupid and bad given that they were wrong once is
$$P(S,B|W) = \frac{(0.9)(0.5)(0.1)}{(0.9)(0.5)(0.1) + (0.85)(0.5)(0.9) + (0.85)(0.5)(0.1) + (0.8)(0.5)(0.9)} \approx 0.0542$$
But the base rate of being stupid and bad is (0.1)(0.5) = 0.05. Learning that someone was wrong only raised our probability that they are stupid and bad by 0.0042. That's a small number that you shouldn't worry about!
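(Checking the arithmetic with the sketch above; again, illustrative code rather than anything from the original post:)

```python
prior = 0.5 * 0.1  # P(S) * P(B) = 0.05
posterior = posterior_stupid_and_bad(
    p_s=0.5, p_b=0.1,
    p_w_sb=0.9, p_w_s_nb=0.85, p_w_ns_b=0.85, p_w_ns_nb=0.8,
)
print(round(posterior, 4))          # 0.0542
print(round(posterior - prior, 4))  # 0.0042
```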