Lumifer comments on Self-Congratulatory Rationalism - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
I think you are talking about what in local parlance is called a "weak prior" vs a "strong prior". Bayesian updating involves assigning relative importance to the prior and to the evidence. A weak prior is easily changed by even fairly modest evidence. On the other hand, it takes a lot of solid evidence to move a strong prior.
In this terminology, your pre-roll estimation of the probability of double sixes is a weak prior -- the evidence of an actual roll will totally overwhelm it. But your estimation of the correctness of the modern evolutionary theory is a strong prior -- it will take much convincing evidence to persuade you that the theory is not correct after all.
Of course, the posterior of a previous update becomes the prior of the next update.
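The weak-vs-strong-prior distinction above can be sketched numerically with a Beta-Binomial conjugate update (the specific prior parameters and the 2-successes-out-of-10 evidence below are illustrative assumptions, not anything from the comment):

```python
# Sketch of weak vs. strong priors via Beta-Binomial conjugate updating.
# A Beta(a, b) prior updated with s successes and f failures yields a
# Beta(a + s, b + f) posterior -- which then serves as the next prior.

def update_beta(alpha, beta, successes, failures):
    """Conjugate update: Beta prior plus Binomial evidence gives a Beta posterior."""
    return alpha + successes, beta + failures

def mean(alpha, beta):
    """Posterior mean of a Beta(alpha, beta) distribution."""
    return alpha / (alpha + beta)

# Weak prior: Beta(1, 1), i.e. uniform. Ten observations (2 successes,
# 8 failures) dominate it: the posterior mean lands near the data rate.
a, b = update_beta(1, 1, 2, 8)
print(mean(a, b))  # 3/12 = 0.25

# Strong prior: Beta(90, 10), prior mean 0.9. The same ten observations
# barely move it: the posterior mean stays close to the prior.
a, b = update_beta(90, 10, 2, 8)
print(mean(a, b))  # 92/110 ≈ 0.836
```

The same `update_beta` call applied to its own output also illustrates the point about posteriors becoming the next round's priors.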
Using this language, then, you are saying that prima facie evidence of someone's stupidity should be a minor update to the strong prior that she is actually a smart, reasonable, and coherent human being.
And I don't see why this should be so.
Oh, dear - that's not what I meant at all. I meant that - absent a strong prior - the utterance of a prima facie absurdity should not create a strong prior that the speaker is stupid, unreasonable, or incoherent. It's entirely possible that ten minutes of conversation will suffice to make a strong prior out of this weaker one - there's someone arguing for dualism on a webcomic forum I (in)frequent along the same lines as Chalmers' "hard problem of consciousness", and it took less than ten posts to establish pretty confidently that the same refutations would apply - but as the history of DIPS (defense-independent pitching statistics) shows, it's entirely possible for an idea to be as correct as "the earth is a sphere, not a plane" and nevertheless be taken as prima facie absurd.
(As the metaphor implies, DIPS is not quite correct, but it would be more accurate to describe its successors as "fixing DIPS" than as "showing that DIPS was completely wrongheaded".)
Oh, I agree with that.
What I am saying is that evidence of stupidity should lead you to raise your estimates of the probability that the speaker is stupid. The principle of charity should not prevent that from happening. Of course evidence of stupidity should not make you close the case, declare someone irretrievably stupid, and stop considering any further evidence.
As an aside, I treat how a person argues as a much better indicator of stupidity than what he argues. YMMV, of course.
...in the context during which they exhibited the behavior which generated said evidence, of course. In broader contexts, or other contexts? To a much lesser extent, and not (usually) strongly in the strong-prior sense, but again, yes. That you should always be capable of considering further evidence is - I am glad to say - so universally accepted a proposition in this forum that I do not bother to enunciate it, but I take no issue with drawing conclusions from a sufficient body of evidence.
Come to think, you might be amused by this fictional dialogue about a mendacious former politician, illustrating the ridiculousness of conflating "never assume that someone is arguing in bad faith" and "never assert that someone is arguing in bad faith". (The author also posted a sequel, if you enjoy the first.)
I'm afraid that I would have about as much luck barking like a duck as enunciating how I evaluate the intelligence (or reasonableness, or honesty, or...) of those I converse with. YMMV, indeed.
People tend to update too much in these circumstances: Fundamental attribution error
The fundamental attribution error is about underestimating the importance of external drivers (the particular situation, random chance, etc.) and overestimating the importance of internal factors (personality, beliefs, etc.) as an explanation for observed actions.
If a person in a discussion is spewing nonsense, it is rare that external factors are making her do it (other than a variety of mind-altering chemicals). The indicators of stupidity are NOT what position a person argues or how much knowledge about the subject she has -- it's how she argues it. And an inability to, e.g., follow basic logic is hard to attribute to external factors.
This discussion has gotten badly derailed. You are taking it that there is some robust fact about someone's lack of rationality or intelligence which may or may not be explained by internal or external factors.
The point is that you cannot make a reliable judgement about someone's rationality or intelligence unless you have understood what they are saying... and you cannot reliably understand what they are saying unless you treat it as if it were the product of a rational and intelligent person. You can go to "stupid" when all attempts have failed, but not before.
I disagree, I don't think this is true.
I think it's true, on roughly these grounds: taking yourself to understand what someone is saying entails thinking that almost all of their beliefs (I mean 'belief' in the broad sense, so as to include my beliefs about the colors of objects in the room) are true. The reason is that unless you assume almost all of a person's (relevant) beliefs are true, the possibility space for judgements about what they mean gets very big, very fast. So if 'generally understanding what someone is telling you' means having a fairly limited possibility space, you only get this on the assumption that the person talking to you has mostly true beliefs. This, of course, doesn't mean they have to be rational in the LW sense, or even very intelligent. The most stupid and irrational (in the LW sense) of us still have mostly true beliefs.
I guess the trick is to imagine what it would be to talk to someone who you thought had on the whole false beliefs. Suppose they said 'pass me the hammer'. What do you think they meant by that? Assuming they have mostly or all false beliefs relevant to the utterance, they don't know what a hammer is or what 'passing' involves. They don't know anything about what's in the room, or who you are, or what you are, or even if they took themselves to be talking to you, or talking at all. The possibility space for what they took themselves to be saying is too large to manage, much larger than, for example, the possibility space including all and only every utterance and thought that's ever been had by anyone. We can say things like 'they may have thought they were talking about cats or black holes or triangles' but even that assumes vastly more truth and reason in the person that we've assumed we can anticipate.
Generally speaking, understanding what a person means implies reconstructing their framework of meaning and reference that exists in their mind as the context to what they said.
Reconstructing such a framework does NOT require that you consider it (or the whole person) sane or rational.
Well, there are two questions here: 1) is it in principle necessary to assume your interlocutors are sane and rational, and 2) is it as a matter of practical necessity a fact that we always do assume our interlocutors are sane and rational. I'm not sure about the first one, but I am pretty sure about the second: the possibility space for reconstructing the meaning of someone speaking to you is only manageable if you assume that they're broadly sane, rational, and have mostly true beliefs. I'd be interested to know which of these you're arguing about.
Also, we should probably taboo 'sane' and 'rational'. People around here have a tendency to use these words in an exaggerated way to mean that someone has a kind of specific training in probability theory, statistics, biases, etc. Obviously people who have none of these things, like people living thousands of years ago, were sane and rational in the conventional sense of these terms, and they had mostly true beliefs even by any standard we would apply today.
The answers to your questions are no and no.
I don't think so. Two counter-examples:
I can discuss fine points of theology with someone without believing in God. For example, I can understand the meaning of the phrase "Jesus' self-sacrifice washes away the original sin" without accepting that Christianity is "mostly true" or "rational".
Consider a psychotherapist talking to a patient, let's say a delusional one. Understanding the delusion does not require the psychotherapist to believe that the patient is sane.
Mixing truth and rationality is a failure mode. To know whether someone's statement is true, you have to understand it, and to understand it, you have to assume the speaker's rationality.
It's also a failure mode to attach "irrational" directly to beliefs. A belief is rational if it can be supported by an argument, and you don't carry the space of all possible arguments around in your head.
You're not being imaginative enough: you're thinking about someone with almost all true beliefs (including true beliefs about what Christians tend to say), and a couple of sort of stand out false beliefs about how the universe works as a whole. I want you to imagine talking to someone with mostly false beliefs about the subject at hand. So you can't assume that by 'Jesus' self-sacrifice washes away the original sin' that they're talking about anything you know anything about because you can't assume they are connecting with any theology you've ever heard of. Or even that they're talking about theology. Or even objects or events in any sense you're familiar with.
I think, again, delusional people are remarkable for having some unaccountably false beliefs, not for having mostly false beliefs. People with mostly false beliefs, I think, wouldn't be recognizable even as being conscious or aware of their surroundings (because they're not!).
I believe this disagreement is testable by experiment.
Do elaborate.
If you would more reliably understand what people mean by specifically treating it as the product of a rational and intelligent person, then executing that hack should lead to your observing a much higher rate of rationality and intelligence in discussions than you would previously have predicted. If the thesis is true, many remarks which, using your earlier methodology, you would have dismissed as the product of diseased reasoning will prove to be sound upon further inquiry.
If, however, you execute the hack for a few months and discover no change in the rate at which you discover apparently-wrong remarks to admit to sound interpretations, then TheAncientGeek's thesis would fail the test.
You will also get less feedback along the lines of "you just don't get it".
True, although being told less often that you are missing the point isn't, in and of itself, all that valuable; the value is in getting the point of those who otherwise would have given up on you with a remark along those lines.
(Note that I say "less often"; I was recently told that this criticism of Tom Godwin's "The Cold Equations", which I had invoked in a discussion of "The Ones Who Walk Away From Omelas", missed the point of the story - to which I replied along the lines of, "I get the point, but I don't agree with it.")
That looks like a test of my personal ability to form correct first-impression estimates.
Also "will prove to be sound upon further inquiry" is an iffy part. In practice what usually happens is that statement X turns out to be technically true only under conditions A, B, and C, however in practice there is the effect Y which counterbalances X and the implementation of X is impractical for a variety of reasons, anyway. So, um, was statement X "sound"? X-/
Precisely.
Ah, I see. "Sound" is not the right word for what I mean; what I would expect to occur if the thesis is correct is that statements will prove to be apposite or relevant or useful - that is to say, valuable contributions in the context within which they were uttered. In the case of X, this would hold if the person proposing X believed that those conditions applied in the case described.
A concrete example would be someone who said, "you can divide by zero here" in reaction to someone being confused by a definition of the derivative of a function in terms of the limit of a ratio.
Because you are not engaged in establishing facts about how smart someone is, you are instead trying to establish facts about what they mean by what they say.