7EE1D988 comments on Self-Congratulatory Rationalism - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Comments (395)
Some people are just bad at explaining their ideas correctly (too hasty, didn't reread what they wrote, not a high enough verbal SAT, foreign mother tongue, inferential distance, etc.), others are just bad at reading and understanding others' ideas correctly (too hasty, didn't read the whole argument before replying, glossed over that one word which changed the whole meaning of a sentence, etc.).
I've seen many poorly explained arguments which I could understand as true or at least pointing in interesting directions, which were summarily ignored or shot down by uncharitable readers.
i tend to express ideas tersely, which counts as poorly-explained if my audience is expecting more verbiage, so they round me off to the nearest cliche and mostly downvote me
i have mostly stopped posting or commenting on lesswrong and stackexchange because of this
like, when i want to say something, i think "i can predict that people will misunderstand and downvote me, but i don't know what improvements i could make to this post to prevent this. sigh."
revisiting this on 2014-03-14, i consider that perhaps i am likely to discard parts of the frame message and possibly outer message - because, to me of course it's a message, and to me of course the meaning of (say) "belief" is roughly what http://wiki.lesswrong.com/wiki/Belief says it is
for example, i suspect that the use of more intuitively sensible grammar in this comment (mostly just a lack of capitalization) often discards the frame-message-bit of "i might be intelligent" (or ... something) that such people understand from messages (despite this being an incorrect thing to understand)
Well, you describe the problem as terseness.
If that's true, it suggests that one set of improvements might involve explaining your ideas more fully and providing more of your reasons for considering those ideas true and relevant and important.
Have you tried that?
If so, what has the result been?
-
In other words, you prefer brevity to clarity and being understood? Something's a little skewed here.
It sounds like you and TheOtherDave have both identified the problem. Assuming you know what the problem is, why not fix it?
It may be that you are incorrect about the cause of the problem, but it's easy enough to test your hypothesis. The cost is low and the value of the information gained would be high. Either you're right and brevity is your problem, in which case you should be more verbose when you wish to be understood. Or you're wrong and added verbosity would not make people less inclined to "round you off to the nearest cliche", in which case you could look for other changes to your writing that would help readers understand you better.
Well, I think that "be more verbose" is a little like "sell nonapples". A brief post can be expanded in many different directions, and it might not be obvious which directions would be helpful and which would be boring.
I understand this to mean that the only value you see to non-brevity is its higher success at manipulation.
Is that in fact what you meant?
-
What does brevity offer you that makes it worthwhile, even when it impedes communication?
Predicting how communication will fail is generally Really Hard, but it's a good opportunity to refine your models of specific people and groups of people.
improving signal to noise, holding the signal constant, is brevity
when brevity impedes communication, but only with a subset of people, then the reduced signal is because they're not good at understanding brief things, so it is worth not being brief with them, but it's not fun
I suspect that the issue is not terseness, but rather not understanding and bridging the inferential distance between you and your audience. It's hard for me to say more without a specific example.
I have found great value in re-reading my posts looking for possible similar-sounding cliches, and re-writing to make the post deliberately inconsistent with those.
For example, the previous sentence could be rounded off to the cliche "Avoid cliches in your writing". I tried to avoid that possible interpretation by including "deliberately inconsistent".
I like it - do you know if it works in face-to-face conversations?
This understates the case, even. At different times, an individual can be more or less prone to haste, laziness, or any of several possible sources of error, and at times, you yourself can commit any of these errors. I think the greatest value of a well-formulated principle of charity is that it leads to a general trend of "failure of communication -> correction of failure of communication -> valuable communication" instead of "failure of communication -> termination of communication".
Actually, there's another point you could make along the lines of Jay Smooth's advice about racist remarks, particularly the part starting at 1:23, when you are discussing something in 'public' (e.g. anywhere on the Internet). If I think my opposite number is making bad arguments (e.g. when she is proposing an a priori proof of the existence of a god), I can think of few more convincing avenues to demonstrate to all the spectators that she's full of it than by giving her every possible opportunity to reveal that her argument is not wrong.
Regardless of what benefit you are balancing against a cost, though, a useful principle of charity should emphasize that your failure to engage with someone you don't believe to be sufficiently rational is a matter of the cost of time, not the value of their contribution. Saying "I don't care what you think" will burn bridges with many non-LessWrongian folk; saying, "This argument seems like a huge time sink" is much less likely to.
So if I believe that someone is stupid, mindkilled, etc. and is not capable (at least at the moment) of contributing anything valuable, does this principle emphasize that I should not believe that, or that I should not tell that to this someone?
Depends. Have you tried charitable interpretations of what they are saying that don't make them stupid, or are you going with your initial reaction?
I'm thinking that charity should not influence epistemology. Adjusting your map for charitable reasons seems like the wrong thing to do.
I think you need to read up on the Principle of Charity and realise that it's about accurate communication, not some vague notion of niceness.
That's what my question upthread was about -- is the principle of charity as discussed in this thread a matter of my belief (=map) or is it only about communication?
Research and discover.
How else would you interpret this series of clarifying questions?
I can tell someone the answer, but they might not believe me. They might be better off researching it from reliable sources than trying to figure it out from yet another stupid internet argument.
It's both. You need charity to communicate accurately, and also to form accurate beliefs. The fact that people you haven't been charitable towards seem stupid to you is not reliable data.
It's not obvious to me that's the right distinction to make, but I do think that the principle of charity does actually result in a map shift relative to the default. That is, an epistemic principle of charity is a correction like one would make with the fundamental attribution error: "I have only seen one example of this person doing X, I should restrain my natural tendency to overestimate the resulting update I should make."
That is, if you have not used the principle of charity in reaching the belief that someone else is stupid or mindkilled, then you should not use that belief as reason to not apply the principle of charity.
What is the default? And is it everyone's default, or only the unenlightened ones', or whose?
This implies that the "default" map is wrong -- correct?
I don't quite understand that. When I'm reaching a particular belief, I basically do it to the best of my ability -- if I am aware of errors, biases, etc. I will try to correct them. Are you saying that the principle of charity is special in that regard -- that I should apply it anyway even if I don't think it's needed?
An attribution error is an attribution error -- if you recognize it you should fix it, and not apply global corrections regardless.
I am pretty sure that most humans are uncharitable in interpreting the skills, motives, and understanding of someone they see as a debate opponent, yes. This observation is basically the complement of the principle of charity- the PoC exists because "most people are too unkind here; you should be kinder to try to correct," and if you have somehow hit the correct level of kindness, then no further change is necessary.
I think that the principle of charity is like other biases.
This question seems just weird to me. How do you know you can trust your cognitive system that says "nah, I'm not being biased right now"? This calls to mind the statistical prediction rule results, where people would come up with all sorts of stories why their impression was more accurate than linear fits to the accumulated data- but, of course, those were precisely the times when they should have silenced their inner argument and gone with the more accurate rule. The point of these sorts of things is that you take them seriously, even when you generate rationalizations for why you shouldn't take them seriously!
(There are, of course, times when the rules do not apply, and not every argument against a counterbiasing technique is a rationalization. But you should be doubly suspicious against such arguments.)
It's weird to me that the question is weird to you X-/
You know when and to what degree you can trust your cognitive system in the usual way: you look at what it tells you and test it against the reality. In this particular case you check whether later, more complete evaluations corroborate your initial perception or there is a persistent bias.
If you can't trust your cognitive system then you get all tangled up in self-referential loops and really have no basis on which to decide by how much to correct your thinking or even which corrections to apply.
What is the reality about whether you interpreted someone correctly? When do you hit the bedrock of Real Meaning?
To me, a fundamental premise of the bias-correction project is "you are running on untrustworthy hardware." That is, biases are not just of academic interest, and not just ways that other people make mistakes, but known flaws that you personally should attend to with regards to your own mind.
There's more, but I think in order to explain that better I should jump to this first:
You can ascribe different parts of your cognitive system different levels of trust, and build a hierarchy out of them. To illustrate a simple example, I can model myself as having a 'motive-detection system,' which is normally rather accurate but loses accuracy when used on opponents. Then there's a higher-level system that is a 'bias-detection system' which detects how much accuracy is lost when I use my motive-detection system on opponents. Because this is hierarchical, I think it bottoms out in a finite number of steps; I can use my trusted 'statistical inference' system to verify the results from my 'bias-detection' system, which then informs how I use the results from my 'motive-detection system.'
Suppose I just had the motive-detection system, and learned of PoC. The wrong thing to do would be to compare my motive-detection system to itself, find no discrepancy, and declare myself unbiased. "All my opponents are malevolent or idiots, because I think they are." The right thing to do would be to construct the bias-detection system, and actively behave in such a way to generate more data to determine whether or not my motive-detection system is inaccurate, and if so, where and by how much. Only after a while of doing this can I begin to trust myself to know whether or not the PoC is needed, because by then I've developed a good sense of how unkind I become when considering my opponents.
If I mistakenly believe that my opponents are malevolent idiots, I can only get out of that hole by either severing the link between my belief in their evil stupidity and my actions when discussing with them, or by discarding that belief and seeing if the evidence causes it to regrow. I word it this way because one needs to move to the place of uncertainty, and then consider the hypotheses, rather than saying "Is my belief that my opponents are malevolent idiots correct? Well, let's consider all the pieces of evidence that come to mind right now: yes, they are evil and stupid! Myth confirmed."
Which brings us to here:
Your cognitive system has a rather large degree of control over the reality that you perceive; to a large extent, that is the point of having a cognitive system. Unless the 'usual way' of verifying the accuracy of your cognitive system takes that into account, which it does not do by default for most humans, then this will not remove most biases. For example, could you detect confirmation bias by checking whether more complete evaluations corroborate your initial perception? Not really- you need to have internalized the idea of 'confirmation bias' in order to define 'more complete evaluations' to mean 'evaluations where I seek out disconfirming evidence also' rather than just 'evaluations where I accumulate more evidence.'
[Edit]: On rereading this comment, the primary conclusion I was going for- that PoC encompasses both procedural and epistemic shifts, which are deeply entwined with each other- is there but not as clear as I would like.
Before I get into the response, let me make a couple of clarifying points.
First, the issue somewhat drifted from "to what degree should you update on the basis of what looks stupid" to "how careful you need to be about updating your opinion of your opponents in an argument". I am not primarily talking about arguments, I'm talking about the more general case of observing someone being stupid and updating on this basis towards the "this person is stupid" hypothesis.
Second, my evaluation of stupidity is based more on how a person argues rather than on what position he holds. To give an example, I know some smart people who have argued against evolution (not in the sense that it doesn't exist, but rather in the sense that the current evolutionary theory is not a good explanation for a bunch of observables). On the other hand, if someone comes in and goes "ha ha duh of course evolution is correct my textbook says so what u dumb?", well then...
I don't like this approach. Mainly this has to do with the fact that unrolling "untrustworthy" makes it very messy.
As you yourself point out, a mind is not a single entity. It is useful to treat it as a set or an ecology of different agents which have different capabilities, often different goals, and typically pull in different directions. Given this, who is doing the trusting or distrusting? And given the major differences between the agents, what does "trust" even mean?
I find this expression is usually used to mean that the human mind is not a simple-enough logical calculating machine. My first response to this is duh! and the second one is that this is a good thing.
Consider an example. Alice, a hetero girl, meets Bob at a party. Bob looks fine, speaks the right words, etc. and Alice's conscious mind finds absolutely nothing wrong with the idea of dragging him into her bed. However her gut instincts scream at her to run away fast -- for no good reason that her consciousness can discern. Basically she has a really bad feeling about Bob for no articulable reason. Should she tell herself her hardware is untrustworthy and invite Bob overnight?
True, which is why I want to compare to reality, not to itself. If you decided that Mallory is a malevolent idiot and still happen to observe him later on, well, does he behave like one? Does additional evidence support your initial reaction? If it does, you can probably trust your initial reactions more. If it does not, you can't and should adjust.
Yes, I know about anchoring and such. But again, at some point you have to trust yourself (or some modules of yourself) because if you can't there is just no firm ground to stand on at all.
I don't see why. Just do the usual Bayesian updating on the evidence. If the weight of the accumulated evidence points out that they are not, well, update. Why do you have to discard your prior in order to do that?
Yep. Which is why the Sequences, the Kahneman & Tversky book, etc. are all very useful. But, as I've been saying in my responses to RobinZ, for me this doesn't fall under the principle of charity, this falls under the principle of "don't be an idiot yourself".
I understand PoC to only apply in the latter case, with a broad definition of what constitutes an argument. A teacher, for example, likely should not apply the PoC to their students' answers, and instead be worried about the illusion of transparency and the double illusion of transparency. (Checking the ancestral comment, it's not obvious to me that you wanted to switch contexts- 7EE1D988 and RobinZ both look like they're discussing conversations or arguments- and you may want to be clearer in the future about context changes.)
Here, I think you just need to make fundamental attribution error corrections (as well as any outgroup bias corrections, if those apply).
Presumably, whatever module sits on the top of the hierarchy (or sufficiently near the top of the ecological web).
From just the context given, no, she should trust her intuition. But we could easily alter the context so that she should tell herself that her hardware is untrustworthy and override her intuition- perhaps she has social anxiety or paranoia she's trying to overcome, and a trusted (probably female) friend doesn't get the same threatening vibe from Bob.
You don't directly perceive reality, though, and your perceptions are determined in part by your behavior, in ways both trivial and subtle. Perhaps Mallory is able to read your perception of him from your actions, and thus behaves cruelly towards you?
As a more mathematical example, in the iterated prisoner's dilemma with noise, TitForTat performs poorly against itself, whereas a forgiving TitForTat performs much better. PoC is the forgiveness that compensates for the noise.
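The noisy-IPD claim can be checked with a short simulation. This is a minimal sketch, not a rigorous tournament: the payoff matrix is the standard (3, 0, 5, 1) one, and the 5% noise rate and 20% forgiveness probability are my illustrative assumptions, not figures from the thread.

```python
import random

# Payoffs for (my_move, their_move): C = cooperate, D = defect.
PAYOFF = {('C', 'C'): 3, ('C', 'D'): 0, ('D', 'C'): 5, ('D', 'D'): 1}

def play(strategy_a, strategy_b, rounds=1000, noise=0.05, seed=0):
    """Average per-round payoff for each player, where each intended
    move is flipped with probability `noise` before being executed."""
    rng = random.Random(seed)
    hist_a, hist_b = [], []  # moves as actually executed
    score_a = score_b = 0
    for _ in range(rounds):
        a = strategy_a(other=hist_b, rng=rng)
        b = strategy_b(other=hist_a, rng=rng)
        if rng.random() < noise:
            a = 'D' if a == 'C' else 'C'
        if rng.random() < noise:
            b = 'D' if b == 'C' else 'C'
        hist_a.append(a)
        hist_b.append(b)
        score_a += PAYOFF[(a, b)]
        score_b += PAYOFF[(b, a)]
    return score_a / rounds, score_b / rounds

def tit_for_tat(other, rng):
    # Cooperate first, then copy the opponent's last executed move.
    return 'C' if not other else other[-1]

def generous_tft(other, rng, forgive=0.2):
    # Like tit-for-tat, but forgive a defection 20% of the time,
    # which breaks the retaliation echoes that noise sets off.
    if not other or other[-1] == 'C':
        return 'C'
    return 'C' if rng.random() < forgive else 'D'

tft_score, _ = play(tit_for_tat, tit_for_tat)
gtft_score, _ = play(generous_tft, generous_tft)
print(tft_score, gtft_score)  # generous TFT recovers from noise-induced feuds
```

A single noise flip locks two TitForTat players into alternating (or mutual) defection until another flip rescues them, dragging the average well below the mutual-cooperation payoff of 3; the forgiving variant escapes those feuds, which is the sense in which PoC-as-forgiveness compensates for noise.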
This is discussed a few paragraphs ago, but this is a good opportunity to formulate it in a way that is more abstract but perhaps clearer: claims about other people's motives or characteristics are often claims about counterfactuals or hypotheticals. Suppose I believe "If I were to greet to Mallory, he would snub me," and thus in order to avoid the status hit I don't say hi to Mallory. In order to confirm or disconfirm that belief, I need to alter my behavior; if I don't greet Mallory, then I don't get any evidence!
(For the PoC specifically, the hypothetical is generally "if I put extra effort into communicating with Mallory, that effort would be wasted," where the PoC argues that you've probably overestimated the probability that you'll waste effort. This is why RobinZ argues for disengaging with "I don't have the time for this" rather than "I don't think you're worth my time.")
I think that "don't be an idiot" is far too terse a package. It's like boiling down moral instruction to "be good," without any hint that "good" is actually a tremendously complicated concept, and being it a difficult endeavor which is aided by many different strategies. If an earnest youth came to you and asked how to think better, would you tell them just "don't be an idiot" or would you point them to a list of biases and counterbiasing principles?
tl;dr: The principle of charity corrects biases you're not aware of.
I see that my conception of the "principle of charity" is either non-trivial to articulate or so inchoate as to be substantially altered by my attempts to do so. Bearing that in mind:
The principle of charity isn't a propositional thesis, it's a procedural rule, like the presumption of innocence. It exists because the cost of false positives is high relative to the cost of reducing false positives: the shortest route towards correctness in many cases is the instruction or argumentation of others, many of whom would appear, upon initial contact, to be stupid, mindkilled, dishonest, ignorant, or otherwise unreliable sources upon the subject in question. The behavior proposed by the principle of charity is intended to result in your being able to reliably distinguish between failures of communication and failures of reasoning.
My remark took the above as a basis and proposed behavior to execute in cases where the initial remark strongly suggests that the speaker is thinking irrationally (e.g. an assertion that the modern evolutionary synthesis is grossly incorrect) and your estimate of the time required to evaluate the actual state of the speaker's reasoning processes was more than you are willing to spend. In such a case, what the principle of charity implies are two things:
Minor tyop fix T1503-4.
I don't see it as self-evident. Or, more precisely, in some situations it is, and in other situations it is not.
You are saying (a bit later in your post) that the principle of charity implies two things. The second one is a pure politeness rule and it doesn't seem to me that the fashion of withdrawing from a conversation will help me "reliably distinguish" anything.
As to the first point, you are basically saying I should ignore evidence (or, rather, shift the evidence into the prior and refuse to estimate the posterior). That doesn't help me reliably distinguish anything either.
In fact, I don't see why there should be a particular exception here ("a procedural rule") to the bog-standard practice of updating on evidence. If my updating process is incorrect, I should fix it and not paper it over with special rules for seemingly-stupid people. If it is reasonably OK, I should just go ahead and update. That will not necessarily result in either a "closed question" or a "large posterior" -- it all depends on the particulars.
You're right: it's not self-evident. I'll go ahead and post a followup comment discussing what sort of evidential support the assertion has.
My usage of the terms "prior" and "posterior" was obviously mistaken. What I wanted to communicate with those terms was communicated by the analogies to the dice cup and to the scientific theory: it's perfectly possible for two hypotheses to have the same present probability but different expectations of future change to that probability. I have high confidence that an inexpensive test - lifting the dice cup - will change my beliefs about the value of the die roll by many orders of magnitude, and low confidence that any comparable test exists to affect my confidence regarding the scientific theory.
I think you are talking about what's in local parlance called a "weak prior" vs a "strong prior". Bayesian updating involves assigning relative importance to the prior and to the evidence. A weak prior is easily changed by even not very significant evidence. On the other hand, it takes a lot of solid evidence to move a strong prior.
In this terminology, your pre-roll estimation of the probability of double sixes is a weak prior -- the evidence of an actual roll will totally overwhelm it. But your estimation of the correctness of the modern evolutionary theory is a strong prior -- it will take much convincing evidence to persuade you that the theory is not correct after all.
Of course, the posterior of a previous update becomes the prior of the next update.
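The weak-prior/strong-prior distinction can be made concrete with one Bayesian update in odds form. The specific numbers (a 50% prior for the die guess, a 99.9% prior for the theory, a likelihood ratio of 0.1) are illustrative assumptions of mine, chosen only to show the asymmetry.

```python
def update(prior, likelihood_ratio):
    """Posterior probability after one piece of evidence, using the
    odds form of Bayes' theorem: posterior_odds = prior_odds * LR."""
    prior_odds = prior / (1 - prior)
    post_odds = prior_odds * likelihood_ratio
    return post_odds / (1 + post_odds)

# The same evidence (LR = 0.1, i.e. 10:1 against the hypothesis)
# demolishes a weak prior but barely dents a strong one.
weak_prior = 0.5      # pre-roll guess about a die
strong_prior = 0.999  # confidence in a well-tested theory

print(update(weak_prior, 0.1))    # ~0.09: weak prior overwhelmed
print(update(strong_prior, 0.1))  # ~0.99: strong prior barely moved
```

This is the sense in which the actual roll "totally overwhelms" the pre-roll estimate while the evolutionary theory survives a single piece of contrary evidence: the evidence is identical, only the prior odds differ.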
Using this language, then, you are saying that prima facie evidence of someone's stupidity should be a minor update to the strong prior that she is actually a smart, reasonable, and coherent human being.
And I don't see why this should be so.
Because you are not engaged in establishing facts about how smart someone is, you are instead trying to establish facts about what they mean by what they say.
People tend to update too much in these circumstances: Fundamental attribution error
The fundamental attribution error is about underestimating the importance of external drivers (the particular situation, random chance, etc.) and overestimating the importance of internal factors (personality, beliefs, etc.) as an explanation for observed actions.
If a person in a discussion is spewing nonsense, it is rare that external factors are making her do it (other than a variety of mind-altering chemicals). The indicators of stupidity are NOT what position a person argues or how much knowledge about the subject she has -- it's how she does it. And inability e.g. to follow basic logic is hard to attribute to external factors.
This discussion has got badly derailed. You are taking it that there is some robust fact about someone's lack of rationality or intelligence which may or may not be explained by internal or external factors.
The point is that you cannot make a reliable judgement about someone's rationality or intelligence unless you have understood what they are saying... and you cannot reliably understand what they are saying unless you treat it as if it were the product of a rational and intelligent person. You can go to "stupid" when all attempts have failed, but not before.
Oh, dear - that's not what I meant at all. I meant that - absent a strong prior - the utterance of a prima facie absurdity should not create a strong prior that the speaker is stupid, unreasonable, or incoherent. It's entirely possible that ten minutes of conversation will suffice to make a strong prior out of this weaker one - there's someone arguing for dualism on a webcomic forum I (in)frequent along the same lines as Chalmers' "hard problem of consciousness", and it took less than ten posts to establish pretty confidently that the same refutations would apply - but as the history of DIPS (defense-independent pitching statistics) shows, it's entirely possible for an idea to be as correct as "the earth is a sphere, not a plane" and nevertheless be taken as prima facie absurd.
(As the metaphor implies, DIPS is not quite correct, but it would be more accurate to describe its successors as "fixing DIPS" than as "showing that DIPS was completely wrongheaded".)
Oh, I agree with that.
What I am saying is that evidence of stupidity should lead you to raise your estimates of the probability that the speaker is stupid. The principle of charity should not prevent that from happening. Of course evidence of stupidity should not make you close the case, declare someone irretrievably stupid, and stop considering any further evidence.
As an aside, I treat how a person argues as a much better indicator of stupidity than what he argues. YMMV, of course.
...in the context during which they exhibited the behavior which generated said evidence, of course. In broader contexts, or other contexts? To a much lesser extent, and not (usually) strongly in the strong-prior sense, but again, yes. That you should always be capable of considering further evidence is - I am glad to say - so universally accepted a proposition in this forum that I do not bother to enunciate it, but I take no issue with drawing conclusions from a sufficient body of evidence.
Come to think, you might be amused by this fictional dialogue about a mendacious former politician, illustrating the ridiculousness of conflating "never assume that someone is arguing in bad faith" and "never assert that someone is arguing in bad faith". (The author also posted a sequel, if you enjoy the first.)
I'm afraid that I would have about as much luck barking like a duck as enunciating how I evaluate the intelligence (or reasonableness, or honesty, or...) of those I converse with. YMMV, indeed.
The prior comment leads directly into this one: upon what grounds do I assert that an inexpensive test exists to change my beliefs about the rationality of an unfamiliar discussant? I realize that it is not true in the general case that the plural of anecdote is data, and much of the following lacks citations, but:
In other words, I do not often see the case in which performing the tests implied by the principle of charity - e.g. "are you saying [paraphrase]?" - is wasteful, and I frequently see cases where failing to do so has been.
What you are talking about doesn't fall under the principle of charity (in my interpretation of it). It falls under the very general rubric of "don't be stupid yourself".
In particular, considering that the speaker expresses his view within a framework which is different from your default framework is not an application of the principle of charity -- it's an application of the principle "don't be stupid, of course people talk within their frameworks, not within your framework".
I might be arguing for something different than your principle of charity. What I am arguing for - and I realize now that I haven't actually explained a procedure, just motivations for one - is along the following lines:
When somebody says something prima facie wrong, there are several possibilities, both regarding their intended meaning:
...and your ability to infer such:
What my interpretation of the principle of charity suggests as an elementary course of action in this situation is, with an appropriate degree of polite confusion, to ask for clarification or elaboration, and to accompany this request with paraphrases of the most likely interpretations you can identify of their remarks excluding the ones I marked with asterisks.
Depending on their actual intent, this has a good chance of making them:
In the first three or four cases, you have managed to advance the conversation with a well-meaning discussant without insult; in the latter two or three, you have thwarted the goals of an ill-intentioned one - especially, in the last case, because you haven't allowed them the option of distracting everyone from your refutations by claiming you insulted them. (Even if they do so claim, it will be obvious that they have no just cause to be.)
I say this falls under the principle of charity because it involves (a) granting them, at least rhetorically, the best possible motives, and (b) giving them enough of your time and attention to seek engagement with their meaning, not just a lazy gloss of their words.
Minor formatting edit.
Belatedly: I recently discovered that in 2011 I posted a link to an essay on debating charitably by pdf23ds a.k.a. Chris Capel - this is MichaelBishop's summary and this is a repost of the text (the original site went down some time ago). I recall endorsing Capel's essay unreservedly last time I read it; I would be glad to discuss the essay, my prior comments, or any differences that exist between the two if you wish.
I'll say it again: PoC doesn't mean "believe everyone is sane and intelligent", it means "treat everyone's comments as though they were made by a sane, intelligent person".
I don't like this rule. My approach is simpler: attempt to understand what the person means. This does not require me to treat him as sane or intelligent.
How do you know how many mistakes you are or aren't making?
The PoC is a way of breaking down "understand what the other person says" into smaller steps, not something entirely different. Treating your own mental processes as a black box that always delivers the right answer is a great way to stay in the grip of bias.
I.e., it's a defeasible assumption. If you fail, you have evidence that it was a dumb comment. If you succeed, you have evidence it wasn't. Either way, you have evidence, and you are not sitting in an echo chamber where your beliefs about people's dumbness go forever untested because you reject out of hand anything that sounds superficially dumb, or was made by someone you have labelled, however unjustly, as dumb.
That's fine. I have limited information processing capacity -- my opportunity costs for testing other people's dumbness are fairly high.
In the information age I don't see how anyone can operate without the "this is too stupid to waste time on" pre-filter.
The PoC tends to be advised in the context of philosophy, where there is a background assumption of infinite amounts of time to consider things. The resource-constrained version would be to interpret comments charitably once you have, for whatever reason, got into a discussion... with the corollary of reserving some space for "I might be wrong" where you haven't had the resources to test the hypothesis.
LOL. While ars may be longa, vita is certainly brevis. This is a silly assumption, better suited for theology, perhaps -- it, at least, promises infinite time. :-)
If I were living in the English countryside in the XVIII century I might have a different opinion on the matter, but I do not.
It's not a binary either-or situation. I am willing to interpret comments charitably according to my (updateable) prior of how knowledgeable, competent, and reasonable the writer is. In some situations I would stop and ponder, in others I would roll my eyes and move on.
As I operationalize it, that definition effectively waters down the POC to a degree I suspect most POC proponents would be unhappy with.
Sane, intelligent people occasionally say wrong things; in fact, because of selection effects, it might even be that most of the wrong things I see & hear in real life come from sane, intelligent people. So even if I were to decide that someone who's just made a wrong-sounding assertion were sane & intelligent, that wouldn't lead me to treat the assertion substantially more charitably than I otherwise would (and I suspect that the kind of person who likes the(ir conception of the) POC might well say I were being "uncharitable").
Edit: I changed "To my mind" to "As I operationalize it". Also, I guess a shorter form of this comment would be: operationalized like that, I think I effectively am applying the POC already, but it doesn't feel like it from the inside, and I doubt it looks like it from the outside.
You have uncharitably interpreted my formulation to mean "treat everyone's comments as though they were made by a sane, intelligent person who may or may not have been having an off day". What kind of guideline is that?
The charitable version would have been "treat everyone's comments as though they were made by someone sane and intelligent at the time".
(I'm giving myself half a point for anticipating that someone might reckon I was being uncharitable.)
A realistic one.
The thing is, that version actually sounds less charitable to me than my interpretation. Why? Well, I see two reasonable ways to interpret your latest formulation.
The first is to interpret "sane and intelligent" as I normally would, as a property of the person, in which case I don't understand how appending "at the time" makes a meaningful difference. My earlier point that sane, intelligent people say wrong things still applies. Whispering in my ear, "no, seriously, that person who just said the dumb-sounding thing is sane and intelligent right now" is just going to make me say, "right, I'm not denying that; as I said, sanity & intelligence aren't inconsistent with saying something dumb".
The second is to insist that "at the time" really is doing some semantic work here, indicating that I need to interpret "sane and intelligent" differently. But what alternative interpretation makes sense in this context? The obvious alternative is that "at the time" is drawing my attention to whatever wrong-sounding comment was just made. But then "sane and intelligent" is really just a camouflaged assertion of the comment's worthiness, rather than the claimant's, which reduces this formulation of the POC to "treat everyone's comments as though the comments are cogent".
The first interpretation is surely not your intended one because it's equivalent to one you've ruled out. So presumably I have to go with the second interpretation, but it strikes me as transparently uncharitable, because it sounds like a straw version of the POC ("oh, so I'm supposed to treat all comments as cogent, even if they sound idiotic?").
The third alternative, of course, is that I'm overlooking some third sensible interpretation of your latest formulation, but I don't see what it is; your comment's too pithy to point me in the right direction.
But not one that tells you unambiguously what to do, ie not a usable guideline at all.
There's a lot of complaint about this heuristic along the lines that it doesn't guarantee perfect results... i.e., that it's a heuristic.
And now there is the complaint that it's not realistic, that it doesn't reflect reality.
Ideal rationalists can stop reading now.
Everybody else: you're biased. Specifically, overconfident. Overconfidence makes people overestimate their ability to understand what people are saying, and underestimate the rationality of others. The PoC is a heuristic which corrects both. As a heuristic, an approximate method, it is based on the principle that overshooting the amount of sense people are making is better than undershooting. Overshooting would only be a problem if there were some goldilocks alternative, some way of getting things exactly right. There isn't. The voice in your head that tells you you are doing just fine is the voice of your bias.
Yep.
You have assumed that cannot be the correct interpretation of the PoC, without saying why. In light of your other comments, it could well be that you are assuming that the PoC can only be true by correspondence to reality, or false by lack of correspondence. But norms, guidelines, heuristics, and advice lie on an orthogonal axis to true/false: they are guides to action, not passive reflections. Their equivalent of the true/false axis is the works/does-not-work axis. So would adoption of the PoC work as a way of understanding people, and of calibrating your confidence levels? That is the question.
A small addendum, that I realized I omitted from my prior arguments in favor of the principle of charity:
Because I make a habit of asking for clarification when I don't understand, offering clarification when not understood, and preferring "I don't agree with your assertion" to "you are being stupid", people are happier to talk to me. Among the costs of always responding to what people say instead of your best understanding of what they mean - especially if you are quick to dismiss people when their statements are flawed - is that talking to you becomes costly: I have to word my statements precisely to ensure that I have not said something I do not mean, meant something I did not say, or made claims you will demand support for without support. If, on the other hand, I am confident that you will gladly allow me to correct my errors of presentation, I can simply speak, and fix anything I say wrong as it comes up.
Which, in turn, means that I can learn from a lot of people who would not want to speak to me otherwise.
Again: I completely agree that you should make your best effort to understand what other people actually mean. I do not call this charity -- it sounds like SOP and "just don't be an idiot yourself" to me.
I do not see what you are describing as being the standard PoC at all. May I suggest you call it something else.
How does the thing I am vaguely waving my arms at differ from the "standard PoC"?
If you haven't attempted to falsify your belief by being charitable, then you should stop believing it. It's bad data.