
Comment author: Lumifer 25 April 2014 12:52:59AM 0 points [-]

perhaps she has social anxiety or paranoia she's trying to overcome

That's not the case where she shouldn't trust her hardware -- that's the case where her software has a known bug.

In order to confirm or disconfirm that belief, I need to alter my behavior; if I don't greet Mallory, then I don't get any evidence!

Sure, so you have to trade off your need to discover more evidence against the cost of doing so. Sometimes it's worth it, sometimes not.

where the PoC argues that you've probably overestimated the probability that you'll waste effort.

Really? For a randomly sampled person, my prior already is that talking to him/her will be wasted effort. And if in addition to that he offers evidence of stupidity, well... I think you underappreciate opportunity costs -- there are a LOT of people around and most of them aren't very interesting.

I think that "don't be an idiot" is far too terse a package.

Yes, but properly unpacking it will take between one and several books at best :-/

Comment author: Vaniver 25 April 2014 02:32:49AM 0 points [-]

That's not the case where she shouldn't trust her hardware -- that's the case where her software has a known bug.

For people, is there a meaningful difference between the two? The primary difference I see between "your software is buggy" and "your hardware is untrustworthy" is that the first suggests the solution is easier: just patch the bug! But it is rarely enough to just know that the problem exists, or what steps you should take to overcome it; generally one must train oneself into being someone who copes effectively with the problem (or, more rarely, into someone who does not have the problem at all).

I think you underappreciate opportunity costs -- there are a LOT of people around and most of them aren't very interesting.

I agree there are opportunity costs; I see value in walled gardens. But just because there is value doesn't mean you're not overestimating that value, and we're back to my root issue: your response to "your judgment of other people might be flawed" seems to be "but I've judged them already, why should I do it twice?"

Yes, but properly unpacking it will take between one and several books at best :-/

Indeed; I have at least a shelf and growing devoted to decision-making and ameliorative psychology.

Comment author: RobinZ 24 April 2014 09:52:24PM 1 point [-]

I think that "don't be an idiot" is far too terse a package.

In Lumifer's defense, this thread demonstrates pretty conclusively that "the principle of charity" is also far too terse a package. (:

Comment author: Vaniver 24 April 2014 11:57:35PM 1 point [-]

For an explanation, agreed; for a label, disagreed. That is, I think it's important to reduce "don't be an idiot" into its many subcomponents, and to identify them separately whenever possible.

Comment author: Lumifer 24 April 2014 04:10:39PM *  0 points [-]

Before I get into the response, let me make a couple of clarifying points.

First, the issue somewhat drifted from "to what degree should you update on the basis of what looks stupid" to "how careful you need to be about updating your opinion of your opponents in an argument". I am not primarily talking about arguments, I'm talking about the more general case of observing someone being stupid and updating on this basis towards the "this person is stupid" hypothesis.

Second, my evaluation of stupidity is based more on how a person argues rather than on what position he holds. To give an example, I know some smart people who have argued against evolution (not in the sense that it doesn't exist, but rather in the sense that the current evolutionary theory is not a good explanation for a bunch of observables). On the other hand, if someone comes in and goes "ha ha duh of course evolution is correct my textbook says so what u dumb?", well then...

"you are running on untrustworthy hardware."

I don't like this approach. Mainly this has to do with the fact that unrolling "untrustworthy" makes it very messy.

As you yourself point out, a mind is not a single entity. It is useful to treat it as a set or an ecology of different agents which have different capabilities, often different goals, and typically pull in different directions. Given this, who is doing the trusting or distrusting? And given the major differences between the agents, what does "trust" even mean?

I find this expression is usually used to mean that the human mind is not a simple-enough logical calculating machine. My first response to this is "duh!" and the second is that this is a good thing.

Consider an example. Alice, a hetero girl, meets Bob at a party. Bob looks fine, speaks the right words, etc. and Alice's conscious mind finds absolutely nothing wrong with the idea of dragging him into her bed. However her gut instincts scream at her to run away fast -- for no good reason that her consciousness can discern. Basically she has a really bad feeling about Bob for no articulable reason. Should she tell herself her hardware is untrustworthy and invite Bob overnight?

The wrong thing to do would be to compare my motive-detection system to itself, find no discrepancy, and declare myself unbiased.

True, which is why I want to compare to reality, not to itself. If you decided that Mallory is a malevolent idiot and still happen to observe him later on, well, does he behave like one? Does additional evidence support your initial reaction? If it does, you can probably trust your initial reactions more. If it does not, you can't and should adjust.

Yes, I know about anchoring and such. But again, at some point you have to trust yourself (or some modules of yourself) because if you can't there is just no firm ground to stand on at all.

If I mistakenly believe that my opponents are malevolent idiots, I can only get out of that hole by ... discarding that belief and seeing if the evidence causes it to regrow.

I don't see why. Just do the usual Bayesian updating on the evidence. If the weight of the accumulated evidence points out that they are not, well, update. Why do you have to discard your prior in order to do that?
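
A minimal sketch of that kind of updating, with purely hypothetical numbers chosen only to show the mechanics:

```python
# Bayesian updating of "Mallory is a malevolent idiot" as later evidence
# comes in. The prior is revised, not discarded. Numbers are hypothetical.

def update(prior, p_evidence_given_h, p_evidence_given_not_h):
    """Return P(H | evidence) via Bayes' rule."""
    numerator = prior * p_evidence_given_h
    denominator = numerator + (1 - prior) * p_evidence_given_not_h
    return numerator / denominator

p = 0.7  # prior after the initial bad impression
# Assume each later observation of reasonable behaviour is three times as
# likely if the hypothesis is false as if it is true.
for _ in range(3):
    p = update(p, p_evidence_given_h=0.2, p_evidence_given_not_h=0.6)
    print(round(p, 3))  # roughly 0.44, 0.21, 0.08
```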

you need to have internalized the idea of 'confirmation bias' in order to define 'more complete evaluations' to mean 'evaluations where I seek out disconfirming evidence also' rather than just 'evaluations where I accumulate more evidence.'

Yep. Which is why the Sequences, the Kahneman & Tversky book, etc. are all very useful. But, as I've been saying in my responses to RobinZ, for me this doesn't fall under the principle of charity, this falls under the principle of "don't be an idiot yourself".

Comment author: Vaniver 24 April 2014 06:56:56PM 1 point [-]

First, the issue somewhat drifted from "to what degree should you update on the basis of what looks stupid" to "how careful you need to be about updating your opinion of your opponents in an argument".

I understand PoC to only apply in the latter case, with a broad definition of what constitutes an argument. A teacher, for example, likely should not apply the PoC to their students' answers, and should instead be worried about the illusion of transparency and the double illusion of transparency. (Checking the ancestral comment, it's not obvious to me that you wanted to switch contexts- 7EE1D988 and RobinZ both look like they're discussing conversations or arguments- and you may want to be clearer in the future about context changes.)

I am not primarily talking about arguments, I'm talking about the more general case of observing someone being stupid and updating on this basis towards the "this person is stupid" hypothesis.

Here, I think you just need to make fundamental attribution error corrections (as well as any outgroup bias corrections, if those apply).

Given this, who is doing the trusting or distrusting?

Presumably, whatever module sits on the top of the hierarchy (or sufficiently near the top of the ecological web).

Should she tell herself her hardware is untrustworthy and invite Bob overnight?

From just the context given, no, she should trust her intuition. But we could easily alter the context so that she should tell herself that her hardware is untrustworthy and override her intuition- perhaps she has social anxiety or paranoia she's trying to overcome, and a trusted (probably female) friend doesn't get the same threatening vibe from Bob.

True, which is why I want to compare to reality, not to itself. If you decided that Mallory is a malevolent idiot and still happen to observe him later on, well, does he behave like one?

You don't directly perceive reality, though, and your perceptions are determined in part by your behavior, in ways both trivial and subtle. Perhaps Mallory is able to read your perception of him from your actions, and thus behaves cruelly towards you?

As a more mathematical example, in the iterated prisoner's dilemma with noise, TitForTat performs poorly against itself, whereas a forgiving TitForTat performs much better. PoC is the forgiveness that compensates for the noise.
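
A rough simulation of that claim, as a sketch only -- the payoff matrix is the standard one, but the noise rate and forgiveness probability are arbitrary illustrative choices:

```python
# Iterated prisoner's dilemma with noise: plain TitForTat against itself
# gets stuck in retaliation spirals after a noisy defection, while a
# forgiving variant recovers cooperation and typically scores higher.
import random

PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(opponent_last):
    return "C" if opponent_last is None else opponent_last

def forgiving_tft(opponent_last, forgive=0.2):
    if opponent_last == "D" and random.random() < forgive:
        return "C"  # occasionally forgive an apparent defection
    return tit_for_tat(opponent_last)

def average_score(strategy, rounds=10000, noise=0.05, seed=0):
    random.seed(seed)
    last_a = last_b = None
    total = 0
    for _ in range(rounds):
        a, b = strategy(last_b), strategy(last_a)
        # Noise: each intended move is flipped with small probability.
        if random.random() < noise:
            a = "D" if a == "C" else "C"
        if random.random() < noise:
            b = "D" if b == "C" else "C"
        total += PAYOFF[(a, b)][0]
        last_a, last_b = a, b
    return total / rounds

print("TitForTat vs itself:    ", average_score(tit_for_tat))
print("Forgiving TFT vs itself:", average_score(forgiving_tft))
```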

I don't see why.

This is discussed a few paragraphs ago, but this is a good opportunity to formulate it in a way that is more abstract but perhaps clearer: claims about other people's motives or characteristics are often claims about counterfactuals or hypotheticals. Suppose I believe "If I were to greet Mallory, he would snub me," and thus, in order to avoid the status hit, I don't say hi to Mallory. In order to confirm or disconfirm that belief, I need to alter my behavior; if I don't greet Mallory, then I don't get any evidence!

(For the PoC specifically, the hypothetical is generally "if I put extra effort into communicating with Mallory, that effort would be wasted," where the PoC argues that you've probably overestimated the probability that you'll waste effort. This is why RobinZ argues for disengaging with "I don't have the time for this" rather than "I don't think you're worth my time.")

But, as I've been saying in my responses to RobinZ, for me this doesn't fall under the principle of charity, this falls under the principle of "don't be an idiot yourself".

I think that "don't be an idiot" is far too terse a package. It's like boiling down moral instruction to "be good," without any hint that "good" is actually a tremendously complicated concept, and being it a difficult endeavor which is aided by many different strategies. If an earnest youth came to you and asked how to think better, would you tell them just "don't be an idiot" or would you point them to a list of biases and counterbiasing principles?

Comment author: TheAncientGeek 24 April 2014 09:55:41AM 1 point [-]

Has anyone noticed that, given that most of the material on this site is essentially about philosophy, "academic philosophy sucks" is a Crackpot Warning Sign, i.e. "don't listen to the hidebound establishment"?

Comment author: Vaniver 24 April 2014 02:31:31PM *  0 points [-]

You might be interested in this article and this sequence (in particular, the first post of that sequence). "Academic philosophy sucks" is a Crackpot Warning Sign because of the implied brevity. A measured, in-depth criticism is one thing; a smear is another.

Comment author: TheAncientGeek 23 April 2014 07:11:36PM -1 points [-]

Research and discover.

Comment author: Vaniver 23 April 2014 10:08:42PM 1 point [-]

How else would you interpret this series of clarifying questions?

Comment author: Lumifer 23 April 2014 08:57:30PM 0 points [-]

This question seems just weird to me. How do you know you can trust your cognitive system that says "nah, I'm not being biased right now"?

It's weird to me that the question is weird to you X-/

You know when and to what degree you can trust your cognitive system in the usual way: you look at what it tells you and test it against the reality. In this particular case you check whether later, more complete evaluations corroborate your initial perception or there is a persistent bias.

If you can't trust your cognitive system then you get all tangled up in self-referential loops and really have no basis on which to decide by how much to correct your thinking or even which corrections to apply.

Comment author: Vaniver 23 April 2014 10:01:37PM *  2 points [-]

It's weird to me that the question is weird to you X-/

To me, a fundamental premise of the bias-correction project is "you are running on untrustworthy hardware." That is, biases are not just of academic interest, and not just ways that other people make mistakes, but known flaws that you personally should attend to with regard to your own mind.

There's more, but I think in order to explain that better I should jump to this first:

If you can't trust your cognitive system then you get all tangled up in self-referential loops and really have no basis on which to decide by how much to correct your thinking or even which corrections to apply.

You can ascribe different levels of trust to different parts of your cognitive system, and build a hierarchy out of them. To illustrate with a simple example, I can model myself as having a 'motive-detection system,' which is normally rather accurate but loses accuracy when used on opponents. Then there's a higher-level system, a 'bias-detection system,' which detects how much accuracy is lost when I use my motive-detection system on opponents. Because this is hierarchical, I think it bottoms out in a finite number of steps; I can use my trusted 'statistical inference' system to verify the results from my 'bias-detection' system, which then informs how I use the results from my 'motive-detection system.'
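
A toy sketch of that hierarchy, with invented numbers, just to make the structure concrete -- the "modules" and their parameters here are stand-ins, not anything claimed in the thread:

```python
# Toy model: a 'motive-detection' module that over-reads hostility in
# opponents, plus a 'bias-detection' module that estimates the offset by
# comparing first impressions against later, calmer evaluations.
# All quantities are invented for illustration.
import random
random.seed(1)

TRUE_HOSTILITY = 0.2   # how hostile people actually are, on average
OPPONENT_BIAS = 0.4    # hidden inflation when judging opponents

def motive_detection(is_opponent):
    """First impression: noisy, and inflated for opponents."""
    reading = TRUE_HOSTILITY + random.gauss(0, 0.1)
    return reading + (OPPONENT_BIAS if is_opponent else 0.0)

# Bias detection: average gap between snap judgments of opponents and
# later evaluations (stood in for here by unbiased readings).
first_impressions = [motive_detection(is_opponent=True) for _ in range(200)]
later_evaluations = [TRUE_HOSTILITY + random.gauss(0, 0.1) for _ in range(200)]
estimated_bias = sum(first_impressions) / 200 - sum(later_evaluations) / 200

# Corrected judgment of a new opponent, using the bias estimate.
raw = motive_detection(is_opponent=True)
print(f"raw: {raw:.2f}  corrected: {raw - estimated_bias:.2f}")
```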

Suppose I just had the motive-detection system, and learned of PoC. The wrong thing to do would be to compare my motive-detection system to itself, find no discrepancy, and declare myself unbiased. "All my opponents are malevolent or idiots, because I think they are." The right thing to do would be to construct the bias-detection system, and actively behave in such a way to generate more data to determine whether or not my motive-detection system is inaccurate, and if so, where and by how much. Only after a while of doing this can I begin to trust myself to know whether or not the PoC is needed, because by then I've developed a good sense of how unkind I become when considering my opponents.

If I mistakenly believe that my opponents are malevolent idiots, I can only get out of that hole by either severing the link between my belief in their evil stupidity and my actions when discussing with them, or by discarding that belief and seeing if the evidence causes it to regrow. I word it this way because one needs to move to the place of uncertainty, and then consider the hypotheses, rather than saying "Is my belief that my opponents are malevolent idiots correct? Well, let's consider all the pieces of evidence that come to mind right now: yes, they are evil and stupid! Myth confirmed."

Which brings us to here:

You know when and to what degree you can trust your cognitive system in the usual way: you look at what it tells you and test it against the reality. In this particular case you check whether later, more complete evaluations corroborate your initial perception or there is a persistent bias.

Your cognitive system has a rather large degree of control over the reality that you perceive; to a large extent, that is the point of having a cognitive system. Unless the 'usual way' of verifying the accuracy of your cognitive system takes that into account, which it does not do by default for most humans, then this will not remove most biases. For example, could you detect confirmation bias by checking whether more complete evaluations corroborate your initial perception? Not really- you need to have internalized the idea of 'confirmation bias' in order to define 'more complete evaluations' to mean 'evaluations where I seek out disconfirming evidence also' rather than just 'evaluations where I accumulate more evidence.'

[Edit]: On rereading this comment, the primary conclusion I was going for- that PoC encompasses both procedural and epistemic shifts, which are deeply entwined with each other- is there but not as clear as I would like.

Comment author: Lumifer 23 April 2014 07:02:19PM 0 points [-]

the principle of charity does actually result in a map shift relative to the default.

What is the default? And is it everyone's default, or only the unenlightened ones', or whose?

This implies that the "default" map is wrong -- correct?

if you have not used the principle of charity in reaching the belief

I don't quite understand that. When I'm reaching a particular belief, I basically do it to the best of my ability -- if I am aware of errors, biases, etc. I will try to correct them. Are you saying that the principle of charity is special in that regard -- that I should apply it anyway even if I don't think it's needed?

An attribution error is an attribution error -- if you recognize it you should fix it, and not apply global corrections regardless.

Comment author: Vaniver 23 April 2014 08:44:40PM *  3 points [-]

This implies that the "default" map is wrong -- correct?

I am pretty sure that most humans are uncharitable in interpreting the skills, motives, and understanding of someone they see as a debate opponent, yes. This observation is basically the complement of the principle of charity- the PoC exists because "most people are too unkind here; you should be kinder to try to correct," and if you have somehow hit the correct level of kindness, then no further change is necessary.

I don't quite understand that. When I'm reaching a particular belief, I basically do it to the best of my ability -- if I am aware of errors, biases, etc. I will try to correct them. Are you saying that the principle of charity is special in that regard

I think that the principle of charity is like other biases.

that I should apply it anyway even if I don't think it's needed?

This question seems just weird to me. How do you know you can trust your cognitive system that says "nah, I'm not being biased right now"? This calls to mind the statistical prediction rule results, where people would come up with all sorts of stories why their impression was more accurate than linear fits to the accumulated data- but, of course, those were precisely the times when they should have silenced their inner argument and gone with the more accurate rule. The point of these sorts of things is that you take them seriously, even when you generate rationalizations for why you shouldn't take them seriously!
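
A toy illustration of the statistical-prediction-rule point, run on synthetic data rather than the actual clinical-judgment studies:

```python
# A simple linear rule fitted to accumulated cases predicts better than an
# "impression" that uses the same cue but adds a confident idiosyncratic
# override. The data and override size are invented for illustration.
import random
random.seed(0)

# Past cases: one observable cue x, outcome y = 2x + noise.
xs = [random.random() for _ in range(500)]
ys = [2.0 * x + random.gauss(0, 0.3) for x in xs]

# Fit y = a + b*x by ordinary least squares.
mean_x, mean_y = sum(xs) / len(xs), sum(ys) / len(ys)
b = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
     / sum((x - mean_x) ** 2 for x in xs))
a = mean_y - b * mean_x

def rule(x):        # the statistical prediction rule
    return a + b * x

def impression(x):  # same cue, plus an idiosyncratic override
    return rule(x) + random.gauss(0, 1.0)

def mse(predict):
    return sum((predict(x) - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

print("rule MSE:      ", round(mse(rule), 2))        # near the noise floor (~0.09)
print("impression MSE:", round(mse(impression), 2))  # roughly an order of magnitude worse
```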

(There are, of course, times when the rules do not apply, and not every argument against a counterbiasing technique is a rationalization. But you should be doubly suspicious against such arguments.)

Comment author: Lumifer 23 April 2014 04:38:24PM 1 point [-]

a useful principle of charity should emphasize that your failure to engage with someone you don't believe to be sufficiently rational is a matter of the cost of time, not the value of their contribution.

So if I believe that someone is stupid, mindkilled, etc. and is not capable (at least at the moment) of contributing anything valuable, does this principle emphasize that I should not believe that, or that I should not tell that to this someone?

Comment author: Vaniver 23 April 2014 06:36:54PM 2 points [-]

It's not obvious to me that's the right distinction to make, but I do think that the principle of charity does actually result in a map shift relative to the default. That is, an epistemic principle of charity is a correction like one would make with the fundamental attribution error: "I have only seen one example of this person doing X, I should restrain my natural tendency to overestimate the resulting update I should make."

That is, if you have not used the principle of charity in reaching the belief that someone else is stupid or mindkilled, then you should not use that belief as reason to not apply the principle of charity.

Comment author: Gunnar_Zarncke 23 April 2014 08:01:30AM 3 points [-]

Don’t try to find your “true calling” because it’s a false concept.

and

Have a mission: once you have skills, use them to explore options and find something that can be your life’s work and driving motivation.

Initially this sounded like a contradiction - "your life's work and driving motivation" just sounds like "calling" - but the point may be that you should first build skills and then, on that basis, find your calling.

Comment author: Vaniver 23 April 2014 02:36:14PM 1 point [-]

the point may be that you should first build skills and then based on that basis find your calling.

I think Newport would still argue with the word 'calling,' as it generally implies that there's some external thing that you are drawn to that is recognizable from far away. "Your life's work and driving motivation" is a much more internal thing- once you have developed the skills and craftsmanship, then you use your creativity on yourself.

Comment author: Gunnar_Zarncke 22 April 2014 10:38:10PM *  0 points [-]

Interestingly, I can sit and work on the PC almost the whole day with no problem (I shift positions a lot, take breaks, and move around). But I can't read a book for a comparable time-span without getting neck and shoulder ache, and that despite changing positions a lot more: reading while standing, while sitting, while lying down in all kinds of ways. Does anybody have an idea why?

Comment author: Vaniver 23 April 2014 12:54:52AM 0 points [-]

Do you have a bookstand, or do you hold the book up?

You could compare by, say, holding up your monitor while using the computer. (Not actually recommended, for obvious reasons.)
