Vaniver comments on Self-Congratulatory Rationalism - Less Wrong

Post author: ChrisHallquist 01 March 2014 08:52AM (51 points)


Comment author: Vaniver 24 April 2014 06:56:56PM 1 point

First, the issue somewhat drifted from "to what degree should you update on the basis of what looks stupid" to "how careful you need to be about updating your opinion of your opponents in an argument".

I understand PoC to only apply in the latter case, with a broad definition of what constitutes an argument. A teacher, for example, likely should not apply the PoC to their students' answers, and should instead worry about the illusion of transparency and the double illusion of transparency. (Checking the ancestral comment, it's not obvious to me that you wanted to switch contexts; 7EE1D988 and RobinZ both look like they're discussing conversations or arguments, and you may want to be clearer in the future about context changes.)

I am not primarily talking about arguments, I'm talking about the more general case of observing someone being stupid and updating on this basis towards the "this person is stupid" hypothesis.

Here, I think you just need to make fundamental attribution error corrections (as well as any outgroup bias corrections, if those apply).

Given this, who is doing the trusting or distrusting?

Presumably, whatever module sits on the top of the hierarchy (or sufficiently near the top of the ecological web).

Should she tell herself her hardware is untrustworthy and invite Bob overnight?

From just the context given, no, she should trust her intuition. But we could easily alter the context so that she should tell herself that her hardware is untrustworthy and override her intuition: perhaps she has social anxiety or paranoia she's trying to overcome, and a trusted (probably female) friend doesn't get the same threatening vibe from Bob.

True, which is why I want to compare to reality, not to itself. If you decided that Mallory is a malevolent idiot and still happen to observe him later on, well, does he behave like one?

You don't directly perceive reality, though, and your perceptions are determined in part by your behavior, in ways both trivial and subtle. Perhaps Mallory is able to read your perception of him from your actions, and thus behaves cruelly towards you?

As a more mathematical example, in the iterated prisoner's dilemma with noise, TitForTat performs poorly against itself, whereas a forgiving TitForTat performs much better. PoC is the forgiveness that compensates for the noise.
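This comparison is easy to check numerically. The sketch below is not from the thread; the payoff matrix, the 5% noise rate, and the 0.3 forgiveness probability are illustrative assumptions, but the qualitative result (generous TitForTat outscores plain TitForTat against itself under noise) is the standard one.

```python
import random

# Prisoner's dilemma payoffs for (my_move, their_move): C = cooperate, D = defect.
PAYOFF = {('C', 'C'): 3, ('C', 'D'): 0, ('D', 'C'): 5, ('D', 'D'): 1}

def play(strategy_a, strategy_b, rounds=200, noise=0.05, rng=None):
    """Average per-round score for each player; each intended move
    is flipped with probability `noise` (the 'trembling hand')."""
    rng = rng or random.Random()
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(hist_a, hist_b, rng)
        move_b = strategy_b(hist_b, hist_a, rng)
        if rng.random() < noise:
            move_a = 'D' if move_a == 'C' else 'C'
        if rng.random() < noise:
            move_b = 'D' if move_b == 'C' else 'C'
        hist_a.append(move_a)
        hist_b.append(move_b)
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
    return score_a / rounds, score_b / rounds

def tit_for_tat(own, other, rng):
    # Copy the opponent's last move; cooperate on the first round.
    return other[-1] if other else 'C'

def generous_tft(own, other, rng, forgiveness=0.3):
    # Like TitForTat, but forgive a defection with probability `forgiveness`.
    if other and other[-1] == 'D' and rng.random() >= forgiveness:
        return 'D'
    return 'C'

rng = random.Random(0)
tft = sum(play(tit_for_tat, tit_for_tat, rng=rng)[0] for _ in range(200)) / 200
gtft = sum(play(generous_tft, generous_tft, rng=rng)[0] for _ in range(200)) / 200
print(f"TFT vs TFT:   {tft:.2f}")
print(f"GTFT vs GTFT: {gtft:.2f}")
```

A single accidental defection locks two TitForTat players into retaliation cycles, while the forgiving variant breaks the cycle and restores mutual cooperation, which is the sense in which forgiveness compensates for noise.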

I don't see why.

This is discussed a few paragraphs ago, but this is a good opportunity to formulate it in a way that is more abstract but perhaps clearer: claims about other people's motives or characteristics are often claims about counterfactuals or hypotheticals. Suppose I believe "If I were to greet Mallory, he would snub me," and thus in order to avoid the status hit I don't say hi to Mallory. In order to confirm or disconfirm that belief, I need to alter my behavior; if I don't greet Mallory, then I don't get any evidence!

(For the PoC specifically, the hypothetical is generally "if I put extra effort into communicating with Mallory, that effort would be wasted," where the PoC argues that you've probably overestimated the probability that you'll waste effort. This is why RobinZ argues for disengaging with "I don't have the time for this" rather than "I don't think you're worth my time.")

But, as I've been saying in my responses to RobinZ, for me this doesn't fall under the principle of charity, this falls under the principle of "don't be an idiot yourself".

I think that "don't be an idiot" is far too terse a package. It's like boiling down moral instruction to "be good," without any hint that "good" is actually a tremendously complicated concept, and that being good is a difficult endeavor aided by many different strategies. If an earnest youth came to you and asked how to think better, would you tell them just "don't be an idiot," or would you point them to a list of biases and counterbiasing principles?

Comment author: RobinZ 24 April 2014 09:52:24PM 1 point

I think that "don't be an idiot" is far too terse a package.

In Lumifer's defense, this thread demonstrates pretty conclusively that "the principle of charity" is also far too terse a package. (:

Comment author: Vaniver 24 April 2014 11:57:35PM 1 point

For an explanation, agreed; for a label, disagreed. That is, I think it's important to reduce "don't be an idiot" into its many subcomponents, and to identify them separately whenever possible.

Comment author: RobinZ 25 April 2014 12:40:01AM 0 points

Mm - that makes sense.

Comment author: Lumifer 25 April 2014 12:54:47AM 0 points

"the principle of charity" is also far too terse a package

Well, not quite, I think the case here was/is that we just assign different meanings to these words.

P.S. And here is yet another meaning...

Comment author: Lumifer 25 April 2014 12:52:59AM 0 points

perhaps she has social anxiety or paranoia she's trying to overcome

That's not the case where she shouldn't trust her hardware -- that's the case where her software has a known bug.

In order to confirm or disconfirm that belief, I need to alter my behavior; if I don't greet Mallory, then I don't get any evidence!

Sure, so you have to trade off your need to discover more evidence against the cost of doing so. Sometimes it's worth it, sometimes not.

where the PoC argues that you've probably overestimated the probability that you'll waste effort.

Really? For a randomly sampled person, my prior already is that talking to him/her will be wasted effort. And if in addition to that he offers evidence of stupidity, well... I think you underappreciate opportunity costs -- there are a LOT of people around and most of them aren't very interesting.

I think that "don't be an idiot" is far too terse a package.

Yes, but properly unpacking it will take between one and several books at best :-/

Comment author: Vaniver 25 April 2014 02:32:49AM 0 points

That's not the case where she shouldn't trust her hardware -- that's the case where her software has a known bug.

For people, is there a meaningful difference between the two? The primary difference between "your software is buggy" and "your hardware is untrustworthy" that I see is that the first suggests the solution is easier: just patch the bug! It is rarely enough to just know that the problem exists, or what steps you should take to overcome the problem; generally one must train oneself into being someone who copes effectively with the problem (or, rarely, into someone who does not have the problem).

I think you underappreciate opportunity costs -- there are a LOT of people around and most of them aren't very interesting.

I agree there are opportunity costs; I see value in walled gardens. But just because there is value doesn't mean you're not overestimating that value, and we're back to my root issue: your response to "your judgment of other people might be flawed" seems to be "but I've judged them already, why should I do it twice?"

Yes, but properly unpacking it will take between one and several books at best :-/

Indeed; I have at least a shelf and growing devoted to decision-making and ameliorative psychology.

Comment author: Lumifer 25 April 2014 03:23:08AM 0 points

For people, is there a meaningful difference between the two?

Of course. A stroke, for example, is a purely hardware problem. In more general terms, hardware = brain and software = mind.

"but I've judged them already, why should I do it twice?"

I said I will update on the evidence. The difference seems to be that you consider that insufficient -- you want me to actively seek new evidence and I think it's rarely worthwhile.

Comment author: EHeller 25 April 2014 04:25:47AM 2 points

A stroke, for example, is a purely hardware problem. In more general terms, hardware = brain and software = mind.

I don't think this is a meaningful distinction for people. People can (and often do) have personality changes (and other changes of 'mind') after a stroke.

Comment author: Lumifer 25 April 2014 02:55:13PM 0 points

I don't think this is a meaningful distinction for people.

You don't think it's meaningful to model people as having a hardware layer and a software layer? Why?

People can (and often do) have personality changes (and other changes of 'mind') after a stroke.

Why are you surprised that changes (e.g. failures) in hardware affect the software? That seems to be the way these things work, both in biological brains and in digital devices. In fact, humans are unusual in that for them the causality goes both ways: software can and does affect the hardware, too. But hardware affects the software in pretty much every situation where it makes sense to speak of hardware and software.

Comment author: Vaniver 25 April 2014 02:58:08PM 1 point

In more general terms, hardware = brain and software = mind.

Echoing the others, this is more dualistic than I'm comfortable with. It looks to me that in people, you just have 'wetware' that is both hardware and software simultaneously, rather than the crisp distinction that exists between them in silicon.

you want me to actively seek new evidence and I think it's rarely worthwhile.

Correct. I do hope that you noticed that this still relies on a potentially biased judgment ("I think it's rarely worthwhile" is a counterfactual prediction about what would happen if you did apply the PoC), but beyond that I think we're at mutual understanding.

Comment author: Lumifer 25 April 2014 03:36:30PM 0 points

Echoing the others, this is more dualistic than I'm comfortable with

To quote myself, we're talking about "model[ing] people as having a hardware layer and a software layer". And to quote Monty Python, it's only a model. It is appropriate for some uses and inappropriate for others. For example, I think it's quite appropriate for a neurosurgeon. But it's probably not as useful for thinking about biofeedback, to give another example.

I do hope that you noticed that this still relies on a potentially biased judgment

Of course, but potentially biased judgments are all I have. They would still be all I have even if I were to diligently apply the PoC everywhere.

Comment author: [deleted] 25 April 2014 03:37:20AM 0 points

Of course. A stroke, for example, is a purely hardware problem. In more general terms, hardware = brain and software = mind.

Huh, I don't think I've ever understood that metaphor before. Thanks. It's oddly dualist.

Comment author: TheAncientGeek 24 April 2014 08:07:42PM -2 points

I'll say it again: the PoC isn't at all about when it's worth investing effort in talking to someone.