
Lumifer comments on Experience of typical mind fallacy. - Less Wrong Discussion

Post author: Elo, 27 April 2015 06:39PM




Comment author: Lumifer, 28 April 2015 11:44:07PM, 0 points

The difference is that if the teacher were aware of what he was doing, he wouldn't do it.

Eh, no, I don't think so. I'm not buying into the "if only people were more self-aware, they would be a lot nicer" theory. Especially with "it's not his fault, he just doesn't know any better" overtones.

Because of the typical-mind fallacy, the smart person will then assume everyone is just kind of alien and terrible in their intentions, rather than just slightly worse at carrying intentions out.

No, I still don't think so. A smart person should be able to figure out Hanlon's Razor. I don't know any smart kids who actually had the "all of them are as smart as me, just much more mean" attitude towards others.

I model the other person as accidentally doing 2x3=6.

That's a weird model. If it's "accidental", do you then predict that the next time it will be 4, or 7, or 11, or something random?

My usual starting model for other people is "What are their incentives? What are they trying to do, to the best of their ability?" Only in the fairly rare cases of a major mismatch do I start to consider the possibility that these people might be really clueless, or really mean, or something like that.

Comment author: someonewrongonthenet, 29 April 2015 04:50:34AM, 0 points

I would predict they'll repeat whatever failure modes they've shown in the past, or commit the failures which I barely catch myself from making.

Are you sure that you don't first look at the behavior and then calculate an incentive map? (Which will obviously fit rather well, since it is post hoc?) ((Because that's the failure mode most people fall into)) (((and doesn't your last paragraph depict a thought process which is the exact opposite of Hanlon's Razor?)))

Comment author: Lumifer, 29 April 2015 02:26:57PM, 1 point

Are you sure that you don't first look at the behavior and then calculate an incentive map?

Well, both. Normally I estimate (and update) the model(s) in the middle of an interaction. Before I have no data and have to fall back on priors, and after I have no need for a model.

Are you saying there are, um, methodological problems with this approach?

doesn't your last paragraph depict a thought process which is the exact opposite of Hanlon's Razor?

Doesn't look like that to me. The opposite of Hanlon's Razor is "I don't understand her, therefore she is trying to hurt me". I start by trying to figure out what the person wants, and only if I fail do I start to consider that she might be clueless (as Hanlon's Razor would suggest) or mean (in case Hanlon's Razor is wrong here).