Person B believes that on average, members of group X are more criminal and violent than members of group Y, but that one cannot extrapolate from this to individual cases. He is therefore genuinely horrified that a taxi driver would refuse to pick up customers from certain streets, or that a pizza parlor would refuse to deliver to them. He then piously arranges to meet his niece in a way that leaves her waiting for him on just such a street.
Person B happens to believe what is good for him more often than person A does. I don't think it follows that his rationalizations and mistakes need be consistent with each other. In fact, looking at people, it seems we can believe, and believe that we believe, all sorts of contradictory things that are "good for us" in one sense or another, things that would contradict each other if taken seriously. You provided two examples where his false beliefs didn't match up with gain; this naturally does happen. But I can easily provide counterexamples.
Person X honestly believes that intelligence tests are meaningless and that anyone can achieve anything, yet he sees no problem in using a political opponent's low test scores as a form of mockery, since clearly they really are stupid.
He may consider the preferences of parents who think group Y would, on average, have an undesirable effect on their child's values or academic achievement, and who wish to minimize that influence, to be so utterly immoral that they must be proactively fought in personal and public life. But in practice he will never send his children to a school where group Y makes up a high percentage of the pupils. You see, that is because, naturally, it is a bad school, and no self-respecting parent sends their child to a bad school.
In both cases he manages to do basically the same thing he would have done as person A. And I actually think that, on the whole, type B manages to insulate themselves from some of the fallout of false belief about as well as type A does. I think this is because common problems in everyday life quickly generate commonly accepted solutions. These solutions may come with explicitly stated rationalizations, or they may be unstated practices held up by status quo bias and ugh fields. Person A may even be the one to invent the original rationalization that cloaks rational behaviour based on accurate data! Simple conditioning then ensures that at least some B people will adopt it. If person B wanders into uncommon situations, however, he may indeed pay a price.
Naturally, an alternative explanation is that a great portion of seemingly B-type people are in fact A-type people.
This is a thread where I'm trying to figure out a few things about signalling on LessWrong, and I need some information, so please answer the poll immediately after reading about the two individuals. The two individuals:
A. Sees that an interpretation of reality shared by others is not correct, but tries to pretend otherwise for personal gain and/or safety.
B. Fails to see that an interpretation of reality shared by others is flawed. He is therefore perfectly honest in sharing that interpretation of reality with others. The reward regime for outward behaviour is the same as with A.
To add a trivial inconvenience matching the inconvenience of answering the poll before reading on, my comments on what I think the two individuals signal, what the trade-off is, and what I speculate the results here might be versus the general population, are behind this link.