by [anonymous]

This is a thread where I'm trying to figure out a few things about signalling on LessWrong, and I need some information, so please answer the poll immediately after reading about the two individuals. The two individuals:


A. Sees that an interpretation of reality shared by others is not correct, but tries to pretend otherwise for personal gain and/or safety.

B. Fails to see that an interpretation of reality shared by others is flawed. He is therefore perfectly honest in sharing that interpretation of reality with others. The reward regime for outward behaviour is the same as with A.

 

To add a trivial inconvenience that matches the inconvenience of answering the poll before reading on, my comments on what I think the two individuals signal, what the trade-off is, and what I speculate the results might be here versus the general population, are behind this link.


I found the question for the poll ("Who looks better to you?") to be rather ambiguous. For instance, I'm not sure if the following interpretations are correct:

  • "Who would you rather interact with?"
  • "Who would you rather be?"

Did you intend this?

4malthrin
I agree. Another potential distinction: "Who would you rather be?" versus "Who do you imagine you would be happier if you were?"
2[anonymous]
I was aiming for a question that would produce a simple status assessment. I hoped the halo effect would counteract some of the potential problems with the wording. After thinking about it I decided a rather ambiguous but easy-to-understand question might best capture this without discouraging too many people from contributing, so yes, it was intentional. But I have little experience with poll questions and am not a native speaker of English to boot, so please share any further constructive criticism you might have! :)
0anonym
I couldn't answer for this reason. It's asking "whom do you rate higher [according to criterion X]?" without specifying criterion X.
6[anonymous]
Criterion X is warm fuzzies.
0anonym
One can't get that from "who looks better to you?", except through a lucky guess. It could just as easily have been many other things.

Your question naturally leads to a more general one -- what is it in fact that causes people to develop beliefs that are closer to reality than the respectable consensus (which often makes them seriously disreputable)?

It is certainly not superior intelligence or knowledge. The normal modus operandi for smart people is to acquire practically useful knowledge and act on it, but at the same time, when it comes to any issues that are of more signaling than practical interest, to figure out instinctively what the respectable opinion is and converge on it, no m... (read more)

3sam0345
If one is in no position to act on a delusional belief, then optimal behavior is to hold the delusional but socially desirable belief with full sincerity. But unless one is aware of reality, one cannot reliably judge which delusion is safe. For example, one of my politically correct Australian nieces assumed it would be safe to walk down a street named Martin Luther King Boulevard. My seemingly equally politically correct American niece would be unlikely to do so.
3CaveJohnson
While the example is basically correct, I fear it might kill more minds than it enlightens. By doing this you forcibly breach a compartment in someone's mind. While of course a well-intentioned person might hope this would motivate them to fix the leak and clear the water out of the flooded compartment, it might just as well help sink the ship a little bit more, by making another part of their cognitive tool kit "dangerous" to use. Browsing through your comment history, there are some comments I disagree with and some I agree with, and certainly the same would be true for you if you looked at my commenting style. I really don't wish to brag, since my style is far from perfect, but perhaps, dare I say, my approach may be slightly more productive? Though naturally, if no commenters with your style ever appeared, I might find myself under overwhelming scrutiny and need to be quieter on some of my contrarian stances.
1Normal_Anomaly
Um, you might want to change that to Martin Luther King boulevard. It took me a minute to realize you weren't talking about an anti-semitic early Protestant.
3sam0345
Someone who sincerely believes is dangerous to himself and everyone around him, classic recent examples being Washington Mutual's Kerry Killinger and Countrywide's Angelo Mozilo. They conned everyone out of gigantic amounts of money, but their biggest victims were themselves and their banks. On the other hand, had they not sincerely believed, it is unlikely that they would have been helicoptered up to such wealth and power. If their belief had been feigned and cynical, a lot more of the disappeared money would have stuck to them. But perhaps had their belief been feigned and cynical, they would not have been the beneficiaries of such great regulatory favor. Note that Kerry Killinger and Angelo Mozilo did not come from elite universities and, by all indications, are not very bright. Goldman Sachs bailed out later than it should have, indicating some degree of unfeigned sincerity, but bailed out soon enough, indicating some degree of feigned sincerity and cynical pretense. This suggests that true believers are to be found in both elite and second-ranking universities, but more easily found in second-ranking universities.
3[anonymous]
This carries all sorts of interesting implications. This seems to be a better way to state some of what I was going for in the third paragraph of my comment. There seems to be some overlap between this and the previous one. This includes meta-contrarians, correct?

  • You are unlucky enough that what is for the vast majority empty signalling is for you practically useful (perhaps vital) knowledge.

It may be that the gap between reality and signalling would actually be too great to rationalize for anyone who had practical use for it; you are just the one stuck with it. The effect of this might in the long term be sufficient to hurt the reputation and signalling value of certain professions, economic niches or even entire (sub)cultures.
4Vladimir_M
Yes, especially when we couple it with the fact that smart people have not just more ability, but usually also stronger incentives to optimize their views for signaling value. The smarter you are, the greater the relative contribution of the signaling value of your views and opinions to your overall status is likely to be. On the very top of this scale are people whose primary identity in life is that of prestigious intellectuals. (Unsurprisingly, the views of such people tend to be extremely uniform and confined to a very narrow range of variation.)

One puzzle here however is that the level of status-driven intellectual uniformity has varied a lot historically. In the Western world it was certainly far lower, say, 100 or 150 years ago than today. Reading books from that period, it's clear that a lot of what people said and wrote was driven by signaling rather than matter-of-fact thinking, but the ratio was nothing like the overwhelming preponderance of the former that we see nowadays. It seems like back then, intellectual status-signaling was somehow successfully channeled outside of the main subjects of intellectual disputes, leaving enough room for an honest no-nonsense debate, which is practically nonexistent today in respectable venues outside of hard sciences and technical subjects. I have only some vague and speculative hypotheses about the possible explanations for these historical differences, though.

I'm not sure about that. It seems to me that these might be completely independent mechanisms. The first, unlike the second, would stem from a failure of the general mechanisms for handling status and social norms, indicating a more generally dysfunctional personality, while the second one would result in a perfectly functional individual except for this particular quirk consisting of some odd and perhaps disreputable beliefs.

Yes, this is indeed an interesting scenario. I can think of a few ongoing examples, although describing them explicitly would pro
4Eugine_Nier
This could just be the nostalgia filter (WARNING: tvtropes), i.e., there were also a lot of pure status signaling works back then, but they have since been forgotten.
5sam0345
I read enough old books to recognize nineteenth-century political correctness when I see it (example: Enlightened Imperialism). It is markedly less obnoxious and omnipresent than twenty-first-century political correctness.
5Vladimir_M
Undoubtedly there were, but I think a fair assessment can be made by observing only people who were recognized as high-status intellectuals in their own day. When I look at books written a century or more ago by people for whom I know that they were recognized as such back then, I simply don't see anything like the uniformity of opinion among practically all people who enjoy similar status today. Moreover, on many topics, it's impossible to find anything written by today's high-status intellectuals that isn't just awful cant with little or no value beyond signaling. (And it's not like I haven't looked for it.) At the same time, older literature on the same topics written by similarly prestigious people is also full of nonsense, but it's also easy to find works that are quite reasonable and matter-of-fact. Even if my conclusions are somehow biased, I don't think they can be explained by a simple nostalgia filter.
3[anonymous]
This would make for a very interesting topic of discussion; in a different context I came to a similar surprising observation. But I think more specific examples, data and perhaps a few citations might prove vital for this. It is potentially problematic because the 19th and 20th centuries are, not without cause, sometimes referred to in my corner of the world as "half-past history", since their interpretation carries direct political and ideological implications for the present day. I need to think about whether to write down my reply here or PM you regarding this; I just wanted to make this public first, so anyone else interested and willing to risk it has a chance to jump in. :)
1[anonymous]
... Yes, but some societal beliefs are about status distribution. Or, to make a more general argument, societies have differing status distributions and emphasise different ways to distribute status. It's perfectly possible that not "dealing with status signalling" in the usual way might actually be an advantage in some societies, or would have a workable niche or societal role that isn't available in some other society. Thus the system wouldn't really be a failure in such situations; the individual would be by definition functional. Historically (and even today) mental illness seems to me to be an example of a category where the status hit seems more or less proportional to societal dysfunction, or rather to how the person fails to live up to ideals, rather than making any clear distinction between the two. Why would we have differing mechanisms for this? Isn't it easier for the brain to cover both under a simple "avoid socially dysfunctional people" directive?
1Vladimir_M
The important practical distinction is that under the second scenario, the person in question would be perfectly functional until some specific issue came up where his views differ from the respectable consensus. Such a person could stay completely out of trouble by figuring out on what occasions it's advisable to keep his mouth shut. In contrast, the first scenario would imply a personality that's dysfunctional across the board due to his broken handling of status and social norms, with no easy fix. Moreover, it seems to me that broken handling of status and social norms would imply dysfunction in any society. Having problems with authority and being unable to find and maintain friends and allies is a recipe for disaster in any conceivable social order. It is true that some societies might have niche roles for some types of such individuals, but that's an exception that proves the rule.
2[anonymous]
Perhaps this as well:

  • Strategically, it seems a different set point for respectable opinion would be the best way to raise your own status.

This seems very useful in times of revolutionary change and might explain why large groups of people update so rapidly when the old set point seems to be losing but still has the upper hand. Trying to change a respectable opinion to a different setting seems a viable way to dislodge and disrupt older alliances (as the members will not update simultaneously), allowing new actors to gain power and status rapidly. This may be particularly useful for anyone who "belonged, or still belongs simultaneously, to non-mainstream social circles (or alien societies) that have different status criteria for beliefs".

People like Person A are much easier to present to the general public as generating negative utility, and thus as compromising the perceived value of a wrong worldview. This makes our job a lot easier when we expose what they are doing.

Also, they are probably far less dangerous when dealing with existential risks. Let's take something that seems common (in the US, at least), say, subscribing to a premillennial eschatology. If we put one of the two in charge of a superpower:

  • Person B might be willing to start or escalate to a global nuclear war, because they are on the
... (read more)

There is a tradeoff, and it's basically a question of whether you fear the consequences of stupidity more, or the consequences of defection. I fear stupidity more; opportunities to backstab are uncommon, but opportunities to screw things up accidentally are everywhere.

The odd thing is, our genes and default psychology seem to think we should care more about defection, and less about stupidity. Perhaps because in the evolutionary environment, literal backstabbing was more common, and things like bad driving didn't exist.

4[anonymous]
Smart people can afford to be more novelty-seeking. They are more likely to avoid catastrophic failure trying the same novelties compared to ordinary or stupid people. Defection by someone smarter than you is harder to detect or predict. And it may have much worse results: catastrophic results you may not be able to undo or repair because you are outmatched. Evolution takes into account that you are hackable by the very rare much more intelligent person. I elaborated on the analysis of this scenario here. Try playing a popular board game with someone of average or below-average intelligence. Try proposing before the game a set of rule changes. The response will be "no, let's keep the default rules to keep it fair", even if the game is broken or imbalanced and even if you can explain to them why this is so. Is this an irrational position for them to take? If you observed a game in a parallel world where everyone is smarter by one or two standard deviations, I think their response would basically be the same. But if you just raise your and their IQ in this universe I think their response may well differ. Can anyone see why I think this is so?

While the example is basically correct, I fear it might kill more minds than it enlightens.

By doing this you forcibly breach a compartment in someone's mind.

It is difficult to address the issue of sincere versus pretended socially desirable beliefs without reference to their individual consequences, which requires a reference, however delicate and indirect, to events that happened to some particular individual that would not have happened if socially desirable beliefs had been true.

I didn't know how to answer this question in the abstract. But I do hope that people like Descartes and Bacon acted more like Person A, so it looks like I'd admire Person A more in these cases.

Obviously I'm assuming that a real danger exists to explain the deception.

Edited because I misread the description of B the first time.

If the other people's wrong interpretation is causing them harm beyond the harm that A is risking, and if the other people could be convinced of their error if A made an effort, it would be altruistic/ethical to be candid. I don't have a strong opinion on how strong A's moral obligation to help others be more accurate is. With that caveat, A looks better in my eyes because ey figured out something other people didn't and was instrumentally rational enough to keep quiet about it, and there's n... (read more)

B. I prefer people who might not be bright to dishonest, manipulative people.

...did you just call closeted atheists in theocracies dishonest and manipulative?

2[anonymous]
And nearly any other religious minority in such a regime as well.
1play_therapist
Good point. What I should have said is something along the lines of, "I prefer people who might not be bright to people who are dishonest or manipulative for personal gain." There is a big difference between that and "pretending to share the interpretation of reality shared by others for safety reasons." They are two very different questions. Thinking about it more, though, I think it all depends on the circumstances. There are cases where being honest leads to discrimination. I heard that when my uncle, who had a PhD in chemistry, applied for a job at General Mills in the late 1930s or early 1940s, the employment application asked his religious preference. Discrimination against anyone other than Christians was legal and rampant at the time. He was a secular Jew, an atheist. He wrote on the application, "I have no preference, I think all religions are lovely." I don't blame him in the least.
7[anonymous]
Like every situation in life ever? Practically anything that I disclose will shift someone's assessment of me on some metric or another, and from what we know of status assessment and things like the halo effect we can infer this is almost never neatly contained. I don't quite see how, in principle, selective dishonesty in the form of disclosing information that improves my standing rather than hurts it is anything but typical human behaviour. Sure, one can argue that it's justified only when I have good reason to believe what I disclose would be used to judge me unfairly. But all people tend to view any heuristics that can discriminate against them as baseless and unfair.
4jimrandomh
I think you don't have a clear picture of the problem this poll is talking about, because you're both very practiced at softening unpleasant truths, and firmly in group (B).
3[anonymous]
Would it be fair to say that perhaps we LWers might on the whole be motivated to think up reasons why person A looks better than person B, because a substantial number of us are person A on some matters?
5wedrifid
To the extent that 'manipulative' describes A I reject the usefulness of the term. Not everyone has the luxury of undiscriminating candor. In fact, in most cases those who lack this most rudimentary of social graces cannot be trusted as social allies.
0saturn
What's your opinion about radical honesty?
0play_therapist
I think this comic illustrates what you're talking about in a cute way, http://www.dilbert.com - go to Aug. 31.
0play_therapist
I think it's generally best for people to be diplomatic and tactful. It's generally best to think through the likely consequences of saying things and to filter what one says. I see nothing wrong with telling "white lies" to spare some one's feelings, in general.
[anonymous]20

What I think the two choices signal and the trade-offs are

A. Sees that an interpretation of reality shared by others is not correct, but tries to pretend otherwise for personal gain and/or safety.

B. Fails to see that an interpretation of reality shared by others is flawed. He is therefore perfectly honest in sharing that interpretation of reality with others. The reward regime for outward behaviour is the same as with A.

Most people I would guess are discomforted by sustained duplicity. Without us necessarily realizing it, our positions on matters shi... (read more)

7sam0345
I am sorry, this is so wrong, and the only way to prove that it is wrong is to disturb one of the many dangerous wild elephants in the living room.

Person A believes that on average, members of group X are more criminal and violent than members of group Y, and that one can make deductions from this to individual cases. However he piously pretends to be horrified that a taxi cab driver would not pick up customers from certain streets, or that a pizza parlor would not deliver to certain streets.

Person B believes that on average, members of group X are more criminal and violent than members of group Y, but that one cannot make deductions from this to individual cases. He is therefore genuinely horrified that a taxi cab driver would not pick up customers from certain streets, or that a pizza parlor would not deliver to certain streets. He piously makes an arrangement to meet his niece that will result in her waiting for him on such a street. Don't choose to associate with B.

Banker A believes that on average, members of group X are less likely to repay their loans than members of group Y, even if all other factors appear to be equal, and if all other factors should appear to be equal, suspects that the member of group X has probably stolen someone's identity or is committing some other fraud. Banker A is in charge of other people's money, and is extra careful when lending it to a member of group X, but piously pretends not to be. To avoid it becoming noticeable that he is in fact discriminating, he locates his bank branches in places where few members of group X show up, and he advertises his loans in ways that members of group X are less likely to see.

Banker B believes that on average, members of group X are equally likely to repay their loans as members of group Y, or that if they are not, the fact is irrelevant and should be ignored. He notices that members of group X rarely get loans. He deduces that due to racism, this market is being underserved, and promptly sets out to serve the
2[anonymous]
Person B happens to believe what is good for them more than person A does. I don't think it follows that their rationalisations/mistakes need be consistent with each other. In fact, looking at people, it seems we can believe, and believe we believe, all sorts of contradictory things that are "good for us" in some sense or another, things that when taken seriously would seem to contradict each other. You provided two examples where his false beliefs didn't match up with gain; this naturally does happen. But I can easily provide counterexamples.

  • Person X honestly believes that intelligence tests are meaningless and everyone can achieve anything, yet he will see no problem in using the low test scores of a political opponent as a form of mockery, since clearly they really are stupid.

  • He may consider the preferences of parents who think group Y on average would have an undesirable effect on the values or academic achievement of their child, and who wish to make sure they have minimal influence on them, to be so utterly immoral that they must be proactively fought in personal and public life. But in practice he will never send his children to a school where group Y is a high percentage of the pupils. You see, that is because naturally, the school is a bad school, and no self-respecting parent sends their child to a bad school.

In both cases he manages to do basically the same thing he would have if he was person A. And I actually think that on the whole type B manages to isolate themselves from some of the fallout of false belief as well as type A. I think that this is because common problems in everyday life quickly generate commonly accepted solutions. These solutions may come with explicitly stated rationalizations or they may be unstated practices held up by status quo bias and ugh fields. Person A may even be the one to think of the original rationalization that cloaks rational behaviour based on accurate data! The just-mentioned simple conditioning ensures that at least some B people w
3sam0345
These sound to me like good reasons for not associating with B. Selective rationality makes it likely he will do bad things for bad reasons and be sincerely unaware that he is doing bad things. He can probably rationalize embezzling my money as glibly as he can rationalize avoiding a “bad school”, whereas if he is person A, and knows perfectly well he does not want his children to associate with group X, he would know if he was cheating you. Rationalization predicts bad behavior. Avoiding the inquisition does not predict bad behavior.
6Vladimir_M
But that's not what I observe in reality. As Konkvistador said, common problems generate commonly accepted solutions. A strong discrepancy between respectable beliefs and reality leads to a common problem, and a specific mode of rationalization then becomes a commonly accepted and socially approved solution. And in my experience, the fact that someone suspends rationality and adopts rationalizations in a commonly accepted way doesn't imply bad character otherwise. In fact, as a reductio ad absurdum of your position, I would point out that a complete rejection of all pious rationalizations that are common today would mean adopting a number of views that are shared only by an infinitesimal minority. But clearly it's absurd to claim that everyone outside of this tiny minority is untrustworthy and of bad character. On the other hand, I agree with you when it comes to rationalizations that are uncommon and not approved tacitly as an unspoken social convention. They are indeed a red flag as you describe.
3sam0345
It is what we observed in the recent banking crisis. To take an even more extreme example, Pol Pot was a true believer who glibly rationalized away discrepancies. One of his rationalizations was that the bad consequences of his policies were the result of comrades betraying him, which led him to torture his comrades to death.

Our financial system has just collapsed in a way that suggests that the great majority of those who adopt certain pious rationalizations applicable to the financial system are untrustworthy and of bad character. Certain single-payer medical systems apply an alarming level of involuntary euthanasia, aka murder, and everyone is piously rationalizing it, except for a tiny minority. What I see is a terrifying and extremely dangerous level of bad behavior, glibly justified by pious and politically correct rationalizations.

Breathing difficulties in old people are a wide range of complex and extremely urgent problems, frequently difficult to diagnose and expensive to treat, and apt to progress rapidly to death over a few hours. Tracheotomy or similar urgent and immediate surgical treatment is often absolutely necessary, but for administrative reasons single-payer medical systems find it very difficult to provide immediate surgery, surgery that is urgent in the sense of right now, not urgent in the sense that in a couple of weeks you will eventually be put on the extra-urgent special emergency queue of people waiting to jump the rest of the merely ordinarily urgent special emergency queue. Therefore, old people who show up at the British health care system struggling to breathe are always treated with barbiturates, which of course stops them struggling. The inability to provide emergency surgery gets rationalized by the great majority of all respectable believers in right things, and these rationalizations are an indication of moral failure. Finding that it is administratively difficult for a single payer system to provide certain kinds of tr
5Vladimir_M
I agree with this as a general assessment (though we might argue about the concrete examples). I also see plenty of terrifying and ominous deeds justified by pious rationalizations. However, I still don't see how you can infer bad character from the ordinary, everyday rationalizations of common people. Yes, these collectively add up to an awful tragedy of the commons, and individuals in high positions who do extraordinary misdeeds and employ extraordinary rationalizations are usually worse than lying, cynical climbers. But among the common folk, I really don't see any connection between individual bad character and the regular, universally accepted pious rationalizations.
5sam0345
We cannot infer bad character from the collective consequences of delusive socially approved beliefs. We can infer bad character if delusive beliefs are apt to result in individual bad consequences for other people. In the hypothetical case of the person who wholly genuinely believes in some delusive socially approved belief, one can easily see that it would result in bad consequences for friends, acquaintances, and business associates, while advancing the career of the holder of those beliefs. Therefore bad person.

What, however, about the person who semi-genuinely believes in some delusive socially approved belief, but has clever rationalizations for acting as if the belief was not true in those situations where the falsity of the belief might afflict him personally? Of course, such rationalizing Bs blur into As. How then shall we tell the difference? The difference is that a true B genuinely considers socially approved beliefs to be true, and therefore righteously imposes the corresponding socially approved behavior on others, while finding rationalizations to avoid it for himself. Therefore evil. Though he applies those clever rationalizations to avoid bad consequences for himself, he does not apply those clever rationalizations to avoid bad consequences for his friends and those he does business with, since he does not have any motivation to fool himself in those cases, and further, by not fooling himself in those cases, he demonstrates genuine allegiance to socially approved beliefs at low cost to himself.

But individuals in high positions don't employ extraordinary rationalizations, unless you call the false but socially approved beliefs that everyone is supposed to believe in, and most people believe or pretend to believe, "extraordinary". Indeed, these beliefs are socially approved precisely because the powerful find them convenient to do and justify evil. Those in high places perform extraordinary misdeeds by act
4Vladimir_M
You make your case very poignantly. I'll have to think about this a bit more. In particular, it does seem to me that people whom I find exceptionally trustworthy in my own life tend to have at least some serious disagreements with the respectable opinion, or at least aren't prone to stonewalling with pious rationalizations and moral condemnations when presented with arguments against the respectable opinion. But I'm not sure how much this is a selection effect, given that I myself have some opinions that aren't very respectable.
5handoflixue
I don't see any evidence that Person B won't defect just as readily, just that they haven't yet realized that other people are wrong. Maybe Person B is wrong simply out of an easily cured ignorance, and will happily become a "Person A" once that ignorance is cured. In short, I actually know more about the behavior of Person A, and therefore I trust them more. All I know about Person B is that they're ignorant.
1[anonymous]
Remember, person A is the odd one in his society. He doesn't share most other people's map of reality. Other people have very good reasons to doubt his rationality. Person B's ignorance being easily cured is anything but certain. Certainly a particular person B might just not have gotten around to realizing this. But I think generally you are missing what was implied in the comparison. Being person B seems to have greater fitness in certain circumstances. And we know there are mechanisms developed in our own minds that help us stay person B. I think we actually know more about the typical person B than just that he is ignorant. For starters, we de facto know he is less rational than A. Secondly, it's much more likely than with person A that the mentioned mechanisms are doing their job properly.
1sam0345
But by assumption, his society is irrational, so their reasons for doubting his rationality are themselves irrational. Needless to say, all socially desirable beliefs in our society are of course wonderfully beneficent, but let us instead suppose the society is Soviet Russia, Nazi Germany, or any society that no longer meets our highly enlightened stamp of approval. Are you better off associating with the fake Nazi or the sincere Nazi? Clearly, you are better off associating with the fake Nazi.
8wedrifid
I eat babies. (Translation: Please don't ask rhetorical questions that make me choose between agreeing with you and affiliating with sincere Nazis.)
3christina
Upvoted since I strongly agree. Arguments shouldn't require using such strongly emotionally biasing examples to be persuasive. And you made your point so wonderfully concisely.
2Normal_Anomaly
I think the question was mostly intended to be about fake and sincere creationists rather than fake and sincere Nazis.
2[anonymous]
The society given in the example is wrong. But that's not exactly the same as being irrational; I do think, however, that it's probably fair to say that person A is more rational than the society as a whole. This may be a high or low standard, mind you. Now, again, I dislike the highly charged examples, since they narrow down the scope of thinking, but I suppose you do make a vivid case. But how can they know this? If they know this, why don't they change? All else being equal, an individual being mistaken seems more likely than the societal consensus being wrong. I don't think you realize just how much human societies agree on. Also, just because society is wrong doesn't mean the individual is right. The answer for the typical person living in Nazi Germany would be? Mind you, a Nazi Germany where we don't have the benefit of hindsight that the regime will be short-lived.
2sam0345
They don't change because their beliefs are politically convenient. Because their beliefs justify the elite exercising power over the less elite. Because their beliefs justify behavior by the elite that serves the interests of members of the elite but destroys society. Searching for an example of suicidal delusions that is not unduly relevant to either today's politics or yesterday's demons - unfortunately, such examples are necessarily obscure. The nineteenth-century British belief in benevolent enlightened imperialism justified a transfer of power and wealth from the unenlightened and piratical colonialists to members of the British establishment more closely associated with the government, the elite and the better schools. Lots of people predicted this ideology would wind up having the consequences that it did have, that the pirates actually governed better, but they were, of course, ignored.
1sam0345
If I reference beliefs in our society that might cause harmful effects were they not so wise and enlightened, it also makes a vivid case. Indeed, any reference to strikingly harmful effects makes a vivid case. But some people did have the foresight that the regime was going to be short-lived, at least towards the end. Nazi strategy was explained in Hitler's widely read book. The plan was to destroy France (done), force a quick peace settlement with the anglophones (failed), and then invade and ethnically cleanse a large part of Russia. The plan was for short wars against a small set of enemies at any one time. When the British sank the Bismarck, the plan was in trouble, since anglophone air and sea superiority made it unlikely that Germany could force a quick peace, or force them to do anything they did not feel like doing, nor force them to refrain from doing anything they might feel like doing. When they sank the Bismarck in May 1941, it was apparent that the anglophones could reach Germany, and Germany could not effectively reach them. At that point all type A's should have suspected that Germany had lost the war. At Stalingrad, the plan sank without a trace, and every type A must have known that the war was lost. In general, a type A will predict the future better than a type B, since false beliefs lead society to unforeseen consequences.
0handoflixue
Ignorance does not imply unintelligence, irrationality, etc., much less make a de facto case for them. There's nothing irrational about honestly believing the group consensus if you don't have the skill foundation to see how it could be wrong. Sure, one should be open about one's ignorance, but you still have to have anticipations to function, and Bayesian evidence suggests "follow the leader" is better than "pick randomly". Especially since, not having the background knowledge in the first place, one would be hard pressed to list choices to pick randomly amongst :)
3sam0345
If someone does not have the skill foundation to see how the group-consensus is wrong, he is ignorant or stupid. Such people are, quite inadvertently, dangerous and harmful. There is no con man worse or more dangerous than a con man who sincerely believes his own scam, and is therefore quite prepared to go down with his ship.
0[anonymous]
This is true in a big way that I haven't mentioned before, though. Type B seems to me more likely than type A to cause trouble for anyone attempting to implement solutions that might avert tragedy-of-the-commons situations caused by a false society-wide belief.
-1[anonymous]
Actually he is right. Just because you can't find a flaw with the common consensus doesn't mean you are ignorant or stupid, because it's perfectly possible there is no flaw with the common consensus on a particular subject, or that the flaw is too difficult to detect by the means available to you. Perhaps it's too difficult to detect the flaw with the means the entire society has available to it! A rational agent is not an omniscient agent, after all! I think you may be letting yourself get slightly adversarial in your thinking here because you perceive this as a fight over a specific thing you estimate society is delusional about. It's not, it's really not. Chill man. :) Edit: Considering the downvotes, I just want to ask what am I missing in this comment? Thanks for any help!
0[anonymous]
Yes, but the odds of A getting the right answer from picking randomly are even lower. ;) Remember, person A was defined in this example as having a better map on this little spot, though I suppose most of the analysis done by people so far works equally well for someone who thinks he has a better map and is hiding it.
0handoflixue
So Person A believes in MWI because they read the Quantum Mechanics sequence, and Person B never thought about it beyond an article in Discover Magazine saying all the top scientists favor the Copenhagen interpretation. They're both being entirely rational about the information they have, even if Person A has the right answer :)
0[anonymous]
I suppose they are in a sense, but what exactly are the rewards, or lack of benefit, for a layman, even an educated one, believing or not believing in MWI? I think a major indicator is that I haven't heard in recent years of anyone being outed as an MWIist and losing their job as a consequence :P Nitpick: the average person who has read the QM sequence is likely above average in rationality.
7sam0345
Everyone is avoiding realistic examples, for fear that if they should disturb any of the several large elephants in the living room, they will immediately be trampled.
2handoflixue
Substitute a relevant example as needed, I'm simply trying to make the point that ignorance != irrationality. Someone who simply has more information on a field is going to reach better conclusions, and will thus need to hide controversial opinions. Someone with less information is generally going to go with the "follow the herd" strategy, because in the absence of any other evidence, it's their best bet. Thus, just based on knowledge (not rationality!) you're going to see a split between A and B types.
2[anonymous]
There doesn't have to be a correlation of 1 between ignorance and irrationality. There just has to be some positive correlation for us to judge, in the absence of other information, that A is probably more rational than B. And if there isn't a correlation greater than 0 between rationality and a proper map of reality, um, what is this rationality thing anyway?
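As an aside, the Bayesian point here can be made concrete with a toy calculation. The following is a minimal sketch with purely hypothetical numbers of my own choosing; the only claim is directional, namely that any positive correlation between rationality and an accurate map makes holding the accurate minority view evidence of rationality:

```python
# Toy Bayesian update (all probabilities are made-up assumptions).
p_rational = 0.5                 # prior: a random person is "rational"
p_sees_flaw_if_rational = 0.3    # rational people spot the consensus flaw 30% of the time
p_sees_flaw_if_irrational = 0.1  # others spot it only 10% of the time

# Total probability that a random person sees the flaw.
p_sees_flaw = (p_rational * p_sees_flaw_if_rational
               + (1 - p_rational) * p_sees_flaw_if_irrational)

# Bayes' rule: P(rational | sees flaw) = P(sees flaw | rational) * P(rational) / P(sees flaw)
posterior = p_sees_flaw_if_rational * p_rational / p_sees_flaw
print(f"P(rational | sees the flaw) = {posterior:.2f}")  # 0.75, up from the 0.5 prior
```

Any choice of numbers in which the first likelihood exceeds the second produces the same qualitative result.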
0handoflixue
Ahhh, you're meaning "we have Bayesian evidence that Person B is less likely to be rational than Person A"? I'd agree, but I still think it's weak evidence if you're only looking at a single situation, and I'd still feel I therefore know more about Person A (how they handle these situations) than I do about Person B (merely that they are either ignorant or irrational). How someone handles a situation strikes me as a more consistent trait, whereas most people seem to have enough gaps in their knowledge that a single gap is very little evidence for other gaps.
0[anonymous]
Yeah, I should have been more explicit on that, sorry for the miscommunication! Perhaps for convenience we can add that person A and B are exposed to the same information? It doesn't change the spirit of the thought experiment. I was originally implicitly operating with that as a given, but since we started discussing it I've noticed I never explicitly mentioned it. Basically I wanted to compare what kinds of things person A/B would signal to others in a certain set of circumstances.
1handoflixue
No worries. I think part of it was on me as well :)
3christina
Person B, but the magnitude of the distinction I make will probably be highly context dependent. At least that was my original answer. Now I lean toward mostly person B, and sometimes person A.

There are a number of reasons to be wary of person A. While they will likely make for a more interesting story character, in real life their behavior can cause a number of serious problems. First, while we should presumably fairly distinguish between person A and person C (where person C only thinks they see that an interpretation of reality shared by others is not correct, and tries to pretend otherwise for personal gain and/or safety), any one individual is unlikely to be able to tell whether they are A or C, and will tend to default to believing they are A. If we set person A and person C to be criminal-killing vigilantes, for example, I think it then becomes clear that we, as a society, must take certain steps to prevent the behaviors of person C from being encouraged. Otherwise, the innocent who has been truly acquitted of murder will likely die. Discouraging people from disconnecting from society in the way that both person A and C do is one way to do that. Secrecy in important issues can be a necessity if society itself is broken, but that doesn't make it desirable. The more people you can check your ideas with, the more likely you can correct your mistakes by being shown information you have not considered (or have others correct your mistakes for you, as seen below).

If we consider person B now, they are more like the juror in the court who truly believes that an innocent person is guilty of murder. I say this because a person who interfaces honestly in society will tend to be acting within a societal context. Even if they and all their peers convict the innocent person, the convicted may still have a chance to live and possibly to be freed one day. I think that many of the responders to this poll are considering these people and their behaviors in isolation, w
2[anonymous]
PS: Also, if you recall my previous comment on how finding "friendly" A's is harder than finding friendly B's, it seems to me that if LWers respond as I think they will, or even more enthusiastically than that, they will be signalling that they consider themselves uniquely gifted in such (cognitive and other) resources. Preferring person A to B seems the better choice only if it's rather unlikely that person A is significantly smarter than you, or if you are exceptionally good at identifying sociopaths and/or people who share your interests. Choice A is a "rich" man's choice: someone who can afford to use that status distribution. I hope you can also see that for A's that vastly differ in intelligence/resources/specialized abilities, cooperating for common goals is tricky. This seems to me relevant to what the values of someone likely to build a friendly AI seem to be. Has there been discussion or an article that explored these implications that I've missed so far?
3sam0345
In the recent economic crisis, who was more likely to scam you? A or B? The ones that pissed away the largest amounts of other people's money were those that pissed away their own money.
2sam0345
Assume you know the truth, and know or strongly suspect that person A knows the truth but is concealing it. OK. You are on the rocket team in Nazi Germany. You know Nazi Germany is going down in flames. Ostensibly, all good Nazis intend to win heroically. You strongly suspect that Dr Wernher von Braun is not, however, a good Nazi. You know he is a lot smarter than you, and you strongly suspect he is issuing lots of complicated lies because of lots of complicated plots. Who then should you stick with?
0EchoingHorror
Why would person A being significantly smarter be a bad thing? Just from the danger of being hacked? I'm not thinking of anything else that would weigh against the extra utility from their intelligence.
0[anonymous]
If you have two agents who can read each other's source code, they could cooperate in a prisoner's dilemma, since they would have assurance that the other would not defect. Of course we can't read each other's source code, but if our intelligence, or rather our ability to assess each other's honesty, is evenly matched, the risk of the other side defecting is at its lowest possible point shy of that (in the absence of more complex situations where we have to think about signalling to other people), wouldn't you agree? When one side is vastly more intelligent/capable, the cost of defection is clearly much, much smaller for the more capable side. All else being equal, it seems an A would rather cooperate with a B than with another A, because the cost to predict defection is lower. In other words, Bs have a discount on needed cognitive resources, despite their inferior maps, and even As have a discount when working with Bs! What I wanted to say with the PS post was that under certain circumstances (say, very expensive cognitive resources) the opportunity costs associated with a bunch of As cooperating, especially As that have group norms to actively exclude Bs, can't be neglected.
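As an aside, the source-code-reading idea has a standard toy formalization, sometimes discussed under the name "program equilibrium". The sketch below is my own illustration, not something from the thread: passing a Python callable stands in for handing over source code, and probing the opponent with an unconditional cooperator is one simple way to dodge the infinite regress of mutual simulation:

```python
# Toy one-shot prisoner's dilemma where each agent receives the other's
# "source code" (here: a callable) and returns "C" (cooperate) or "D" (defect).

def cooperate_bot(opponent_source):
    return "C"

def defect_bot(opponent_source):
    return "D"

def mirror_bot(opponent_source):
    # Cooperate iff the opponent, when shown an unconditional cooperator,
    # would cooperate. Probing with cooperate_bot rather than with
    # mirror_bot itself avoids an infinite simulation loop.
    return "C" if opponent_source(cooperate_bot) == "C" else "D"

print(mirror_bot(mirror_bot))   # C: mutual cooperation is stable
print(mirror_bot(defect_bot))   # D: defectors get defected against
print(defect_bot(mirror_bot))   # D
```

A real analysis would also have to handle agents that obfuscate or lie about their source, which is roughly the mismatched-intelligence worry raised in the comment above.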
3sam0345
The cost to predict consciously intended defection is lower. I can and have produced numerous examples of Bs unintentionally defecting in our society, but for a less controversial example, let us take a society now deemed horrid. Let us consider the fake Nazi Dr. Wernher von Braun. Dr. Wernher von Braun was an example of A. His associates were examples of Bs. He proceeded to save their lives by lying to them and others, causing them to be captured by the Americans rather than the Russians. The B's around him were busy trying to get him killed, and themselves killed.
0[anonymous]
I generally find it easier to predict behaviour when people pursue their interests than when they pursue their ideals. If their behaviour matches their interests rather than a set of ideals that they hide, isn't it easier to predict their behaviour?
0saturn
What use can we make of this information? Maybe people on LW like person A because they are aware that there's usually a Sophie's choice that comes along with finding out you were Too Smart For Your Own Good.

I don't think Markdown syntax works for top-level posts.

2[anonymous]
Thanks, I've just noticed my mistake and fixed it. :)

An alternative to the dichotomy.

[anonymous]00

person B, but the magnitude of the distinction I make will probably be highly context dependent.

[This comment is no longer endorsed by its author]
[anonymous]00

comment removed by author

[This comment is no longer endorsed by its author]

I'm more inclined to admire B, seeing perhaps a more idealistic streak.

[This comment is no longer endorsed by its author]
2MinibearRex
I misread the first phrase of B. I admire A more. Lying outwardly is not ideal, but it's better to know the truth, and be able to act upon it, than it is to believe something false.
[anonymous]00

[Poll on the formatting]

Would it be better if I had put the meat of this contribution in a separate article, in a post in the comments as I did, or not separated it in any way from the OP and the accompanying poll itself?

4[anonymous]
No separation. We can stop reading long enough to answer a poll first.
4[anonymous]
Post in the comment section of the poll thread.
0[anonymous]
Separate article to which you link.
-8[anonymous]
[anonymous]00

Poll for this question (Who looks better to you?).

Edit: Please only publicly comment on your choice elsewhere in the comment section. I want as little extra priming or influence on the people just about to answer the poll as possible. Thank you! :)

[anonymous]600

Person A.

2hairyfigment
comment moved
1[anonymous]
Hey, mind moving your answer to somewhere else in the comment section? I'm really sorry to bother you with this, but I want as little extra priming or influence on the people just about to answer the poll as possible. :) Edit: Thank you!
[anonymous]220

Person B.

0christina
Person B.
0[anonymous]
Hey, mind moving your answer to somewhere else in the comment section? I'm really sorry to bother you with this, but I want as little extra priming or influence on the people just about to answer the poll as possible. :) (will delete my post later) Edit: You seem to have originally made a comment there and then moved it here. Mind me asking why?
2christina
Sorry about that. Due to the brevity of my original comment, I was thinking of the entire thing as my vote so felt it should go in the poll. I realized after seeing your comment that even this short statement would have more effect on which way others voted than just seeing the top few votes. So I've since separated my comment from my vote and also given a lengthier comment in the comment section.
0[anonymous]
No problem, it's my fault since I should have put that into the instructions originally. Thank you!
-60[anonymous]