My intuition says presenting bad facts or pieces of reasoning is wrong, but withholding good facts or pieces of reasoning is less wrong. I assume most of you agree.

This is a puzzle, because on the face of it, the effect is the same.

Suppose the Walrus and the Carpenter are talking of whether pigs have wings.

Scenario 1: The Carpenter is 80% sure that pigs have wings, but the Walrus wants him to believe that they don't. So the Walrus claims that it's a deep principle of evolutionary theory that no animal can have wings, and the Carpenter updates to 60%.

Scenario 2: The Carpenter is 60% sure that pigs have wings, and the Walrus wants him to believe that they don't. So the Walrus neglects to mention that he once saw a picture of a winged pig in a book. Learning this would cause the Carpenter to update to 80%, but he doesn't learn this, so he stays at 60%.
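
In odds terms, the two moves are mirror images. Here is a minimal sketch of the arithmetic, assuming the Carpenter updates as an ideal Bayesian and reading the likelihood ratios off the numbers above (the scenarios themselves only state the before-and-after probabilities):

```python
# Minimal sketch: likelihood ratios implied by moving between 60% and 80%,
# assuming the Carpenter is an ideal Bayesian (illustration only).

def odds(p):
    return p / (1 - p)

p_low, p_high = 0.60, 0.80

# Strength of the withheld picture in Scenario 2 (would move 60% -> 80%)
lr_up = odds(p_high) / odds(p_low)    # 4.0 / 1.5, about 2.67

# Strength of the bogus evolutionary principle in Scenario 1 (moves 80% -> 60%)
lr_down = odds(p_low) / odds(p_high)  # 1.5 / 4.0 = 0.375

print(lr_up, lr_down, lr_up * lr_down)  # the two moves are exact inverses in odds terms
```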

In both scenarios, the Walrus chose for the Carpenter's probability to be 60% when he could have chosen for it to be 80%. So what's the difference?

If there isn't any, then we're forced to claim bias (maybe omission bias), which we can then try to overcome.

But in this post I want to try rationalizing the asymmetry. I don't feel that my thinking here is clear, so this is very tentative.

If a man is starving, not giving him a loaf of bread is as deadly as giving him cyanide. But if there are a lot of random objects lying around in the neighborhood, the former deed is less deadly: it's far more likely that one of the random objects is a loaf of bread than that it is an antidote to cyanide.

I believe that, likewise, it is more probable that you'll randomly find a good argument duplicated (conditioning on it makes some future evidence redundant), than that you'll randomly find a bad argument debunked (conditioning on it makes some future counter-evidence relevant). In other words, whether you're uninformed or misinformed, you're equally mistaken; but in an environment where evidence is not independent, it's normally easier to recover from being uninformed than from being misinformed.
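
To make this concrete, here is a toy simulation. It is entirely illustrative: the numbers for how often the good argument is duplicated and how often the bad argument is debunked in the pool are arbitrary assumptions, not anything measured. It just shows that, under those assumptions, an uninformed agent recovers far more often than a misinformed one after sampling the same number of random memes:

```python
# Toy simulation (illustrative assumptions only): good arguments are duplicated
# many times in the pool, while debunkings of one specific bad argument are rare.
# Both agents read the same number of random memes; we count how often each recovers.

import random

def recovery_rate(missing_good, n_memes=1000, n_copies_good=50,
                  n_debunkings=2, n_samples=20, trials=10_000):
    # The useful memes occupy slots 0..n_useful-1 of the pool.
    # Uninformed agent: recovers on hitting any duplicate of the good argument.
    # Misinformed agent: recovers only on hitting a debunking of its bad argument.
    n_useful = n_copies_good if missing_good else n_debunkings
    hits = 0
    for _ in range(trials):
        sample = random.sample(range(n_memes), n_samples)
        if any(m < n_useful for m in sample):
            hits += 1
    return hits / trials

print("uninformed agent recovers :", recovery_rate(missing_good=True))   # roughly 0.64
print("misinformed agent recovers:", recovery_rate(missing_good=False))  # roughly 0.04
```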

The case becomes stronger when you think of it in terms of boundedly rational agents fishing from a common meme pool. If agents can remember or hold in mind fewer pieces of information than they are likely to encounter, pieces of disinformation floating in the pool not only do damage by themselves, but do further damage by displacing pieces of good information.

These are not the only asymmetries. A banal one is that misinforming takes effort and not informing saves effort. And if you're caught misinforming, that makes you look far worse than if you're caught not informing. (But the question is why this should be so. Part of it is that, usually, there are plausible explanations other than bad faith for why one might not inform -- if not, it's called "lying by omission" -- but no such explanations for why one might misinform.) And no doubt there are yet others.

But I think a major part of it has to be that ignorance heals better than confusion when placed in a bigger pool of evidence. Do you agree? Do you think "lies" are worse than "secrets", and if so, why?


I think we inherit this idea of "lies are worse than secrets" from classic deontological morality (an act-utilitarian could try to quantify the harm caused by each and compare, so he doesn't have as deep a problem).

In my opinion, a lot of deontological morality is rooted in a system for minimizing blame. The guy who wouldn't push the fat man onto the tracks to save the people in the trolley problem ( http://en.wikipedia.org/wiki/Trolley_problem#The_fat_man ) knows that he couldn't be blamed for the trolley victims' death, but he could and would be blamed for the murder of the fat man. Therefore, he concludes that leaving the people on the track to die is more moral. Now, looking at your analogy:

"If a man is starving, not giving him a loaf of bread is as deadly as giving him cyanide. But if there are a lot of random objects lying around in the neighborhood, the former deed is less deadly: it's far more likely that one of the random objects is a loaf of bread than that it is an antidote to cyanide."

If I give a man cyanide, then I am clearly and visibly to blame for his death. From a status point of view, that's social suicide. If I fail to give him a loaf of bread, society isn't going to come knocking down my door to drag me to jail. For one thing, no one will even associate his death with me. For another, everyone else who didn't give him bread will be equally to blame. Therefore in classical morality, giving him cyanide is an evil act and not giving him bread is a neutral or barely-evil act.

Now apply that to the lies versus secrets question. If I tell a lie, and I'm caught, then people have every right to get mad at me. If I don't tell you some information that is necessary for you to succeed, you can never prove that I did it intentionally and everyone else who failed to give you that information is equally guilty. Therefore classical morality considers lying worse than keeping secrets, even in cases where utilitarian morality says I've harmed you exactly the same amount either way.

If we alter the situation to make it easier to pin the blame on me, classical morality starts condemning me more. If I am the sole witness in an important criminal case and I describe the entire scene accurately except that I fail to mention I saw the suspect there with a knife, and then a videotape later shows me at the scene, staring at the suspect, I will get in trouble. In this case, it's easy to blame me for not conveying the information, since I was ritually placed in a position where conveying the information was my sole responsibility and since it can be proven I withheld the information intentionally. And in this situation, most people would consider my withholding of information "immoral".

"If I fail to give him a loaf of bread, society isn't going to come knocking down my door to drag me to jail."

Unfortunately this is not entirely correct. Some hold the social principle that "positive rights" would demand such actions, and in many circles withholding said loaf of bread would be equated with cyanide poisoning.

This ties into the general phenomenon of not acting being considered less wrong (no pun intended) than acting. It's not just lies and secrets - not actively going out to save people's lives is fine, while killing people is wrong. Yet in both cases, people die.

The reason for that philosophy is easily understood, however. People are dying all the time, and if you're considered responsible for every death that you fail to prevent, then you should be spending all of your time saving people (or at least working on the most effective way of saving people in the long run). Obviously, folks would rather concentrate on living their own lives and not worry about saving everybody else, so they have manufactured "acting is worse than not acting" as a convenient rationalization. "Not revealing a secret isn't as bad as lying" is just a special case of the general principle.

One might think that the general principle doesn't fully apply in this example. But if you genuinely thought omitting things was as bad as telling a lie, then you should logically spend all of your time going around telling people everything you thought they didn't know, even complete strangers. If we narrow it down to the specific case of discussing a specific matter with someone, in a situation where you're expected to provide accurate information? Well, in that case the general principle doesn't apply as much, but you have to remember that it's a rationalization created to avoid taking true responsibility for things. You'll automatically want to avoid closely examining rationalizations such as those, to avoid noticing that they're flawed.

Another problem that makes lies worse than keeping secrets is that lies are tailored to be believable - they are often more believable than the truth, unless and until they are specifically investigated.

Also, "The truth is a valuable commodity that we do not automatically owe anyone." From memory from Smith's "Forge of the Elders" - not an argument from fiction, just that I cannot think of a better way of putting it. Of course there are often, I'm tempted to say usually, good reasons to make accurate information available to others, when their actions being based on accurate knowledge improves your well being.

I don't think that the "effort" distinction is banal at all.

The "lying" scenario provides us with much more information about the "liar", than the "keeping secrets" scenario provides us about the "secret keeper". Let me go into this in more detail.

An individual assumes that others have mental states, but that individual has no direct access to those mental states. An individual can only infer mental states through the physical actions of another.

For now, let's assume that an individual who can more accurately infer others' mental states from their actions will be "happier" or "more successful" than an individual who cannot.

So, given this assumption, every individual has an incentive to constantly determine others' mental states, generalize this into some mental stance, and relate that mental state and mental stance back to the individual.

With these brief preliminaries out of the way, let's examine "lying" vs "secrets".

When a person gives you misinformation, the potential liar takes an active role in trying to affect you negatively. The range of potential mental states and mental stances from this information is relatively small. The person can have a mental stance of "looking out for your best interests" (let's call this mental stance "friendliness") and be mistaken, or the person can have a mental stance of "trying to manipulate you" and be lying. The pathway to determine whether a person is "mistaken" or "lying" is relatively straightforward (compared to secrets), and if we can determine "lying" we can take action to change our relationship with the other.

When a person withholds information that may be helpful, however, we have a much stickier situation. The range of potential mental states is much broader in this situation. The person may be unsure of the accuracy of the information. The person may be unsure of the efficacy of the information to you. The person may be unsure of your willingness to receive this information. In other words, there are many reasons a person may refrain from giving you potentially helpful information and still have a mental stance of "friendliness".

And it would be hard to prove that the withholder of information actually has a mental stance of "enemy-ness".

Thus, when someone withholds information, our line of inquiry and our course of actions are far less clear than when a person gives us misinformation.

So, in summary, the asymmetry between the two situations is an asymmetry of information. The fact that an individual makes the effort to "lie" to us gives us a great deal more information about that individual's mental stance towards us. The person who "keeps a secret", on the other hand, has not given us information about their mental stance towards us.

Hope this helps provoke discussion.

David

Agreed, but for me the intuition that "lying" is worse than "keeping secrets" doesn't fully disappear when I assume that I don't care about consequences for my reputation or other punishments, so I don't think this can be the whole story.

Also, I sort of skirted over the issue by calling it "bad faith", but I don't think there's necessarily a contradiction between "lying" to someone and looking out for their best interests (consider that pigs don't in fact have wings, so the Walrus is manipulating the Carpenter toward a true conclusion), though there often is in practice.

(In case anyone is wondering why I'm putting scare quotes around "lies" and "secrets", it's because I'm thinking more in terms of misleading contributions and non-contributions to an intellectual debate than in terms of more everyday examples. I don't think my comments apply well to things like privacy issues, for example.)

"The person may be unsure of your willingness to receive this information. In other words, there are many reasons a person may refrain from giving you potentially helpful information and still have a mental stance of 'friendliness'."

I agree. For instance, you might know that people would over-value your evidence. Suppose Walrus believes that Carpenter is over-credulous. He thinks that Carpenter will take the evidence, proclaim 100% certainty that pigs have wings, and go blow all his money trying to start a flying pig farm. Walrus believes that there will be a higher cost to Carpenter from overconfidence in the belief that pigs have wings, and a lower cost from underconfidence. Consequently, Walrus will keep his evidence to himself because he knows that Carpenter will receive it with bias.


"But I think a major part of it has to be that ignorance heals better than confusion when placed in a bigger pool of evidence. Do you agree? Do you think 'lies' are worse than 'secrets', and if so, why?"

I certainly prefer secrets to lies. In most cases I have all the information I require and more, and when I don't, I tend to trust my ability to acquire it by means fair or foul. However, a few faulty pieces of information can cause all sorts of unpredictable impact on reasoning. This either leads to confusion, restarting from scratch or in some cases drastically poor decisions.

I suspect another part of my distaste for lies is that most lies tend to involve people. I hate being lied about, so I instinctively advocate the 'punish defectors and also somewhat punish those who do not punish defectors' strategy.

Although of course there are scenarios where it applies, the assumption that information increases people's perceived utility is unjustified in most normal social interactions.

I have several times had access to information that could dramatically change someone's perception of a situation, and been told by others - including, in some cases, the person themselves - that it would be morally wrong to reveal the information to them.

I think their viewpoint is that although they might realize that their ultimate utility would be greater if they accepted disturbing information, they like to control when and how they confront that information. In most situations, people can obtain all the information they want without much effort, and if they don't have information, it is usually by choice. Therefore, they feel violated when somebody imposes information on them that they have not sought out.

So, if somebody actively and clearly seeks specific information, I would feel just as bad for concealing as lying. However, if I don't know that somebody wants information, I feel I must consider carefully whether I will - in their view - harm them by revealing it. It seems likely, to me, that our moral reflexes evolved in this sort of social information context rather than one where truth often had direct adaptive value.

"This is a puzzle, because on the face of it, the effect is the same."

If A implies C and B also implies C, does that make A and B the same?

Or "actions are morally equivalent if (by some measure) they lead to the same result?"

A surgeon operates on Bob -> Bob dies. A thief stabs Bob with a knife -> Bob dies. The surgeon and the thief are morally equivalent?

The case of the Walrus and the Carpenter is only problematic if we assume that the Walrus is somehow obliged to actually help the Carpenter; but with that assumption made explicit, the Walrus is morally in error in both scenarios.

I think most people, at least in Western society, do not consider that their help is owed. As humans, they will often cooperate, but they get to choose with whom, when, and to what extent they will cooperate. Sometimes the cooperation is a "random act of kindness"; sometimes they expect reciprocation.

To act with intent to harm another is usually discouraged unless in self-defense; to decline to help someone become better off is not only morally neutral, it's also necessary. (Consider that at every moment you are helping someone specifically, there exist billions of people whom you are declining to help.)

Everyone must make choices about how they will expend the effort of their day. Some of that effort must go to help themselves else they will not survive and their help will be available to none.

Seems to me that the choice not to provide information is not "less wrong"; rather, it is not "wrong" at all.

G.

"A banal one is that misinforming takes effort and not informing saves effort."

That's an important distinction. In both scenarios, the Carpenter suffers the same disutility, but the utility for Walrus is higher for "secret" than for "lies" if his utility function values saving effort. Perhaps that's the reason we don't feel morally obligated to walk the streets all day yelling correct information at people even though many of them are uninformed.

However, this rationalization breaks down in a scenario where it takes more effort to keep a secret than to share it (such as an interrogation), although I assume our intuitions regarding such a scenario would likewise change.

I'm glad I took the time to read all the way to the bottom because this is exactly what I wanted to point out.

If the Walrus must act to misinform, then the Walrus is busy. If the Walrus can withhold information effortlessly, then he is free to do something else. The opportunity cost of lying might account for some of the pro-omission rationalization.

Then again, we're not surrounded by so many opportunities in the real world, are we?

But we might consider that this is specifically a conversation about revealing painful truths. Like in a world where the Carpenter really would be hurt if he found out pigs had wings. Let's say a person can only tell so many painful truths before they lose the ability to affect an individual (or any individual in the same network). If you go around explaining to all your friends how each of them is less than perfect, they might end your friendship (or at least ignore you). So the Walrus might realize he wants to save his painful truth for a more important truth, where the revelation will be important enough to him to offset the change in their relationship.

One thing to consider in the moral judgement of behavior is the role of intention. If a person's behavior is intended toward a specific outcome that is beneficent, then it should be considered less wrong than behavior that is intended toward a specific outcome that is malicious, regardless of the actual outcome. In the context of misinforming vs uninforming, if the intended outcome is the same for either behavior, then why should either behavior be deemed as less wrong than the other (especially when the actual outcome is the same)?

In the Walrus and Carpenter scenarios, both involve Walrus intentionally and successfully obscuring the truth with the same outcome. Whether Walrus's intentions are ultimately beneficent or malicious is undetermined. Regardless, I would not consider Walrus's behavior in either scenario less wrong than that of the other.

"A banal one is that misinforming takes effort and not informing saves effort."

I agree with others that this is not banal. In many cases it may take significant effort not to inform, as the information may be contrary to the wishes of those receiving it.

This is especially true in hierarchical situations in which lower echelon workers fear reprisal for giving the boss bad news.

This, in my mind, would put lies and secrets on an even playing field if the impact of the information being withheld were equal to that in a similar case where misinformation was spread. Both can be malicious, and both show a control of information towards biased ends.

They are two sides of the same coin.

The main difference is the action and effort involved, but other differences are the possible intent involved, and consequences. Very few moral systems demand action, except perhaps in very limited circumstances, while every moral system I know forbids some actions. The difference is quite obvious: requiring positive action will consume your entire life, while forbidding negative action leaves people free to do as they like, for the most part, and is also far easier to enforce.

There are circumstances where keeping secrets is roughly equivalent to lying in people's minds: for example, when testifying as a witness in court; when being advised by an expert to do something that might have negative consequences (even if they deem it the best course of action); or when there's something obviously wrong but easily correctable (e.g. about your appearance, or a forgotten purse), where you'd expect your friends to correct you and would be angry if you thought they noticed and said nothing. These seem to have in common that 1) there is clear information asymmetry, and 2) someone has been singled out.

I would also like to point out that proper lying by omission usually involves more effort than plain lying (if your objective is to change someone's beliefs), because it will require a whole lot of true statements and misinterpretable statements to have the same level of effect as a blatant lie. And it would take even more effort to do this in a way such that it wouldn't be obvious what you did if your mark found out the truth.

"In both scenarios, the Walrus chose for the Carpenter's probability to be 60% when he could have chosen for it to be 80%. So what's the difference?"

When you provide a Bayesian agent information in order to shift its assessment of the probability of some statement S, you do not only affect its assessment of P(S). You also affect its assessment of the probability of various other statements. The two scenarios are not equivalent because they describe different choices between assessments of the probabilities of various statements beyond the statement that pigs have wings (e.g. statements about evolution).

In addition, a Bayesian agent's knowledge about a statement S is not completely captured by its current assessment of P(S); it is captured by all evidence it has accumulated that is relevant to S, and it is not a priori clear whether or not it is possible to do better than this (although it is; see Chapter 18 of Jaynes).

For example, suppose a Bayesian agent is trying to reason about a coin. Consider the following two pieces of prior knowledge:

  • "The coin is fair."
  • "There is a 50% chance that both sides of the coin are heads and a 50% chance that both sides of the coin are tails."

Both pieces of prior knowledge lead to an assessment of 50% for the probability that the coin will show heads if the agent flips it. However, after observing such a coin flip, the agent's assessment of the probability that the coin will show heads on the next flip is still 50% under the first piece of prior knowledge, but will be either 100% or 0% under the second, depending on whether the first flip was heads or tails. Hence, before the outcome of the first coin flip, the agent's assessment of P(heads) does not capture everything the agent knows about the coin.
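
A minimal numerical sketch of this example, assuming ideal Bayesian updating (my own illustration; the two priors are the ones listed above):

```python
# Sketch of the coin example above: both priors give P(heads) = 0.5 before any
# flip, but they respond very differently to observing one flip.

def p_next_heads(prior, first_flip_heads):
    """P(next flip is heads) after observing one flip, under each prior."""
    if prior == "fair":
        return 0.5                 # flips are independent; the observation teaches nothing
    if prior == "double-sided":
        # 50/50 mixture of a two-headed and a two-tailed coin:
        # a single observation identifies the coin exactly.
        return 1.0 if first_flip_heads else 0.0
    raise ValueError(prior)

for prior in ("fair", "double-sided"):
    print(prior, "| before any flip: 0.5 | after seeing heads:",
          p_next_heads(prior, first_flip_heads=True))
```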


If one perpetrates falsehood, the territory evidently reflects the truth. If one perpetrates silence, the territory evidently reflects the truth. In either case, the territory is there for whoever has eyes and wits to look at it.

What if the falsehood is unknown to be such by the speaker: a counter-fact offered in firm belief and with good intention?