If a lion could speak, we could not understand her.

—Ludwig Wittgenstein

In order for information to be transmitted from one place to another, it needs to be conveyed by some physical medium: material links of cause and effect that vary in response to variation at the source, correlating the states of different parts of the universe—a "map" that reflects a "territory." When you see a rock, that's only possible because the pattern of light reflected from the rock into your eyes is different from what it would have been if the rock were a different color, or if it weren't there.

This is the rudimentary cognitive technology of perception. Notably, perception only requires technology on the receiving end. Your brain and your eyes were optimized by natural selection to be able to do things like interpreting light as conveying information from elsewhere in the universe. The rock wasn't: rocks were just the same before any animals evolved to see them. The light wasn't, either: light reflected off rocks just the same before, too.

In contrast, the advanced cognitive technology of communication is more capital-intensive: not only the receiver but also the source (now called the "sender") and the medium (now called "signals") must be optimized for the task. When you read a blog post about a rock, not only did the post author need to use the technology of perception to see the rock, you and the author also needed to have a language in common, from which the author would have used different words if the rock were a different color, or if it weren't there.

Like many advanced technologies, communication is fragile and needs to be delicately maintained. A common language requires solving the coordination problem of agreeing on a convention that assigns meanings to signals—and maintaining that convention through continued usage. The existence of stable solutions to the coordination problem ends up depending on the communicating agents' goals, even if the meaning of the convention (should the agents succeed in establishing one) is strictly denotative. If the sender and receiver's interests are aligned, a convention can be discovered by simple reinforcement learning from trial and error. This doesn't work if the sender and receiver's interests diverge—if the sender would profit by making the receiver update in the wrong direction. Deception is parasitic on conventional meaning: it is impossible for there to be a language in which most sentences were lies—because then there could be no way to learn what the "intended" meaning was. The incentive to deceive thus threatens to snowball to undermine the preconditions for signals to refer to anything at all.
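
As a toy illustration of convention discovery under aligned interests, consider a two-state Lewis signaling game learned by simple urn-style (Roth-Erev) reinforcement. (A minimal sketch; the setup and all numbers are invented for the example.)

```python
import random

# Two world states, two signals, two acts. Sender and receiver share
# interests: both are rewarded exactly when the receiver's act matches
# the state. Choice weights act like urns of balls; a successful choice
# gets an extra ball (Roth-Erev reinforcement).
STATES = SIGNALS = ACTS = (0, 1)

sender_w = {s: {m: 1.0 for m in SIGNALS} for s in STATES}
receiver_w = {m: {a: 1.0 for a in ACTS} for m in SIGNALS}

def draw(weights):
    """Sample a key with probability proportional to its weight."""
    return random.choices(list(weights), weights=list(weights.values()))[0]

for _ in range(5000):
    state = random.choice(STATES)
    signal = draw(sender_w[state])
    act = draw(receiver_w[signal])
    if act == state:  # success: both parties reinforce what they just did
        sender_w[state][signal] += 1.0
        receiver_w[signal][act] += 1.0

for s in STATES:
    m = max(sender_w[s], key=sender_w[s].get)
    a = max(receiver_w[m], key=receiver_w[m].get)
    print(f"state {s} -> signal {m} -> act {a}")  # (almost always) a signaling system
```

Neither agent needs to know in advance what the signals "mean"; trial and error suffices, but only because success for one is success for both. Reward the sender for mismatches instead, and the same dynamic yields no stable convention.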

There is, however, another way to solve the coordination problem of meaning. If the sender pays different costs for sending different signals, communication between adversaries becomes possible, using an assignment of meanings to signals that makes it more expensive to say things when they aren't true. If somehow granted a telegraph wire, a gazelle and a cheetah would have nothing to say to each other: any gazelle would prefer to have the language to say, "Don't tire yourself out chasing me; I'm too fast"—but precisely because any gazelle would say it, no cheetah would have an incentive to learn Morse code. But if the gazelle leaps in the air with its legs stiffened—higher than weak or injured gazelles could leap—then the message can be received.
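
The logic can be spelled out with a toy payoff check (numbers invented for the example): in the candidate honest equilibrium, the cheetah chases exactly the gazelles that don't leap, and honesty is stable just in case neither type of gazelle would rather imitate the other.

```python
# Stotting as a costly signal, with made-up numbers. The leap is cheap
# for a fit gazelle and ruinously expensive for an unfit one; the
# cheetah (in the candidate equilibrium) chases only non-leapers.
ESCAPE_VALUE = 10.0                       # payoff of ending up uncaught
CHASE_ENERGY_FIT = 3.0                    # a fit gazelle escapes the chase, but tired
CAUGHT_LOSS_UNFIT = 10.0                  # an unfit gazelle is caught
LEAP_COST = {"fit": 1.0, "unfit": 12.0}   # the differential signal cost

def payoff(fitness, leaps):
    if leaps:                             # cheetah doesn't bother chasing
        return ESCAPE_VALUE - LEAP_COST[fitness]
    if fitness == "fit":                  # chased, escapes anyway, tired
        return ESCAPE_VALUE - CHASE_ENERGY_FIT
    return ESCAPE_VALUE - CAUGHT_LOSS_UNFIT  # chased and caught

for fitness in ("fit", "unfit"):
    honest = (fitness == "fit")           # honest behavior: leap iff fit
    assert payoff(fitness, honest) > payoff(fitness, not honest)
    print(fitness, "gazelles prefer honesty:",
          payoff(fitness, honest), ">", payoff(fitness, not honest))
```

The inequality doing the work is that the leap costs the unfit type more than deceiving the cheetah is worth, which is why the signal has to be higher than weak or injured gazelles could leap.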

Costly signals are both wasteful and sharply limited in their expressive power: it's hard to imagine doing any complex grammar and logic under such constraints. Is this really the only possible way to talk to people who aren't your friends? The situation turns out not to be nearly that bleak: Michael Lachmann, Szabolcs Számadó, and Carl T. Bergstrom point out that maintaining a convention only requires that departing from it be costly. In the extreme case, if people straight-up died if they ever told a lie, then the things people actually said would be true. More realistically, social sanction against liars is enough to decouple the design of signaling conventions from the enforcement mechanism that holds them in place, enabling the development of complex language. Still, this works better for the aspects of conflicting interests that are verifiable; communication on more contentious issues may fall back to costly signaling.
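
In this regime the relevant condition is no longer a cost paid for every signal, but a deterrence inequality on deviations: lying doesn't pay so long as the expected sanction for a detected lie exceeds the gain from an undetected one. A one-line check with invented numbers:

```python
# Deviation-cost condition, with made-up numbers: cheap talk stays honest
# as long as the expected sanction for lying outweighs the gain.
GAIN_FROM_LIE = 5.0      # what the sender wins if the lie is believed
DETECTION_PROB = 0.3     # chance the lie is caught (the claim is verifiable)
SANCTION = 20.0          # social penalty imposed on detected liars

expected_lie_payoff = GAIN_FROM_LIE - DETECTION_PROB * SANCTION
print("lying pays:", expected_lie_payoff > 0)  # False (-1.0): honesty holds
```

As DETECTION_PROB falls toward zero (the claim is unverifiable), no finite sanction deters, which is exactly where the fallback to costly signaling bites.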

The fragility of communication lends plausibility to theories that attribute signaling functions to human and other animal behavior. To the novice, this seems counterintuitive and unmotivatedly cynical. "Art is signaling! Charity is signaling! Conversation is signaling!" Really? Why should anyone believe that?

The thing to remember is this: the "signal" in "virtue signal" is the same sense of the same word as the "signal" in "communication signal." Flares are distress signals: if people only fire them in an emergency, then the presence of the flare communicates the danger. In the same way, if more virtuous people are better at virtue signaling, then the presence of the signal indicates virtue. If natural selection designs creatures that have diverging interests and yet need to communicate with each other, then those creatures will probably have lots of adaptations for providing expensive-to-fake evidence of the information they need to communicate. That's the only way to do it!

13 comments

It doesn't seem generally true that communication requires delicate maintenance. Liars have existed for thousands of years, and languages have diverged and evolved, and yet we still are able to communicate straightforwardly the vast majority of the time! Like you said, lying loses its effectiveness the more it is used, and so there's a counter-pressure which automatically prevents it from taking over.

Perhaps this analogy will help us talk about things more clearly. We can think of a communication-sphere as being a region with a temperature. Just as a region must be at equilibrium in order to have a temperature at all, so a communication-sphere must have enough interaction that there's a shared sense of meaning and understanding. The higher the temperature, the more uncertainty there is over what is meant in an average interaction. Normal society operates around room temperature, which is quite far from absolute zero. But machinery and computers and life are all able to operate functionally here even so! On Less Wrong, the temperature is around liquid nitrogen: considerably colder, but still not particularly close to absolute zero. People are a lot more careful with the reasoning and meanings here, but various ambiguities of language are still present, as well as some external entropy introduced by deceptive processes. It takes some effort to maintain this temperature, but not so much that it can't exist as a public website. It seems to me like you are advocating that we (but who exactly is unclear) try to bring things down to liquid helium temperatures, maybe because unusual levels of cooperation like superfluidity become possible. And it is only around here where this temperature becomes fragile, and requires delicate maintenance.

"It doesn't seem generally true that communication requires delicate maintenance. Liars have existed for thousands of years, and languages have diverged and evolved, and yet we still are able to communicate straightforwardly the vast majority of the time! Like you said, lying loses its effectiveness the more it is used, and so there's a counter-pressure which automatically prevents it from taking over."

It seems to me that there are numerous instances, from the Challenger O-rings to Iraqi WMDs to Lysenkoism, where telling lies has become normalized. Usually followed shortly by catastrophe. You could argue (and I would agree) that such catastrophes are simply part of the "automatic counter-pressure" that allows language to continue to exist. But there's an understandable desire to find other mechanisms that don't require as much suffering and death.

Not all lies are the same.

I think Adele's framework is slightly better than Zack's here, but I perhaps agree with you that I struggle to use either to describe Lysenkoism, for example, or expressing the belief that RBMK reactors are infallible.

Simpler concepts like wish-fulfillment and Yarvin's Observation (below) seem better at explaining virtue-signaling behavior and impression management to me.

"...in many ways nonsense is a more effective organizing tool than the truth. Anyone can believe in the truth. To believe in nonsense is an unforgeable demonstration of loyalty. It serves as a political uniform. And if you have a uniform, you have an army."


It also depends on what you mean by "liars". A lot of political speech that itself states lies seems to be in-group signaling.

For example, posting that a certain politician lost the election due to widespread voting fraud isn't really saying you believe there was fraud. It is stating that you support that side. The message header essentially is the message, even if the message body is lies.

Thermodynamics is the deep theory behind steam engine design (and many other things) -- it doesn't tell you how to build a steam engine, but to design a good one you probably need to draw on it somewhat.

This post feels like a gesture at a deep theory behind truth-oriented forum / community design (and many other things) -- it certainly doesn't tell you how to build one, but to design a good one you probably need to think about at least the things it talks about. Also applicable to many other things, of course.

It also has the virtue of being very short. Per word, it's one of my favorite posts.

"it is impossible for there to be a language in which most sentences were lies"
Is it? If 40% of the time people truthfully described what colour a rock was, and 60% of the time they picked a random colour to falsely describe it as (perhaps some speakers benefit from obscuring the rock's true colour but derive no benefit from false belief in any particular colour), we would have a case where most sentences describing the rock were lies and yet listening to someone describing an unknown rock still allowed you to usefully update your priors. That ability to benefit from communication seems like all that should be necessary for a language to survive.
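
A quick Bayes check of this scenario (assuming, for concreteness, four equally likely colours and that liars name a random wrong colour):

```python
from fractions import Fraction

# Four equally likely rock colors; speakers tell the truth 40% of the
# time and otherwise name a random *wrong* color. Most statements are
# lies, yet hearing a color still moves you in the right direction.
colors = 4
prior = Fraction(1, colors)
p_honest = Fraction(2, 5)
p_say_c_if_true_c = p_honest                        # truthful report
p_say_c_if_not_c = (1 - p_honest) / (colors - 1)    # lies spread over 3 wrong colors

posterior = (p_say_c_if_true_c * prior) / (
    p_say_c_if_true_c * prior + p_say_c_if_not_c * (1 - prior)
)
print(posterior)  # 2/5: up from the prior of 1/4
```

The update works because these lies are random rather than targeted at instilling any particular false belief.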

Agreed. Good counter-example.

I'm very curious as to whether Zack has a way of reformulating his claim to save it.


What if the lies are biased, and they involve observations about an event that you cannot check, or that is expensive to check? As an example: how many ribs do men and women have?

Meta note: I like how this is written. It's much shorter and more concise than a lot of the other posts you wrote in this sequence.

What is being signalled by your use of "her" in the Wittgenstein quotation, rather than "it" as in the Anscombe translation?

Personal whimsy. Probably don't read too much into it. (My ideology has evolved over the years such that I think a lot of the people who are trying to signal something with the generic feminine would not regard me as an ally, but I still love the æsthetic.)

The stronger the common interest in accurate signaling, the higher the opportunity cost of doing something else. That is sufficient for an economic differential signaling cost, though not necessarily an evolutionary one: in some cases where the expected value of honesty far exceeds the expected value of dishonesty, we may still want to prevent dishonest behavior from being selected for too fast to reach the payoff.

I am not sure whether it is significant, but the requirement can be widened to require only compatible interests and interoperability rather than shared interests and architecture.

If a blog poster used a language uncommon to the reader, then the reader could still try to tease out meanings in the text without coordination with the writer. One could also assign meanings additional or adjacent to the meaning the sender intends to convey. If you ask for a water bottle, you are probably thirsty. So the fact that you are signalling something can be a signal for another thing.