[Epistemic status: shortly after writing this post, I thought I'd regret it. It was not rated very highly by LW karma, and on outside view, it's the sort of rant I often don't endorse later on. But re-reading it after a few weeks, I think it holds up. I still endorse everything I've written here, with the exception of my suggestion that "virtue signalling" is an appropriate replacement for what most people mean by "signalling".]
I still feel a strong sympathy for the post You Can't Signal to Rubes, which called out LessWrong for using the word "signalling" incorrectly. That post was heavily, and rightly, downvoted because it also got the definition wrong. :( But it had a point!
At the time of writing, the current definition of signalling on the LessWrong tag is:
Signaling is behavior whose main purpose is to demonstrate to others that you possess some desirable trait. For example, a bird performing an impressive mating display signals that it is healthy and has good genes.
I'm not even sure I should correct it, because this does seem to summarize the LessWrong consensus on what signalling means. But we already have a term for signalling desirable properties about yourself: virtue signalling! Maybe you'll object that "virtue signalling" isn't quite right. Ok. But, could you find another word? I would prefer for "signalling" to point to the subject of signalling theory, which I understand to be the game theory of communication (often focusing on evolutionary game theory).
Scott Alexander's What Is Signaling, Really? seems to get most things right:
In conclusion, a signal is a method of conveying information among not-necessarily-trustworthy parties by performing an action which is more likely or less costly if the information is true than if it is not true. Because signals are often costly, they can sometimes lead to a depressing waste of resources, but in other cases they may be the only way to believably convey important information.
Although all of his examples are about signalling self-properties, he never stipulates that, instead always using the more general conveying-information definition. He also avoids the "signalling is automatically bad" pitfall. Instead, he explains that signalling is often unfortunately costly, but is nonetheless a very useful tool.
However, reading it, I'm not sure whether he means to contrast signalling with "mere assertion", or whether he considers assertion to be a kind of signalling:
Life frequently throws us into situations where we want to convince other people of something. If we are employees, we want to convince bosses we are skillful, honest, and hard-working. If we run the company, we want to convince customers we have superior products. If we are on the dating scene, we want to show potential mates that we are charming, funny, wealthy, interesting, you name it.
In some of these cases, mere assertion goes a long way.[...]
In other cases, mere assertion doesn't work.
[...]
I'll charitably assume that he meant both cases to be types of signalling. But for anyone who was misled by the wording: signalling is the theory of conveying information! Mere assertions, if they carry information, count as signalling!
So, to summarize the points I've raised so far:
- Sometimes people talk like signalling is just the bad thing (the dishonest or not-maximally-honest practice of making yourself look good).
- Relatedly, people tend to exclude "mere assertion" from signalling, making signalling and literal use of language mutually exclusive.
- Often people restrict signalling to signalling facts about yourself. (In fact, often restricted to status signalling.)
To be honest, I'm not even sure academic uses of the term "signalling" avoid the "mistakes" I'm pointing at! The Wikipedia article Signalling (economics) currently begins with the following:
In contract theory, signalling (or signaling; see spelling differences) is the idea that one party (termed the agent) credibly conveys some information about itself to another party (the principal).
[Note that I've defaulted to the Wikipedia spelling of signalling; spelling on LessWrong seems mixed.]
On the other hand, the page on Signalling Theory (a page which is very biology-focused, despite the broader applicability of the theory) includes examples such as alarm calls (e.g., birds warning each other that there is a snake in the grass). These signals cannot be interpreted as facts about the signaller.
Perhaps it is a quirk of economics which restricts the term "signalling" to hidden information about the agent, and LessWrong inherited this restricted sense via Robin Hanson?
Signaling theory as a term in economics or game theory usually refers to the analysis of situations where an agent takes an action that transmits information that some other agent (or rather, the "principal") does not have, and which influences the principal's behavior. The agent is also often called the sender and the principal the receiver of the signal.
Often, this is information about the agent, but sometimes it is information about something else, so we can generally just say it is information about "the state of the world" or "the state of nature". Usually, signaling theory is concerned with situations in which the information cannot be transmitted by "mere assertion" (or "cheap talk", see below) but only by a costly action, and the cost of transmitting information about certain states of the world has to differ from the cost of transmitting information about other states in certain ways; e.g., in Spence's job-market signaling model, low-ability workers must have a higher cost of attaining education than high-ability workers, otherwise low-ability workers would also get educated and the signal would be worthless.

(Note that in these models, the agent moves first and the principal second, but the principal still offers a contract based on the received information. If the principal moves first and offers a contract to the informed agent, we are in contract theory. Signaling theory and contract theory together are sometimes referred to as "information economics", "economics of asymmetric information", or the "theory of incentives".)
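To make the Spence condition concrete, here is a minimal numeric sketch. All the numbers (wages and education costs) are hypothetical, chosen only so that the separating condition holds; the point is just that the signal is credible precisely because the low-ability type's cost exceeds the wage gain while the high-ability type's does not:

```python
# Toy Spence job-market signalling sketch (hypothetical numbers).
# The employer pays a higher wage to educated workers; education is
# informative only because its cost differs by worker type.

w_edu, w_no = 100.0, 60.0   # wage with / without the education signal
c_high = 20.0               # cost of education for a high-ability worker
c_low = 55.0                # cost of education for a low-ability worker

def gets_educated(cost):
    """A worker acquires the signal iff the wage gain exceeds the cost."""
    return w_edu - cost > w_no

# Separating equilibrium: only the high-ability type signals.
print(gets_educated(c_high))  # True  -> high type gets educated
print(gets_educated(c_low))   # False -> low type does not, so education is credible
```

If instead `c_low` were, say, 30.0, both types would get educated and the signal would reveal nothing, which is exactly the "otherwise low-ability workers would also do it" failure described above.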
Situations in which there are no such costly signals are usually called "cheap talk" models. Of course, if there is no conflict of interest, the informed party can always just transfer the information (and there would also be no need for costly signals then). But suppose there is a conflict of interest between the informed sender and the uninformed receiver. Then which kind of information is transmittable? The seminal paper is by Crawford and Sobel. They show that, basically, very fine-grained information transmission does not work when there is a conflict of interest.
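A toy illustration of the cheap-talk failure mode (this is not the Crawford-Sobel model itself, just the extreme case of it, with a hypothetical setup): when the sender's preferred belief does not depend on the true state, the costless message carries no information, so the receiver rationally ignores it:

```python
# Toy cheap-talk sketch (hypothetical setup; the extreme "babbling" case,
# not Crawford & Sobel's partition equilibrium).
# Talk is free, and the sender always benefits from the receiver
# believing "high" -- so the message never depends on the true state.

def sender_message(true_state):
    """With a total conflict of interest, the sender's best message is the
    same in every state, so observing it reveals nothing."""
    return "high"  # sent regardless of true_state

# The receiver, anticipating this, learns nothing from the message:
messages = {state: sender_message(state) for state in ["low", "high"]}
print(messages)  # every state maps to the same message
```

Crawford and Sobel's actual result is subtler: with a *partial* conflict of interest, some coarse information can still be transmitted (the state space gets partitioned into intervals), but fine-grained transmission breaks down, as the quoted paragraph says.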
Finally, if a sender can send costless credible signals but can strategically choose which ones, we are in the domain of "Bayesian persuasion" models.
(If you can send signals that are costless and there is no conflict of interest, then we are maybe back in basic statistical theory if the signals are noisy, but I guess there is no room for an economic analysis.)