I'm not sure it's at all clear that putting down the rebellion is the suboptimal course for Macholand. As you said, making themselves appear strong and sovereign may have important consequences for international standing, ongoing diplomatic relations with other powers, etc. This wasn't adequately justified, and I think the argument strongly hinges on it. The moment you said "maybe the international community doesn't really care what Macholand does here," my assessment of the situation changed. No effective bully bothers to deal with a troublemaker that "is not worth the effort."
You're right. The decision to put down the rebellion might indeed be the right one. My goal is not to say what the correct decision is, but to point out that making the decision purely on the basis of the semiotics of the situation is fallacious.
In other words, it is at least plausible that the cost of putting down the rebellion exceeds the benefit of increased respect in international diplomacy. The right way to make the judgement is to weigh these costs against the benefits. But often, people, institutions, and countries make decisions based purely on the symbolic meaning of their actions, without explicitly accounting for whether those symbolic acts have consequential backing.
Oh, it wasn't a criticism of the underlying idea, just feedback that the example wasn't effective for its intended illustrative purpose. And thank you for the "semiotic fallacy" idea; I've already incorporated it into my lexicon.
Call this kind of reasoning the semiotic fallacy: Thinking about the semiotics of possible actions without estimating the consequences of the semiotics.
But you could equally well write a post on the "anti-semiotic fallacy" where you only think about the immediate and obvious consequences of an action, and not about the signals it sends.
I think rationalists are much more susceptible to the anti-semiotic fallacy in our personal lives, and also, to an extent, when thinking about global or local politics and economics.
For example, I suspect that I suffered a lot of bullying at school for exactly the reason given in this post: being keen to avoid conflict in early encounters (among other factors).
This was a useful article, and it's nice to know the proper word for it. Let me see if I can add to it slightly.
Maybe a prisoner is on death row, and if they run away from the bully they are unlikely to suffer the long-run consequences of losing face, since they'll be dead soon anyway. However, even knowing this, they may still decide to spend their last day on earth nursing bruises, because they value the defiance itself far more than any pain that could be inflicted on them. Perhaps they'd even rather die fighting.
It looks like you don't reflectively endorse actions taken for explicitly semiotic reasons, and lean toward more pure consequentialism. Based only on what you've said, semiotic actions aren't fallacious when they yield outside benefits in the long run, but are fallacious when they don't lead to other good things. (Because you treat semiotic acts as only instrumentally valuable, rather than as terminal values.)
However, it seems likely that some semiotic acts can be good in and of themselves. That is, we reflectively endorse them, rather than just doing them because evolution gave us an impulse to signal which we have a hard time fighting. Semiotic impulse is certainly a human universal, and therefore a part of our current utility function, and it seems plausible that it will survive intact in some form even after more careful examination of our values.
It seems like the sorts of things we do for explicitly symbolic reasons are more likely to fall into this category than normal subconscious signaling. If we didn't endorse it to some degree, we'd just make sure not to be conscious of doing it, and then keep doing it anyway. For us to be aware that we're doing it, it can't conflict too much with our positive self-image, or societal values, or anything like that.
Of course, just because we naively support a semiotic act explicitly doesn't mean we still will after closer examination. Maybe we think engagement rings are a touching form of costly signaling at first, but once we understand more about the signaling dynamics driving such things, we decide that conspicuous displays of consumption make society far worse off. You may then decide not to feed Moloch, and try to lessen the keeping-up-with-the-Joneses effect.
Personally, I'm rather a fan of the Apollo program, and the idea that long after humanity has killed itself off, the Voyager probe may still survive drifting among the stars, with our last surviving words inscribed in gold.
I think that this fallacy, as defined above, is not quite coherent. Let me make my argument with the help of some examples of what appears to be semiotic reasoning from more recent times.
It's not clear to me that the symbolism of an act can be explicitly stated while, at the same time, the consequences of that symbolic act go unconsidered.
What does it actually mean for the symbolic nature of an act to be explicitly acknowledged? Is it the belief that an action A represents, but is not evidence for, a statement S? Or is it that A both represents and is evidence for S? If we explicitly believe the latter, such as believing A causes S to come about or to become more likely, then symbolic acts have consequential intentions. If it is only the former, then doing A might only be useful if we believed someone else thought it was evidence for S.
For example, rioters and protesters believe their actions convey how unhappy they are and have a chance of bringing their message to more and more people. In this case, A is protesting, while S could be "our wants should be met." A may not be real evidence for S, but it might convince other people that S is true. S could also be "our wants actually being realized." In this case, the protesters may believe S will be causally influenced by A. They may also believe that other people becoming convinced of S (that the protesters are making progress, getting attention, and influencing people) may bring more people to their cause. I think largely the same reasoning could be applied to the other examples.
More generally, let's say group A is considering taking an action B that has symbolic meaning (it symbolizes the statement C). Why would they consider taking action B? I can see only the following reasons: they want C to be true and believe B is actual evidence for C (it may cause C to become true), or they want others to believe C is true and believe that the intended observers of the act will interpret B as evidence for C.
One of the examples given - that democracy is symbolic of egalitarianism, therefore we should have democracy - would be extremely silly if we actually believed democracy didn't bring about egalitarianism and that it didn't convince anyone we were more egalitarian.
So it seems like the fallacy above refers not to the consequences of a symbolic act going unconsidered, but rather to incorrect inferences being made about those consequences. If so, it could be due to other failures in human reasoning, and may not be a distinct fallacy on its own.
Love example 2. Maybe there is a name for this already, but you could generalize the semiotic fallacy to arguments where there is an appeal to any motivating idea (whether of a semiotic nature or not) that is exceptionally hard to evaluate from a consequentialist perspective. Example: in my experience, among mathematicians who attempt to justify their work (at least in theoretical computer science, though I'd guess it's the same in other areas), most end up appealing to the idea of unforeseen connections or uses in the future.
If they appeal to unforeseen connections in the future, then at least one could plausibly reason consequentially for or against it. E.g., you could ask whether the results would remain undiscovered if they did not discover them. Or you could estimate, from the historical record, the probability that a given paper turns out to have deep connections down the road; estimate the value of those connections; and then ask whether the expected utility is really significantly increased by funding more work.
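To make that concrete, here is a minimal back-of-the-envelope sketch in Python. Every probability and value in it is a made-up placeholder of mine, not an estimate drawn from any actual historical record; the point is only that the question can, in principle, be posed in expected-utility terms.

```python
# Back-of-the-envelope expected-utility check for funding one more paper.
# Every number below is an illustrative placeholder, not a real estimate.

p_deep_connection = 0.01    # chance a given paper later yields a deep, useful connection
value_if_connection = 5000  # payoff (arbitrary units) if such a connection materializes
p_found_anyway = 0.7        # chance someone else would have found the result regardless
cost_of_funding = 50        # cost (same units) of funding the work

# Only the counterfactual part of the value counts: the connection has to both
# exist and not have been discovered by someone else anyway.
expected_benefit = p_deep_connection * (1 - p_found_anyway) * value_if_connection
net_expected_utility = expected_benefit - cost_of_funding

print(f"expected benefit:     {expected_benefit:.1f}")
print(f"net expected utility: {net_expected_utility:.1f}")
```

On these made-up numbers the net expected utility comes out negative, and better-researched numbers could flip the sign either way; the contrast is with the purely symbolic justification below.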
A semiotic-type fallacy occurs when they simply say that we do mathematics because it symbolizes human actualization.
(Sometimes they might say they do mathematics because it is intrinsically worthwhile. That is true. But then the relevant question is whether it is worth funding using public money.)
Thanks for pointing out Hanson's point about how the semiotic fallacy is an instance of explicit signaling. The whole thing became much clearer once I read that part.
Interesting. Clearly your prison and Macholand examples have a game-theoretic structure, where the value of your actions is partly influenced by what they signal to the other players about your dispositions. It looks a bit like there is a heuristic that helps people choose the option with advantageous signalling value, but they also apply it in cases that don't have the iterated game structure required for this to make sense, such as, in particular, example 2. This is essentially a different way of phrasing what I take you to be saying.
I'm not sure that this is a fallacy per se. As presented it sounds more like a heuristic than an actual error in decision making, although to be fair I suppose we talk about the sunk cost "fallacy" even in cases where sunk costs are information you should act on, like when forced to reprioritize work that has switching costs.
I wonder how the examples given interact with the sorts of decision theories popular here. TDT, for example, is explicitly designed to be context-independent ("make this decision as if you were deciding for all circumstances"). That optimizes for success in the prisoner's dilemma, but appears to be detrimental here. The correct decision is to fight HARD the first time, but be cautiously guarded thereafter. However, TDT instructs you to (literally) cut off consideration of this sort of surrounding context. You could hard-wire as input whether it is the first time encountering the situation, but that seems more than a little arbitrary and forced...
Nah, the semiotics of the action really are things which actually happen (physically located in the minds of observers) and which are caused by the action. Decision theories will automatically account for them just like any other effect of an action.
TDT doesn't ignore the surrounding context; it says "make this decision as if you were deciding for all circumstances which are exactly the same as this one."
That just shifts the problem without resolving it, though: how do you decide what makes the circumstances the same or different?
Acknowledgement: This idea is essentially the same as something mentioned in a podcast where Julia Galef interviews Jason Brennan.
You are in a prison. You don't really know how to fight and you don't have very many allies yet. A prison bully comes up to you and threatens you. You have two options: (1) Stand up to the bully and fight. If you do this, you will get hurt, but you will save face. (2) Try to run away. You might get hurt less badly, but you will lose face.
What should you do?
From reading accounts of former prisoners and from watching realistic movies and TV shows, it seems like (1) is the better option. The reason is that the semiotics, or symbolic meaning, of running away has bad consequences down the road: if you run away, you will be seen as weak, and therefore you will be picked on more often, causing more damage over time.
This is a case where focusing on the semiotics of the action is the right decision, because it is underwritten by future consequences.
But now consider a different situation. Suppose a country, call it Macholand, controls some tiny island far away from its mainland. Macholand has a hard time governing the island, and the people on the island don't much like being ruled by Macholand. Suppose, one fine day, the people of the island declare independence from Macholand. Macholand has two options: (1) send the military over and put down the rebellion; or (2) allow the island to take its own course.
From a semiotic standpoint, (1) is probably better: it signals that Macholand is a strong and powerful country. But from a consequential standpoint, it is at least plausible that (2) is the better option. Macholand saves money and manpower by not having to govern the tiny island; the people on the island are happier being self-governing; and maybe the international community doesn't really care what Macholand does here.
This is a case where focusing on the semiotics can lead to suboptimal outcomes.
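One way to see the difference between the two cases is to put rough numbers on them. The sketch below is a toy model with entirely made-up payoffs (they are mine, not anything implied by the examples above): in the prison case the reputational cost of backing down recurs over many future encounters, while in the Macholand case the reputational term is assumed to be close to zero.

```python
# Toy comparison of the "semiotic" option vs the "back down" option.
# All payoffs are made up; the point is only that the signalling term
# has to be estimated and weighed, not assumed to be decisive.

def total_cost(immediate_cost, reputational_cost_per_encounter, future_encounters):
    """What an option costs now, plus what the signal it sends costs later."""
    return immediate_cost + reputational_cost_per_encounter * future_encounters

# Prison: losing face gets punished again and again in future encounters.
fight = total_cost(immediate_cost=10, reputational_cost_per_encounter=0, future_encounters=100)
run = total_cost(immediate_cost=3, reputational_cost_per_encounter=2, future_encounters=100)
print("prison:    fight =", fight, "| run =", run)  # fighting is cheaper despite the bruises

# Macholand: assume the international community barely cares, so the
# reputational term is tiny even over a long horizon.
put_down = total_cost(immediate_cost=50, reputational_cost_per_encounter=0, future_encounters=100)
let_go = total_cost(immediate_cost=5, reputational_cost_per_encounter=0.1, future_encounters=100)
print("macholand: put_down =", put_down, "| let_go =", let_go)  # letting go is cheaper
```

On these numbers the signalling term dominates in one case and is negligible in the other; the fallacy defined next is skipping the step of estimating that term at all.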
Call this kind of reasoning the semiotic fallacy: Thinking about the semiotics of possible actions without estimating the consequences of the semiotics.
I think the semiotic fallacy is widespread in human reasoning. Here are a few examples:
Two comments are in order: