Less Wrong is a community blog devoted to refining the art of human rationality. Please visit our About page for more information.

In response to The Semiotic Fallacy
Comment author: DustinWehr 21 February 2017 02:23:11PM 1 point [-]

Love example 2. Maybe there is a name for this already, but you could generalize the semiotic fallacy to arguments that appeal to any motivating idea (whether of a semiotic nature or not) that is exceptionally hard to evaluate from a consequentialist perspective. Example: in my experience, among mathematicians (at least in theoretical computer science, though I'd guess it's the same in other areas) who attempt to justify their work, most end up appealing to the idea of unforeseen connections/usage in the future.

Comment author: Stabilizer 21 February 2017 11:53:56PM 1 point [-]

If they appeal to unforeseen connections in the future, then at least one could plausibly reason consequentially for or against it. E.g., you could ask whether the results they discover would remain undiscovered if they didn't discover them. Or you could estimate, from the historical record, the probability that a given paper has deep connections down the road; calculate the value of these connections; and then ask whether the expected utility is really significantly increased by funding more work.
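The back-of-the-envelope calculation sketched here can be made concrete. All numbers below are hypothetical, chosen purely to illustrate the structure of the expected-utility comparison, not to estimate anything real:

```python
# Hypothetical expected-utility sketch for funding a mathematics paper.
# Every number is invented for illustration; only the structure of the
# calculation matters.

p_deep_connection = 0.01        # chance, from the historical record, that a
                                # paper yields deep future connections
value_if_connected = 1_000_000  # payoff if such a connection materializes
value_baseline = 1_000          # ordinary value of an average paper
cost_of_funding = 50_000        # cost of funding the work

# Expected value mixes the rare high-payoff outcome with the common one.
expected_value = (p_deep_connection * value_if_connected
                  + (1 - p_deep_connection) * value_baseline)
net_benefit = expected_value - cost_of_funding
print(f"Expected value: {expected_value:.0f}, net benefit: {net_benefit:.0f}")
```

With these made-up numbers the expected value (10,990) falls short of the funding cost, so the consequentialist case would have to rest on different estimates; the point is only that the question becomes answerable once stated this way.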

A semiotic-type fallacy occurs when they simply say that we do mathematics because it symbolizes human actualization.

(Sometimes they might say they do mathematics because it is intrinsically worthwhile. That is true. But then the relevant question is whether it is worth funding using public money.)

In response to The Semiotic Fallacy
Comment author: username2 21 February 2017 07:36:18AM 4 points [-]

I'm not sure it is at all clear that Macholand not putting down the rebellion is the suboptimal course. As you said, making themselves appear strong and sovereign may have important consequences for international relations, ongoing diplomatic relations with other powers, etc. This wasn't adequately justified, and I think the argument strongly hinges on it. Your aside that "maybe the international community doesn't really care what Macholand does here" really changed my assessment of the situation. No effective bully bothers to deal with a troublemaker who "is not worth the effort."

Comment author: Stabilizer 21 February 2017 07:47:21AM 2 points [-]

You're right. Making the decision to put down the rebellion might indeed be the right one. My goal is not to say what the correct decision is, but instead to point out that making the decision purely on the semiotics of the situation is fallacious.

In other words, it is at least plausible that the cost of putting down the rebellion is more than the benefit of increased respect in international diplomacy. The right way to make the judgement is to weigh these costs against the benefits. But often, people and institutions and countries make decisions based purely on the symbolic meaning of their actions without explicitly accounting for whether these symbolic acts have consequential backing.

The Semiotic Fallacy

19 Stabilizer 21 February 2017 04:50AM

Acknowledgement: This idea is essentially the same as something mentioned in a podcast where Julia Galef interviews Jason Brennan.

You are in a prison. You don't really know how to fight and you don't have very many allies yet. A prison bully comes up to you and threatens you. You have two options: (1) Stand up to the bully and fight. If you do this, you will get hurt, but you will save face. (2) Try to run away. You might get hurt less badly, but you will lose face.

What should you do?

From reading accounts of former prisoners and from watching realistic movies and TV shows, it seems like (1) is the better option. The reason is that the semiotics—or the symbolic meaning—of running away has bad consequences down the road. If you run away, you will be seen as weak, and therefore you will be picked on more often, suffering more damage down the road.

This is a case where focusing on the semiotics of the action is the right decision, because it is underwritten by future consequences.

But consider now a different situation. Suppose a country, call it Macholand, controls some tiny island far away from its mainland. Macholand has a hard time governing the island and the people on the island don't quite like being ruled by Macholand. Suppose, one fine day, the people of the island declare independence from Macholand. Macholand has two options: (1) Send the military over and put down the rebellion; or (2) Allow the island to take its own course.

From a semiotic standpoint, (1) is probably better. It signals that Macholand is a strong and powerful country. But from a consequentialist standpoint, it is at least plausible that (2) is the better option. Macholand saves money and manpower by not having to govern that tiny island; the people on the island are happier being self-governing; and maybe the international community doesn't really care what Macholand does here.

This is a case where focusing on the semiotics can lead to suboptimal outcomes. 

Call this kind of reasoning the semiotic fallacy: Thinking about the semiotics of possible actions without estimating the consequences of the semiotics.

I think the semiotic fallacy is widespread in human reasoning. Here are a few examples:

  1. People argue that democracy is good because it symbolizes egalitarianism. (This is the example used in the podcast interview.)
  2. People argue that we should build large particle accelerators because it symbolizes human achievement.
  3. People argue that we shouldn't build a wall on the southern border because it symbolizes division.
  4. People argue that we should build a wall on the southern border because it symbolizes national integrity. 

Two comments are in order:

  1. The semiotic fallacy is a special case of errors in reasoning and judgement caused by signaling behaviors (à la Robin Hanson). The distinctive feature of the semiotic fallacy is that the semiotics are explicitly stated during reasoning. Signaling-type errors are often subconscious: e.g., if we spend a lot of money on our parents' medical care, we might be doing it for symbolic purposes (i.e., signaling), but we wouldn't say explicitly that that's why we are doing it. In the semiotic fallacy, on the other hand, we explicitly acknowledge that the reason we do something is its symbolism.
  2. As with all fallacies, the existence of the fallacy doesn't necessarily mean the final conclusion is wrong. It could be that the semiotics are underwritten by the consequences. Or the conclusion could be true for completely orthogonal reasons. The fallacy occurs when we ignore, in our reasoning during choice, the need for consequential undergirding of symbolic acts.
Comment author: satt 11 February 2017 03:14:52PM 0 points [-]

IEPB: "People ought to do X" is your preference because you are assuming "People ought to do X" is a moral fact. It's a different issue whether your assumption is true or false, or justified or unjustified, but the assumption is being made nevertheless.

If my mental model of moral philosophers is correct, this contravenes how moral philosophers usually define/use the phrase "moral fact". Moral facts are supposed to (somehow) inhere in the outside world in a mind-independent way, so the origin of my "People ought to do X" assumption does matter. Because my ultimate justification of such an assumption would be my own preferences (whether or not alloyed with empirical claims about the outside world), I couldn't legitimately call "People ought to do X" a moral fact, as "moral fact" is typically understood.

Consequently I think this line of rebuttal would only be open to Boghossian if he had an idiosyncratic definition of "moral fact". But it is possible that our disagreement reduces to a disagreement over how to define "moral facts".

For example, when you exhort IEPB to not make mediocre philosophy arguments, and say that that's your preference, it's because you are assuming that the claim, "philosophy professors ought not to make mediocre philosophy arguments", is in fact, true.

Introspecting, this feels like a reversal of causality. My own internal perception is that the preference motivates the claim rather than vice versa. (Not that introspection is necessarily reliable evidence here!)

Comment author: Stabilizer 11 February 2017 07:59:18PM 1 point [-]

I agree that any disagreement might come down to what we mean by moral claims.

I don't know Boghossian's own particular commitments, but baseline moral realism is a fairly weak claim without any metaphysics of where these facts come from. I quote from the Stanford Encyclopedia:

Moral realism is not a particular substantive moral view nor does it carry a distinctive metaphysical commitment over and above the commitment that comes with thinking moral claims can be true or false and some are true.

A simple interpretation that I can think of: when you say that you prefer that people do X, typically, you also prefer that other people prefer that people do X. This you could take as sufficient to say "People ought to do X". (This has the flavor of the Kantian categorical imperative. Essentially, I'm proposing a sufficient condition for something to be a moral claim, namely, that it be desired to be universalized. But I don't want to claim that this is a necessary condition.)

At any rate, whether the above definition stands or falls, you can see that it doesn't have any metaphysical commitment to some free-floating, human-independent (to be distinguished from mind-independent) facts embedded in the fabric of the universe. Hopefully, there are other ways of parsing moral claims so that the metaphysics isn't too demanding.

Comment author: BiasedBayes 10 February 2017 11:50:41AM 1 point [-]

You are right, sir. I think we might have different opinions about the way/angle to approach the issue of the right normative moral code. If I interpret it right, I would be sceptical about the author's idea "to employ our usual mix of argument, intuition and experience" in light of what we know about the limits and pitfalls of descriptive moral reasoning.

Comment author: Stabilizer 10 February 2017 06:01:51PM 0 points [-]

Right. Unfortunately, we don't really have any other means of obtaining moral knowledge other than via argument, intuition, and experience. Perhaps your point is that we should emphasize intuition less and argument+experience more.

Comment author: satt 07 February 2017 11:06:54PM *  0 points [-]

Like BiasedBayes, I read this article as putting forward a false dichotomy. Unlike BiasedBayes, I don't think that "wellbeing" or "science" have much to do with why I'm unconvinced by the article.

To me the third alternative to the dichotomy is, unsurprisingly, my own view: moral facts don't exist, and "right" & "wrong" are shorthand for behaviour of which I strongly approve or disapprove. My approvals & disapprovals can't be said to be moral facts, because they depend solely on my state of mind, but I'm nonetheless not obliged to become a nihilist because my approvals & disapprovals carry normative import to me, so my uses of "right" & "wrong" are not just descriptive as far as I'm concerned.

I expect Boghossian has a rebuttal, but I can't infer from the article what it would be. I can't imagine a conversation between the two of us that doesn't go in circles or leave me with the last word.

Me: Moral facts don't real. And yet, no logic compels me to be a nihilist. Checkmate, perfessor!

Imagined extrapolation of Paul Boghossian: But if there are no moral facts, any uses of ideas like "right", "wrong", or "should" just become descriptions of what someone thinks or feels. This leaves you bereft of normative vocabulary and hence a nihilist.

Me: Uses of "right", "wrong", or "should" are descriptions of how someone thinks or feels, at least when I use them. Specifically, they're descriptions of how I think or feel. But they aren't just that.

IEPB: So what's that extra normative component? Where does it come from?

Me: Well, it comes from me. I mentally promote certain actions (& non-actions) to the level of obligations or duties, or at least things which should be encouraged, whether or not I (or others) actually fulfil those obligations or duties.

IEPB: This is reminiscent of the example I gave in my article of etiquette, which derives its normative force from the hidden moral fact (absolute norm) that "we ought not, other things being equal, offend our hosts".

Me: If that analogy works, there must be some moral fact hidden in my mental-promotion-to-duty conception of right & wrong. Suppose for a moment that that's so. Start with the observation that my conception of right is basically "that is the right thing to do, in that it is something I approve of so strongly that I regard it as an obligation, or something approaching an obligation, binding on me/you". Digging into that, what's the underlying "moral fact" there? Presumably it's something like "we ought to do things that satt strongly approves of, and not do things that satt strongly disapproves of". But that's obviously not a moral fact, because it's obviously partial and dependent on one specific person's state of mind.

IEPB: Which means it's not normative, it's just a description of someone's mind. So you have no basis for normative judgements. You're a nihilist in denial.

Me: If I'm incapable of making normative judgements, how do you explain my judgement that you shouldn't make mediocre philosophical arguments, because I strongly disapprove of them?

IEPB: Har har. That's not a normative judgement. That's just a description of your state of mind.

Me: Not "just"! It's an assertion that you're obliged to not make mediocre philosophical arguments!

IEPB: Obliged in what way?

Me: Obliged in that I'm telling you you're obliged!

IEPB: That's not an obligation, that's just you expressing your preferences.

Me: No, because there's an explicit extra component to what I'm expressing. Your "just"ing would be correct if I were saying, for example, that I don't like chocolate. But I'm not merely passively observing that I don't approve of mediocre philosophical arguments. I'm telling you to desist from making them.

IEPB: I don't disagree that you're telling me that. Nor would any rational listener to this conversation. But "satt is telling me to desist" is "just a descriptive remark that carries no normative import whatsoever", quoting my article, which you did read, right?

Me: As a matter of fact I did. But like I say, I'm not (just) making the bland descriptive claim which anyone with ears would agree with. I'm carrying out the first-order action of commanding you, in the earnest hope that you will listen & obey, to refrain from an action.

IEPB: Big whoop. Anybody can give an order.

Me: That you're unmoved by my order doesn't make it any less normative. Compare a realm where we both agree that there are facts: empirical investigation of reality. If I told you that gravity made things fall downwards, that would still have force (lol) as a positive, empirical claim, whether you agreed or not. Likewise, when I tell you to knock off some behaviour, that still has force as a normative claim, whether you agree or not.

IEPB: Nuh uh. The two cases are disanalogous. In the gravity case I can only disagree with you on pain of being objectively incorrect. In the knock-it-off case I can disagree with you however I please.

Me: No, you disagree on pain of being quasi-objectively wrong, according to my standard.

IEPB: Oh, come on. Quasi-objectively? By your standard? Really?

Me: Yes; any observer would agree that you'd violated my standard.

IEPB: But that's purely a descriptive claim!

Me: That's the descriptive component, and as a descriptive claim it's objectively correct. The normative claim is that your disagreement and violation mean you're in the wrong, as defined by my disapproval of your behaviour. And that normative claim is subjectively correct.


And at this point I have to break off this made-up conversation, because I don't see what new rebuttal Boghossian could/would give. Here endeth the philosopher fanfiction.

Edit, 4 days later: correct "normative important" misquotation to "normative import".

Comment author: Stabilizer 08 February 2017 12:20:03AM 0 points [-]

Actually, I don't know if you and Boghossian really disagree here. I think Boghossian is trying to argue that your normative preferences arise from your opinions about what the moral facts are. So I think he'd say:

IEPB: "People ought to do X" is your preference because you are assuming "People ought to do X" is a moral fact. It's a different issue whether your assumption is true or false, or justified or unjustified, but the assumption is being made nevertheless.

For example, when you exhort IEPB to not make mediocre philosophy arguments, and say that that's your preference, it's because you are assuming that the claim, "philosophy professors ought not to make mediocre philosophy arguments", is in fact, true.

Comment author: Stabilizer 06 February 2017 07:55:25PM 2 points [-]

True listening requires giving up the prerogative of your own mental model. You have to allow them to set the rules of engagement, no matter how bizarre, so that they let their guard down and realize you are not a threat, because you have no intention of blaming them for anything. The way they set these rules will reveal their assumptions and constraints, which thoughts and actions are open to them. If you can tell an authentic story that speaks to these assumptions, you can break through, because stories speak to emotions expressed in the body, which fortunately refuses to go along with even our most well-reasoned rationalizations.

The price of such effective action is we have to be willing to give up the petty payoffs we cherish in our arguments with each other: not only the blaming but the cynicism, the martyrdom, the self-righteous indignation, the outrage, the winning, the making others lose, the being right, the making others wrong.

And after all that, you may still fail to convince them to your point of view. Are you willing to not win in order to keep the conversation going?

Comment author: Stabilizer 04 February 2017 10:01:09PM *  2 points [-]

The intuitive standard for rational decision-making is carefully considering all available options and taking the best one. At first glance, computers look like the paragons of this approach, grinding their way through complex computations for as long as it takes to get perfect answers. But as we've seen, that is an outdated picture of what computers do: it's a luxury afforded by an easy problem. In the hard cases, the best algorithms are all about doing what makes the most sense in the least amount of time, which by no means involves giving careful consideration to every factor and pursuing every computation to the end. Life is just too complicated for that.

In almost every domain we've considered, we have seen how the more real-world factors we include—whether it's having incomplete information when interviewing job applicants, dealing with a changing world when trying to resolve the explore/exploit dilemma, or having certain tasks depend on others when we're trying to get things done—the more likely we are to end up in a situation where finding the perfect solution takes unreasonably long. And indeed, people are almost always confronting what computer science regards as the hard cases. Up against such hard cases, effective algorithms make assumptions, show a bias toward simpler solutions, trade off the costs of error against the costs of delay, and take chances.

These aren't the concessions we make when we can't be rational. They're what being rational means.

  • Brian Christian and Tom Griffiths, Algorithms to Live By
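The explore/exploit dilemma the excerpt mentions has simple algorithmic treatments that embody exactly this "bias toward simpler solutions, take chances" spirit. As an illustration (my own sketch, not from the book), here is a minimal epsilon-greedy strategy: mostly exploit the best-looking option, but occasionally explore a random one rather than computing anything more elaborate:

```python
import random

def epsilon_greedy(rewards_so_far, epsilon=0.1):
    """Pick an arm index: with probability epsilon explore a random arm,
    otherwise exploit the arm with the best average reward observed.

    rewards_so_far: list of lists, one list of observed rewards per arm.
    """
    if random.random() < epsilon:
        # Explore: take a chance on any arm, even an apparently bad one.
        return random.randrange(len(rewards_so_far))
    # Exploit: pick the arm with the highest observed average
    # (unobserved arms default to 0.0).
    averages = [sum(r) / len(r) if r else 0.0 for r in rewards_so_far]
    return max(range(len(averages)), key=averages.__getitem__)

# Example: arm 1 looks best so far, so with epsilon=0 we always pick it.
history = [[0.1], [0.9], [0.5]]
print(epsilon_greedy(history, epsilon=0.0))  # prints 1
```

The deliberate sloppiness (a fixed exploration rate, crude averages) is the point: it trades the cost of occasional error against the cost of deliberating forever.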
Comment author: Stabilizer 03 February 2017 08:42:48PM 0 points [-]

If the question, "Which interpretation of quantum mechanics is correct?" is posed to physicists, my guess is that the surprisingly popular opinion would be: the Everett interpretation, which in my opinion – and I consider myself a mild expert in the foundations of QM – is the correct one.

Comment author: Stabilizer 03 February 2017 04:09:33AM *  4 points [-]

In the United States, constructivist views of knowledge are closely linked to such progressive movements as post-colonialism and multiculturalism because they supply the philosophical resources with which to protect oppressed cultures from the charge of holding false or unjustified views.

Even on purely political grounds, however, it is difficult to understand how this could have come to seem a good application of constructivist thought: for if the powerful can’t criticize the oppressed, because the central epistemological categories are inexorably tied to particular perspectives, it also follows that the oppressed can’t criticize the powerful.
