Usually, I don't get offended at things that people say to me, because I can see at what points in their argument we differ, and what sort of counterargument I could make to them. I can't get mad at people for having beliefs I think are wrong, since I myself regularly have beliefs that I later realize were wrong. I can't get mad at the idea, either, since an idea is either right or wrong, and if it's wrong, I have the power to say why. And if it turns out I'm wrong, so be it: I'll adopt new, right beliefs. And so I never got offended about anything.

Until one day.

One day, I encountered a belief that should have been easy to refute. Or, rather, easy to dissect, to see whether there was anything wrong with it, and if there was, to formulate a counterargument. But for seemingly no reason at all, it frustrated me to great, great lengths. My experience was as follows:

I was asking the opinion of a socially progressive friend on what they feel are the founding axioms of social justice, because I was having trouble thinking of them on my own. (They can be derived from any set of fundamental axioms that govern morality, but I wanted something that you could specifically use to describe who is being oppressed, and why.) They seemed to be having trouble understanding what I was saying, and it was hard to get an opinion out of them. They also got angry at me for dismissing Tumblr as a legitimate source of social justice. But eventually we got to the heart of the matter, and I discovered a basic disconnect between us: they asked, "Wait, you're seriously applying a math thing to social justice?" And I pondered that for a moment and explained that it isn't restricted to math at all, and that an axiom in this context can be any belief on which you base your other beliefs. However, then the true problem came to light (after a comparison of me to misguided 18th-century philosophes): "Sorry if it offends you, I just don't think in general that you should apply this stuff to society. Like... no."

And that did it. For the rest of the day, I wreaked physical havoc, and emotionally alienated everyone I interacted with. I even seriously contemplated suicide. I wasn't angry at my friend in particular for having said that. For the first time, I was angry at an idea: that belief systems about certain things should not be internally consistent, should not follow logical rules. It was extremely difficult to construct an argument against, because all of my arguments had logically consistent bases, and were thus invalid in its face.

I'm glad that I encountered that belief, though, as I am with all beliefs, since I was able to resolve it in the end and make peace with it. I came to the following conclusions:

  1. In order to make a rationalist extremely aggravated, you can tell them that you don't think that belief structures should be internally logically consistent. (After 12-24 hours, they acquire lifetime immunity to this trick.)
  2. Belief structures do not necessarily have to be internally logically consistent. However, consistent systems are better, for the following reason: belief systems are used for deriving actions to take. Many actions that are oriented towards the same goal will make progress in accomplishing that goal, and making progress in accomplishing goals is desirable. An inconsistent belief system will generate actions that are oriented towards non-constant goals, and interfere destructively with each other, and not make much progress. A consistent belief system will generate many actions oriented towards the same goal, and so will make much progress. Therefore, granting those premises, having an internally consistent belief system is desirable! (A toy sketch of this interference argument follows the list.) Having reduced it to an epistemological problem (do people really desire progress? can actions actually accomplish things?), I now only have epistemological anarchism to deal with, which seems to work less well in practice than the scientific method, so I can ignore it.
  3. No matter how offended you are about something, thinking about it will still resolve the issue.
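
To picture the interference claim in point 2 concretely, here is a toy sketch, only an illustration and nothing rigorous, under the loose assumption that actions can be modeled as directions in a "goal space": let each action be a unit vector a_i, and let the goal be a unit vector g. Total progress towards g is

    P = \sum_{i=1}^{n} \langle a_i, g \rangle

If a consistent belief system makes every a_i point along g, then P = n. If an inconsistent system sends half the actions along g and half along -g, the terms cancel and P is roughly 0: destructive interference. Under this admittedly crude model, consistency buys linear progress and inconsistency buys almost none.
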
Does anyone have anything to add to this? Did I miss any sort of deeper reasons I could be using for this? Granted, my solution only works if you want to accomplish goals, and use your belief system to generate actions to accomplish goals, but I think that's fairly universal.

113 comments

"Sorry if it offends you, I just don't think in general that you should apply this stuff to society. Like... no."

I don't understand what "this stuff" refers to in this sentence and it is far from clear to me that your interpretation of what your friend said is correct.

I also don't think it's a good idea to take an axiomatic approach to something like social justice. This approach:

Edit: Also, a general comment. Suppose you think that the optimal algorithm for solving a problem is X. It does not follow that making your algorithm look more like X will make it a better algorithm. X may have many essential parts, and making your algorithm look more like X by imitating some but not all of its essential parts may make it much worse than it was initially. In fact, a reasonably efficient algorithm which is reasonably good at solving the problem may look nothing like X.

T...

4Error
This comment causes me unpleasant cognitive dissonance. I read the second party's statement as meaning something like "no, logical rigor is out of place in this subject, and that's how it should be." And I find that attitude, if not offensive, at least incredibly irritating and wrongheaded. And yet I recognize that your argument has merit and I may need to update. I state this not so much because I have something useful to say for or against it, but to force it out of my head so I can't pretend the conflict isn't there.
0Qiaochu_Yuan
See this comment for a steelmanning of what I think the friend's point of view is.
[-]Emile300

Reminds me of a comment by pjeby (holy cow, 100 upvotes!) in an old thread:

One of the things that I've noticed about this is that most people do not expect to understand things. For most people, the universe is a mysterious place filled with random events beyond their ability to comprehend or control. Think "guessing the teacher's password", but not just in school or knowledge, but about everything.

Such people have no problem with the idea of magic, because everything is magic to them, even science.

...

3Eugine_Nier
I had the opposite problem: for a while I divided the world (or at least mathematics) into two categories, stuff I understand and stuff I will understand later. It was a big shock when I realized that for most things this wasn't going to happen.
3ikrase
When you combine that with a mistrust for logically-consistent thinking that's burned them in the past, you get a MESS.
0byrnema
P̶j̶e̶b̶y̶ ̶a̶l̶s̶o̶ ̶h̶a̶s̶ ̶a̶ ̶r̶e̶a̶l̶l̶y̶ ̶g̶o̶o̶d̶ ̶p̶o̶s̶t̶ ̶a̶b̶o̶u̶t̶ ̶f̶i̶g̶u̶r̶i̶n̶g̶ ̶o̶u̶t̶ ̶w̶h̶y̶ ̶s̶o̶m̶e̶t̶h̶i̶n̶g̶ ̶o̶f̶f̶e̶n̶d̶s̶ ̶y̶o̶u̶.̶ ̶I̶'̶l̶l̶ ̶h̶u̶n̶t̶ ̶i̶t̶ ̶d̶o̶w̶n̶ ̶w̶h̶e̶n̶ ̶I̶ ̶g̶e̶t̶ ̶b̶a̶c̶k̶ ̶i̶f̶ ̶s̶o̶m̶e̶o̶n̶e̶ ̶e̶l̶s̶e̶ ̶d̶o̶e̶s̶n̶'̶t̶ ̶f̶i̶n̶d̶ ̶i̶t̶ ̶f̶i̶r̶s̶t̶.̶ (Perhaps Yvain, rather, and I can't find it.) Found it. Was harder to find because I remembered it as a post but actually it was a comment.
0pjeby
Out of curiosity, did you ever have occasion to use the advice in that comment, and if so, what was the result?
4byrnema
What I mainly took from your post was the need to identify the particular norm being violated each time I'm angry/offended. I've found (2 or 3 examples come to mind) that it really helps to do this, especially if the anger seems to keep simmering without progress. It does typically take a few tries to identify what I'm really upset about, but after I identify the reason, there is finally resolution because (i) I find that I finally agree with myself (the self-validation seems to be a very important step for moving on; I no longer feel a need to petulantly keep defending myself by protesting) and (ii) I usually find that my anger was a little bit misdirected or not appropriate for the context. In any case, I'm able to let it go. I'm often surprised how primitive the 'norm' is that I felt was violated. Typically for me it's a basic need for love and acceptance that isn't being met (which seems strange when I'm a grown, independent adult).

The most recent example is that I was offended/upset by a critical remark from a health technician who matter-of-factly told me I needed to do something differently. Of course there was the initial sting of being criticized, but I was disproportionately angry. At first I thought I was upset because she "wasn't being professional" about other peripheral things, which is the first argument that came to mind because that's what people tend to say, and also mentally attacking her relatively lower level of education compared to the doctor was distracting me from identifying the real reason. It took a while, but I discovered I was upset because I wanted her to be loving and supportive, because I've been putting a lot of effort into this aspect of my health. As soon as I realized I was looking for positive feedback for my efforts I (i) agreed with myself, it is true I ought to receive positive feedback for my efforts if I'm going to succeed in this and (ii) realized my anger was misdirected; it would have been nice if there was some support com...
2pjeby
Thanks for the reply. It's not that strange at all, actually. It's quite common for us to not learn how to take care of our own emotional needs as children. And in my case at least, it's been taking me a great deal of study to learn how to do it now. There are quite a lot of non-intuitive things about it, including the part where getting other people to love and accept you doesn't actually help, unless you're trying to use it as an example.

To put it another way, we don't have emotional problems because we didn't get "enough" love as kids, but because we didn't get enough examples of how to treat ourselves in a loving way, e.g. to approach our own thoughts and feelings with kindness instead of pushing them away or invalidating them (or whatever else we got as an example). Or to put it yet another way, this is a matter of "emotional intelligence" being far more about nurture than nature. But now I'm babbling.

Anyway, from the rest of what you describe, you sound like you've actually got better skills than me in the area of the actual "taking care of your needs" part, so I wouldn't worry about it. I'm glad the specific tip about norm violations helped. Those are one of those things that our brains seem to do just out of conscious awareness, like "lost purposes", that you sort of have to explicitly ask yourself in order to do anything about the automatic reaction.

It also helps to get rid of the norm or expectation itself, if it's not a reasonable one. For example, expecting all of your colleagues to always treat you with love and acceptance might not be realistic, in which case "upgrading an addiction to a preference" (replacing the shoulds with like/prefer statements) can be helpful in preventing the need to keep running round the "get offended, figure out what's happening, address the specifics" loop every single time. If you stop expecting and start preferring, the anger or sense of offense doesn't arise in the first place.
0prase
Out of curiosity, how did you make the strikethrough line which extends far to the right outside the comment box?
0byrnema
I used the tool on this webpage. It appears it added underscores between each letter... but the underscores are actually part of the font, I think. e x a m p l e (with spaces)

Belief structures do not necessarily have to be internally logically consistent. However, consistent systems are better, for the following reason: belief systems are used for deriving actions to take.

I have a working hypothesis that most evil (from otherwise well-intentioned people) comes from forcing a very complex, context-dependent moral system into one that is "consistent" (i.e., defined by necessarily overly simplified rules that are global rather than context-dependent) and then committing to that system even in doubtful cases, since it seems better that it be consistent.

(There's no problem with looking for consistent rules or wanting consistent rules, the problem is settling on a system too early and applying or acting on insufficient, inadequate rules.)

Eliezer has written that religion can be an 'off-switch' for intuitively knowing what is moral ... religion is the common example of any ideology that a person can allow to trump their intuition in deciding how to act. My pet example is, while I generally approve of the values of the religion I was brought up with, you can always find specific contexts (it's not too difficult, actually) where their decided rul...

A comment from another perspective. To be blunt, I don't think you understand why you got upset. (I'm not trying to single you out here; I also frequently don't understand why I am upset.) Your analysis of the situation focuses too much on the semantic content of the conversation and ignores a whole host of other potentially relevant factors, e.g. your blood sugar, your friend's body language, your friend's tone of voice, what other things happened that day that might have upset you, etc.

My current understanding of the way emotions work is something like this: first you feel an emotion, then your brain guesses a reason why you feel that emotion. Your brain is not necessarily right when it does this. This is why people watch horror movies on dates (first your date feels an intense feeling caused by the horror movie, then hopefully your date misinterprets it as nervousness caused by attraction instead of fear). Introspection is unreliable.

When you introspected for a reason why you were upset, you settled on "I was upset because my friend was being so irrational" too quickly. This is an explanation that indicates you weren't trying very hard to explicitly model what was going on in your friend's head. Remember, your friend is not an evil mutant. The things they say make sense to them.

3mszegedy
It took me the whole day to figure even that out, really. Stress from other sources was definitely a factor, but what I observed is, whenever I thought about that idea, I got very angry, and got sudden urges to throw heavy things. When I didn't, I was less angry. I concluded later that I was angry at the idea. I wasn't sure why (I'm still not completely sure: why would I get angry at an idea, even if it was something that was truly impossible to argue against? a completely irrefutable idea is a very special one; I guess it was the fact that the implications of it being right weren't present in reality), but it seemed that the idea was making me angry, so I used the general strategy of feeling the idea for any weak points, and seeing whether I could substitute something more logical for inferences, and more likely for assumptions. Which is how I arrived at my conclusions.
4Qiaochu_Yuan
Thanks for the explanation. I still think it is more likely that you got angry at, for example, your friend's dismissive attitude, and thinking about the idea reminded you of it. You are a human, and humans get angry for a lot of reasons, e.g. when other humans challenge their core beliefs. 1) I don't think your friend's point of view is impossible to argue against (as I mentioned in my other comment you can argue based on results), 2) it's not obvious to me that you've correctly understood your friend's point of view, 3) I still think you are focusing too much on the semantic content of the conversation.
0mszegedy
I'm talking hypothetically. I did allow myself to consider the possibility that the idea was not perfect. Actually, I assumed that until I could prove otherwise. It just seemed pretty hopeless, so I'm considering the extreme.

Maybe not. I'm not angry at my friend at all, nor was I before. I felt sort of betrayed, but my friend had reasons for thinking things. If (I think) the things or reasons are wrong, I can tell my friend, and then they'll maybe respond, and if they don't, then it's good enough for me that I have a reasonable interpretation of their argument, unless it is going to hurt them that they hold what I believe to be a wrong belief. Then there's a problem. But I haven't encountered that yet.

But the point is that it, to me, is much more interesting/useful/not tedious to consider this idea that challenges rationality very fundamentally, than to try and argue against the idea that everybody who had tried to apply rationality to society had it wrong, which is a very long battle that needs to be fought using history books and citations.

Then what else should I focus on? I like having my beliefs challenged, though. That's what makes me a rationalist in the first place.

Though, I have thought of an alternate hypothesis for why I was offended. My friend compared me to white supremacist philosophes from the early days of the Enlightenment. And when I said that I did not share their ideas, my friend said that it was not because of my ideas, but because I was trying to apply rationality to society. And maybe that offended me. Just because I was like them in that I was trying to apply rationality to society (which I had rational reasons for doing), I was as bad as a white supremacist. Again, I can't be mad at my friend, since that's just a belief they hold, and beliefs can change, or be justified. My friend had reasons for holding that belief, and it hadn't caused any harm to anybody. But maybe that was what was so offensive? That sounds at least equally likely.

But the point is that it, to me, is much more interesting/useful/not tedious to consider this idea that challenges rationality very fundamentally

This is what I mean when I say I don't think you've correctly understood your friend's point of view. Here is a steelmanning of what I imagine your friend's point of view to be that has nothing to do with challenging rationality:

"Different domain experts use different kinds of frameworks for understanding their domains. Taking the outside view, someone who claims that a framework used in domain X is more appropriate for use in domain Y than what Y-experts themselves use is probably wrong, especially if X and Y are very different, and it would take a substantial amount of evidence to convince me otherwise. In the particular case that X = mathematics and Y = social justice, it seems like applying the methods of X to Y risks drastically oversimplifying the phenomena in Y."

My friend compared me to white supremacist philosophes from the early days of the Enlightenment. And when I said that I did not share their ideas, my friend said that it was not because of my ideas, but because I was trying to apply rationality to society.

Yo...

0ChristianKl
Why? They argue about whether it makes sense to base your moral philosophy in axioms and then logically deduce conclusions. There are plenty of people out there who disagree with that way of doing things.
5Qiaochu_Yuan
When you say the word "rationality" to most people they are going to round it to the nearest common cliche, which is Spock-style thinking where you pretend that nobody has emotions and so forth. There's a nontrivial inferential gap that needs to be closed before you, as an LWer, can be sure that a person understands what LW means by "rationality."
0ChristianKl
I think you are making a mistake when you assume that the position that mszegedy argues is just LW-style rationality. mszegedy argued with his friend about using axiom based reasoning, where you start with axioms and then logically deduce your conclusions.
0Qiaochu_Yuan
I think the word rationality was also relevant to the argument. From one of mszegedy's comments:
0ChristianKl
You make a mistake when you assume rationality to mean LW-style rationality. That's not what they argued about. When mszegedy's friend accused him of applying rationality to society he referred to mszegedy's argument that one should base social justice on axioms. According to him the problem with the white supremacists isn't that they chose the wrong axioms but that they focused on axioms in the first place. They were rationalists of the Enlightenment who had absolute confidence in their belief that certain things are right by axiom and others are wrong. LW-style rationality allows the conclusion: "Rationality is about winning. Groups that based their moral philosophy on strong axioms didn't win. It's not rational to base your moral philosophy on strong axioms." Mszegedy's friend got him into a situation where he had no rational argument why he shouldn't draw that conclusion. He is emotionally repulsed by that conclusion. Mszegedy is emotionally attached to an Enlightenment ideal of rationality where you care about deducing your conclusions from proper axioms in an internally consistent way instead of just caring about winning.
0mszegedy
Oh, okay. That makes sense. So then what's the rational thing to conclude at this point? I'm not going to go back and argue with my friend—they've had enough of it. But what can I take away from this, then? (I was using the French term philosophe, not omitting a letter, though. That's how my history book used to write it, anyway.)
1Qiaochu_Yuan
I've mentioned various possible takeaways in my other comments. A specific thing you could do differently in the future is to practice releasing againstness during arguments.
1ChristianKl
Humans are emotional creatures. We don't feel emotions for rational reasons. The emotion you felt is called cognitive dissonance. It's something that humans feel when they come to a point where one of their fundamental beliefs is threatened but they don't have good arguments to back it up. I think it's quite valuable to have a strong reference experience of what cognitive dissonance feels like. It makes it easier to recognize the feeling when you feel it in the future. Whenever you are feeling that feeling, take note of the beliefs in question and examine them more deeply in writing when you are at home.
1bsterrett
I was recently reflecting on an argument I had with someone where they expressed an idea to me that made me very frustrated, though I don't think I was as angry as you described yourself after your own argument. I judged them to be making a very basic mistake of rationality and I was trying to help them to not make the mistake. Their response implied that they didn't think they had executed a flawed mental process like I had accused them of, and even if they had executed a mental process like the one I described, it would not necessarily be a mistake. In the moment, I took this response to be a complete rejection of rationality (or something like that), and I became slightly angry and very frustrated.

I realized afterwards that a big part of what upset me was that I was trying to do something that I felt would be helpful to this person and everyone around them and possibly the world at large, yet they were rejecting it for no reason that I could identify in the moment. (I know that my pushiness about rationality can make the world at large worse instead of better, but this was not on my mind in the moment.) I was thinking of myself as being charitable and nice, and I was thinking of them as inexplicably not receptive. On top of this, I had failed to liaise even decently on behalf of rationalists, and I had possibly turned this person off to the study of rationality. I think these things upset me more than I ever could have realized while the argument was still going on.

Perhaps you felt some of this as well? I don't expect these considerations to account for all of the emotions you felt, but I would be surprised if they were totally uninvolved.
2OrphanWilde
Do people's brains actually work this way? Other people's, I should say, because mine certainly doesn't.
3Qiaochu_Yuan
What are you referring to by "this way"?
4OrphanWilde
"first you feel an emotion, then your brain guesses a reason why you feel that emotion" To explain why this is completely alien to me: First, I rarely even notice emotions. To say I feel them would be stretching the concept of "feel" quite considerably. "Observe" would be closer to my relationship with my emotions. (Except in the case of -extremely- intense emotions, anyways; it's kind of like a fire a hundred yards away; I can see it when it's over there if I look in its direction, and only feel it when it's really quite significant) Second, I don't have any kind of... pointer, where I can automatically identify where an emotion came from; my brain isn't performing such a function at all. I also haven't noticed in my relationships any indications that people do have any kind of intuitions about where their emotions come from. Indeed, it's my experience that a lot of other people also don't have any kind of direct knowledge of their own emotional state except in extreme situations, much less any intuitions about where that emotional state arises from. If we did have either of these things, I'd expect things like depression wouldn't go unnoticed by so many people for so long.
5pcm
There's a lot of variation in how aware people are of their emotions. You might want to look into Alexithymia.
0[anonymous]
I wrote a poem about the phenomenon:

Vague feeling inside
Whatever you interpret it as
You will feel
Is it depression? Anxiety? Happiness?
Self fulfilling prophecy
CBT............
3Qiaochu_Yuan
I'm not inside your brain or your friend's brains, but that doesn't sound typical to me.
[-]Duncan200

"Sorry if it offends you, I just don't think in general that you should apply this stuff to society. Like... no."

Let me translate: "You should do what I say because I said so." This is an attempt to overpower you and is quite common. Anyone who insists that you accept their belief without logical justification is simply demanding that you do what they say because they say so. My response, to people who can be reasoned with, is often just to point this out and point out that it is extremely offensive. If they cannot be reasoned with then you just have to play the political game humans have been playing for ages.

A more charitable translation would be "I strongly disagree with you and have not yet been able to formulate a coherent explanation for my objection, so I'll start off simply stating my disagreement." Helping them state their argument would be a much more constructive response than confronting them for not giving an argument initially.

0Duncan
It is not so much that they haven't given an argument or stated their position. It is that they are telling you (forcefully) WHAT to do without any justification. From what I can tell of the OP's conversation this person has decided to stop discussing the matter and gone straight to telling the OP what to do. In my experience, when a conversation reaches that point, the other person needs to be made aware of what they are doing (politely if possible - assuming the discussion hasn't reached a dead end, which is often the case). It is very human and tempting to rush to the 'Are you crazy?!! You should __.' and skip all the hard thinking.
1AlexMennen
It sounds like the generic "you" to me. So "you shouldn't apply this stuff to society" means "people shouldn't apply this stuff to society." I don't see anything objectionable about statements like that.
[-][anonymous]100

Let me offer a different translation: "You are proposing something that is profoundly inhuman to my sensibilities and is likely to have bad outcomes."

Rukifellth below has, I think, a much more likely reason for the reaction presented.

0Duncan
Given the 'Sorry if it offends you' and the 'Like... no' I think your translation is in error. When a person says either of those things they are A. saying I no longer care about keeping this discussion civil/cordial and B. I am firmly behind (insert their position here). What you have written is much more civil and makes no demands on the other party as opposed to what they said "... you should ...." That being said, it is often better to be more diplomatic. However, letting someone walk all over you isn't good either.
0AlexMennen
"Like..." = "I'm about to explain myself, but need a filler word to give myself more time to formulate the sentence." "no" = "whoops, couldn't think of what to say quick enough to avoid an awkwardly long pause; I'd better tie off that sentence I just suggested I was about to start." I'm not quite sure what to make of "Sorry if it offends you", but I don't see how you can get from there to "I'm not even trying to be polite."
0ChristianKl
Their conversation was longer than one sentence. If his discussion partner hadn't backed up his point in any way, I doubt mszegedy would have felt enough cognitive dissonance to contemplate suicide. "You should do what I say because I said so" generally doesn't make people feel cognitive dissonance that's that strong.

If you really contemplated suicide over this subject, I am afraid to discuss it with you.

6mszegedy
Oh. Well, that was a while ago, and I get over that stuff quickly. Very few people have that power over me, anyway; they were one of the only friends I had, and it was extremely unusual behavior coming from them. It was kind of devastating to me that there was a thought that was directed at me by a trusted source that was negative and I couldn't explain... but I could, so now I'm all the more confident. This is a success story!

I've historically never actually attempted suicide, and it was a combination of other stress factors as well that produced that response. I doubt that I actually would, in part because I have no painless means of doing so: when I actually contemplate the action, it's just logistically impossible to do the way I like. I've also gotten real good at talking myself out of it. Usually it's out of a "that'll show 'em" attitude, which I recognize immediately, and also recognize that that would be both cruel and a detriment to society. So, I appreciate your concern for me a lot, but I don't think I'm in any danger of dying at all. Thanks a lot for caring, though!

Tetlock's foxes vs. hedgehogs (people without strong ideologies are somewhat better predictors than those who have strong ideologies, though still not very good predictors) suggests that a hunt for consistency in something as complex as politics leads to an excessively high risk of ignoring evidence.

Hedgehogs might have premises about how to learn more than about specific outcomes.

[-]Shmi160

I suspect that what frustrated you is not noticing your own confusion. You clearly had a case of lost purposes: "applying a math thing to social justice" is instrumental, not terminal. You discovered a belief "applying math is always a good thing" which is not obviously connected to your terminal goal "social justice is a good thing".

You are rationalizing your belief about applying math in your point 2:

An inconsistent belief system will generate actions that are oriented towards non-constant goals, and interfere destructively with each other, and not make much progress. A consistent belief system will generate many actions oriented towards the same goal, and so will make much progress.

How do you know that? Seems like an argument you have invented on the spot to justify your entrenched position. Your point 3 confirms it:

No matter how offended you are about something, thinking about it will still resolve the issue.

In other words, you resolved your cognitive dissonance by believing the argument you invented, without any updating.

If you feel like thinking about the issue some more, consider connecting your floating belief "math is good" ...

3mszegedy
You're completely right. I tried, at first, to look for ways that it could be a true statement that "some areas shouldn't have consistent belief systems attached", but that made me upset or something (wtf, me?), so I abandoned that, and resolved to attack the argument, and accept it if I couldn't find a fault with it. And that's clearly bad practice for a self-proclaimed rationalist! I'm ashamed. Well, I can sort of make the excuse of having experienced emotions, which made me forget my principles, but that's definitely not good enough. I will be more careful next time.

EDIT: Actually, I'm not sure whether it's so cut-and-dried. I'll admit that I ended up rationalizing, but it's not as simple as "didn't notice confusion". I definitely did notice it. When I am presented with an opposing argument, what I'll do is try to figure out at what points it contradicts my own beliefs. Then I'll see whether those beliefs are well-founded. If they aren't, I'll throw them out and attempt to form new ones, adopting the foreign argument in the process. If I find that the beliefs it contradicts are well-founded, then I'll say that the argument is wrong because it contradicts these particular beliefs of mine. Then I'll go up to the other person and tell them where it contradicts my beliefs, and it will repeat until one of us can't justify our beliefs, or we find that we have contradictory basic assumptions. That is what I did here, too; I just failed to examine my beliefs closely enough, and ended up rationalizing as a result.

Is this the wrong way to go about things? There's of course a lot to be said about actual beliefs about reality in terms of prior probability and such, so that can also be taken into account where it applies. But this was a mostly abstract argument, so that didn't apply, until I introduced an epistemological argument instead. But, so, is my whole process flawed? Or did I just misstep?
3Shmi
From your original story, it doesn't look like you have noticed that your cached belief was floating. Presumably it's a one-off event for you, and the next time you feel frustrated like that, you will know what to look for. Now, I am not a rationalist (IANAR?), I just sort of hang out here for fun, so I am probably not the best person to ask about methodology. That said, one of the approaches I have seen here and liked is steelmanning the opposing argument to the point where you can state it better than the person you are arguing with. Then you can examine it without the need to "win" (now it's your argument, not theirs) and separate the parts that work from those which don't. And, in my experience, there is a grain of truth in almost every argument, so it's rarely a wasted effort.
1Kawoomba
Very insightful, that.
3Qiaochu_Yuan
Agreed. Many people can act effectively starting from what might be regarded as inconsistent belief systems by compartmentalizing (e.g. religious scientists). There is also an underlying assumption in the post, that beliefs are logical statements with truth values, which is questionable. Many beliefs are probably "not even wrong."
1Giles
Remember you have to make a convincing case without using stuff like logic
2Shmi
Hence what I said: start with something they both can agree on, like whether making accurate models of reality is important for effective social justice.

An inconsistent belief system will generate actions that are oriented towards non-constant goals, and interfere destructively with each other, and not make much progress. A consistent belief system will generate many actions oriented towards the same goal, and so will make much progress.

One way to model willpower is that it is a muscle that uses up brain energy to accomplish things. This is a common model but it is not my current working hypothesis for how things "really universally work in human brains". Rather, I see a need for "that which people vaguely gesture towards with the word willpower" as a sign that a person's total cognitive makeup contains inconsistent elements that are destructively interfering with each other. In other words, the argument against logically coherent beliefs is sort of an argument in favor of akrasia.

Some people seem to have a standard response to this idea that is consonant with the slogan "that which can be destroyed by the truth should be" and this is generally not my preferred response except as a fallback in cases of a poverty of alternative options. The problem I have with "destroy my akrasia with the tr...


I sympathize, but I downvoted this post.

This is a personal story and a generalization from one person's experience. I think that as a category, that's not enough for a post on its own. It might be fine as a comment in an open thread or other less prominently placed content.

[-]V_V120

And that did it. For the rest of the day, I wreaked physical havoc, and emotionally alienated everyone I interacted with. I even seriously contemplated suicide. I wasn't angry at my friend in particular for having said that. For the first time, I was angry at an idea: that belief systems about certain things should not be internally consistent, should not follow logical rules.

This emotional reaction seems abnormal. Seriously, somebody says something confusing and you contemplate suicide?
What are you, a Straw Vulcan computer that can be disabled with a Logic Bomb?

Unless you are making this up, I suggest you consider seeking professional help.

It was extremely difficult to construct an argument against, because all of my arguments had logically consistent bases, and were thus invalid in its face.

Actually, it's rather easy: just tell them that ex falso quodlibet (from a contradiction, anything follows).

2Nisan
That's mean.
5V_V
What?
0mszegedy
True, I swear! I think I can summarize why I was so distraught: external factors, this was a trusted friend, also one of my only friends, and I was offended by related things they had said prior. I am seeking help, though.
2V_V
That makes more sense. As a general rule, however, I suggest trying to avoid taking personal offense at the contrary opinions of others, especially when discussing philosophical issues.
[-]Tenoke120

And that did it. For the rest of the day, I wreaked physical havoc, and emotionally alienated everyone I interacted with. I even seriously contemplated suicide.

You never get offended, but this little thing brought you to the verge of suicide!? Did you recently become a rationalist? I am not sure how to read the situation.

1[anonymous]
A tricky problem is, you can't really read the situation from a brief description. Here is an example of increasing suicidality to show why:

Monday: "Today was horrible... just horrible. I can't take this any more, I'm going to end it all."

Tuesday: "I am going to walk to that cliff near my house and jump off, that would do it. That would definitely be fatal." and then not doing anything, or:

Wednesday: "Okay, I have a list of things I'm going to do before jumping off the cliff. Step 1: Eat a large meal." eats "Step 2: Write a suicide note:" types "In retrospect... I don't feel like jumping off the cliff anymore today." (Deletes note)

Thursday: Doing all of the above, actually walking to that cliff near your house, looking over the edge and only then thinking "You know, maybe I shouldn't jump. Not today. Maybe I'll jump if tomorrow is this bad too."

Friday: Standing on the edge as previously, but doing so until one of your friends finds you and pulls you away while you are saying "No, let me go, I need to do this!"

I have no idea what serious contemplation refers to (I'm assuming the verge would be either Thursday or Friday). For instance, even in the past, on my worst days of depression, I don't think I've ever gotten past Wednesday on the list above. If there is a more explicit metric for this, please let me know; I'm not finding one, and it would be great to have an easier way of communicating about some of this.
0Tenoke
Well, thanks for the distinction between degrees of suicidal intention, but I don't see how this is really relevant to what I said. In this example, 'on the verge of suicide' referred to the OP's "seriously contemplated suicide": seriously contemplating something is semi-synonymous with being on the verge of doing something. I can't really help you decipher how suicidal he was, but if I had to guess, he was just exaggerating.
8[anonymous]
Sorry about that. I was trying to break something that seemed unclear into concrete examples, but on looking at it again, I think it may have been a bit too much armchair psychology, and when I tried explaining what I was saying, my explanation sounded even more like armchair psychology (but this time I noticed before posting). Thank you for helping me see that problem more clearly.
0magfrump
From my perspective, Tuesday would feel like "seriously contemplating" from the inside; even late Monday night could too I think. So I disagree with the quoted sentence. EDIT addition for clarity: Had I personally felt like the "Tuesday" scenario described above, I could easily imagine myself describing the event as "seriously contemplating suicide," regardless of what other people think about the definition of "seriously contemplate." So it seems wise to me not to dismiss the possibility that when someone described their situation, it may be less serious than you personally think should be the definition of those words.

"Sorry if it offends you, I just don't think in general that you should apply this stuff to society. Like... no."

I'm making a wild guess, but possibly it's the bold part that offended you... Because this is usually what irritates me (Yay for lovely generalization from one example... but see also Emile's comment quoting pjeby's comment).

Similar offenders are:

  • "Come on, it's obvious!"
  • "You can't seriously mean that /Are you playing dumb?"
  • "Because everybody knows that!"

In general, what irritates me is the refusal to really discuss the subject, and the quick dismissal. If arguments are soldiers, this is like building a foxhole and declaring you won't move from there at any cost.

4Ben Pace
"I mean, have you heard of cri... cry... cryonics? Hehe..." "Yeah, I'm interested in it." "...Like... no." From conversation today.
0whowhowho
You left out:

  • It's below my pay grade.
  • It's not important enough.
  • We can't justify the resources to answer that.

"Sorry if it offends you, I just don't think in general that you should apply this stuff to society. Like... no."

I felt offended reading this, even though I was expecting something along these lines and was determined not to be offended. I've come to interpret this feeling, on a 5-second level, as "Uh oh, someone's attacking my group." I'm sure I'd be a little flustered if someone said that to me in conversation. But after some time to think about it, I think my response would be "Why shouldn't math be applied to social justice?...

0mszegedy
Well, the friend had counterexamples to "math as a basis for society is good". I sort of skipped over that. They mentioned those who rationalized bad things like racism, and also Engels. (We both agree that communism is not a successful philosophy.) Counterexamples aren't really enough to dismiss an idea unless they're stronger than the evidence that the idea is good, but I couldn't think of such evidence at the time, and I still can't think of anything particularly convincing. There's no successful society to point at that derived all of its laws and government axiomatically.
9Luke_A_Somers
Those are good examples that you need to be really careful applying math to society. If you come up with a short list of axioms for a social group, and then use them to formulate policy, you're probably going to end up leaving the domain over which those axioms are valid. If you have a lot of power, this can be a really bad thing.

Almost no one these days regards axiom compiling as a way of describing emotional phenomena, such as altruism. The idea of describing such warm reasons in terms of axioms was so unintuitive that it caused your friend to think that you were looking for some other reason for social justice, other than a basic appeal to better human nature. He may have been disgusted at what he thought was an implicit disregard for the more altruistic reasons for social justice, as if they weren't themselves sufficient to do good things.

1) I think your reaction to this situation owed itself more to your psychological peculiarities as a person (whichever they are) than to a characteristic that all people who identify as rationalists share. There's no reason to expect people with the same beliefs as yours to always keep their cool (at least not the first time) when talking to someone with an obviously incompatible belief system.

2)

It was extremely difficult to construct an argument against, because all of my arguments had logically consistent bases, and were thus invalid in its face.

...
2B_For_Bandana
Why is privilege such a dangerous idea? I suspect that your answer is along the lines of "A main tenet of privilege theory is that privileged people do not understand how society really works (they don't experience discrimination, etc.), therefore it can make you despair of ever figuring anything out, and this is harmful." But reading about cognitive biases can have a similar effect. Why is learning about bias due to privilege especially harmful to your cognitive toolbox?
8Dahlen
No, it's not that. It's that there are many bugs of the human mind which identity politics inadvertently exploits.

For one, there's the fact that it provides convenient ingroups / outgroups for people to feel good, respectively bad, about -- the privileged and the oppressed -- and these outgroups are based on innate characteristics. Being non-white, female, gay etc. wins you points with the social justice crowd just as being white, male, straight etc. loses you points. Socially speaking, how much a "social justice warrior" likes you is partly a function of how many disadvantaged groups you belong to. This shouldn't happen, maybe not even in accordance with the more academic, theoretical side of social justice, but it does, because we're running on corrupted hardware and these theories fail to compensate for it.

Another very closely related problem is "collecting injustices". You can transform everything bad that happens to you that you perceive to be caused by your belonging to an oppressed group into debate ammunition against the other; you can point to it to put yourself in a positive, sympathetic, morally superior light, and your opponents in a negative light. So there's this powerful rhetorical upside to being in a situation that otherwise can only be seen as a very shitty situation to be in. This incentivizes people, on some level, to not really seek to minimize these situations. But obviously people hate oppression and don't actually, honestly want to experience it, but winning debates automatically and gaining the right to pontificate feels good. So what to do? Lower the threshold for what counts as oppression, obviously. This has absolutely disastrous effects on their stated goals. If there's anything whatsoever that incentivizes you to find more oppression in the world around you, you can't sincerely pursue the goal of ending oppression.

Also, some of the local memes instruct people to lift all the responsibility of a civilized discussion off themselves a...
0[anonymous]
Accidentally retracted because I can't into formatting; please ignore and see the other post in this subthread. --Dahlen

I usually turn to the Principle of Explosion to explain why one should have core axioms in their ethics (specifically, non-contradictory axioms). If some principle you use in deciding what is or is not ethical creates a contradiction, you can justify any action on the basis of that contradiction. If the axioms aren't explicit, the chance of a hidden contradiction is higher. The idea that every action could be ethically justified is something that very few people will accept, so explaining this usually helps.
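For concreteness, the derivation behind that claim is standard propositional logic: from a contradiction P and not-P, you can derive any claim Q whatsoever.

    P                  (half of the contradiction)
    P \lor Q           (from P by disjunction introduction; Q is arbitrary)
    \neg P             (the other half of the contradiction)
    Q                  (from the two lines above by disjunctive syllogism)

Substituting any ethical claim for Q, say "this action is permissible", shows how a single hidden contradiction licenses everything.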

I try to understand that thinking this way is odd...

From your strong reaction I would guess that your friend's reaction somehow ruined the model of the world you had, in a way that was connected with your life goals. Therefore for some time your life goals seemed unattainable and life as a whole meaningless. But gradually you found a way to connect your life goals with the new model.

Seems to me that your conclusion #2 is too abstract ("far") for a shock that I think had personal ("near") aspects. You write impersonal abstractions -- "do people really desire progress? can actions actu...

I would add to this that if the domain of discourse is one where we start out with a set of intuitive rules, as is the case for many of the kinds of real-world situations that "social justice" theories try to make statements about, there are two basic ways to arrive at a logically consistent belief structure: we can start from broad general axioms and reason forward to more specific rules (as you did with your friend), or we can start from our intuitions about specific cases and reason backward to general principles.

IME, when I try to reason-for...

2TheOtherDave
It occurs to me that I can express this thought more concisely in local jargon by saying that any system which seeks to optimize a domain for a set of fixed values that do not align with what humans collectively value today is unFriendly.

It seems possible that when your friend said, in effect, that there can never be any axioms for social justice, what they really meant was simply, "I don't know the axioms either." That would indeed be a map/territory confusion on their part, but it's a pretty common and understandable one. The statement, "Flying machines are impossible" is not equivalent to "I don't know how to build a flying machine," but in the short term they are making the same prediction: no one is flying anywhere today.

Actually, and I don't know if you'...

0mszegedy
They seemed to be saying both things. Hah, that's true! I didn't think of it that way. I don't know that much about the Friendly AI problem, so I wouldn't know anyway. I've been able to reduce my entire morality to two axioms, though (which probably aren't especially suitable for AI or a 100% rational person, because there's no possibility at all that I've actually found a solution to a problem I know nothing about that has been considered by many educated people for long periods of time), so I thought that maybe you could find something similar for social justice (I was having trouble deciding on what to feel about certain fringe cases).
1B_For_Bandana
My point was that they probably did think they meant both things, because the distinction between "it's impossible" and "I don't know how" is not really clear in their mind. But that is not as alarming as it would be coming from someone who did know the difference, and insisted that they really did mean "impossible." Okay, I'll bite. What are they?
0mszegedy
Hmm, I agree, but I don't think that it adequately explains the entire picture. I think it might have been two different ideas coming from two different sources. I can imagine that my friend had absorbed "applying formalized reason to society is bad" from popular culture, whereas "I don't know what the founding propositions of social justice are", and subsequently "there might not be able to be such things" (like you talked about), came from their own internal evaluations.

I kinda wanted to avoid this because social approval etc., also brevity, but okay:

  1. Everybody is completely, equally, and infinitely entitled to life, positive feelings, and a lack of negative feelings.
  2. One must forfeit gratification of axiom 1 to help others to achieve it. (This might be badly worded. What I mean is that you also have to consider the entitlement of others as well to etc etc etc in their actions, and while others do not have the things in axiom 1, one should be helping them get them, not oneself.)

I know it loses a lot of nuance this way (to what extent must you help others? well, so that it works out optimally for everyone; but what exactly is optimal? the sum of everyone's life/positive feelings/lack of negative feelings? that's left undefined), but it works for me, at least.
6Qiaochu_Yuan
I think it is deeply misleading to label these "axioms." At best these are summaries of heuristics that you use (or believe you use) to make moral decisions. You couldn't feed these axioms into a computer and get moral behavior back out. Have you read the posts orbiting around Fake Fake Utility Functions?
4Richard_Kennaway
(axioms omitted) I don't see any mathematics there, and making them into mathematics looks to me like an AI-complete problem. What do you do with these axioms?
1Eugine_Nier
What do you mean by "positive feelings"? For example, would you support wireheading everyone?
0mszegedy
That's exactly what I can't make my mind up about, and it forces me to default to nihilism on things like that. Maybe it really is irrelevant where the pleasure comes from? If we did wirehead everyone for eternity, then would it be sad if everyone spontaneously disappeared at some point? Those are questions that I can't answer. My morality is only good for today's society, not tomorrow's. I guess strictly morally, yes, wireheading is a solution, but philosophically, there are arguments to be made against it. (Not from a nihilistic point of view, though, which I am not comfortable with. I guess, philosophically, I can adopt two axioms: "Life requires meaning," and "meaning must be created." And then arises the question, "What is meaning?", at which point I leave it to people with real degrees in philosophy. If you asked me, I'd try to relate it to the entropy of the universe somehow. But I feel that I'm really out of my depth at that point.)
4Qiaochu_Yuan
I think you're giving up too early. Have you read the metaethics sequence?

Perhaps I'm mistaken about this, but isn't a far stronger argument in favor of a consistent belief system the fact that with inconsistent axioms you can derive any result you want? In an inconsistent belief system you can rationalize away any act you intend to take, and in fact this has often been seen throughout history.

5Kawoomba
In theory, yes. In practice, ... maybe. Like saying "a human can implement a bounded TM and can in principle, without tools other than paper & pencil, compute a prime number with a million digits". It depends on how inconsistent the axioms are in practice. If the contradictions are minor, before leveraging that contradiction to derive arbitrary results, the human may die of old age.
4Eugine_Nier
Of course, if the belief system in question becomes popular, one of his disciples may wind up doing this.
4Larks
Depends on your proof system.
-2ikrase
Teehehehehe. Of course, this becomes a thousand times worse with a combination of materialism and the attitude that the world is made not of matter, but of conflicting agendas.
2Desrtopa
The friend in question wouldn't buy that argument though, because rather than accepting as a premise that they hold inconsistent axioms, they would assert that they don't apply things like axioms to their reasoning about social justice. Plus, it's not likely to reflect their impression of their own actions. They're probably not trying to logically derive conclusions from a set of conflicting premises so much as they're following their native moral instincts, which may be internally inconsistent, but certainly do not have unlimited elasticity of output. You can get an ordinary person to respond to the same moral dilemma in different ways by framing it differently, but there are some conclusions that they cannot be convinced to draw, and others that they will uphold consistently, so if they're told that their belief system can derive any result, their response is likely to be "What? No it can't."
1Eugine_Nier
In practice this tends to manifest as being able to rationalize any result.
0Desrtopa
They'll tend to rationalize whatever results they output, but that doesn't mean that they'll output just any result.
0Eugine_Nier
Unfortunately the results they output tend to resemble this.

I really liked this post, and I think a lot of people aren't giving you enough credit. I've felt similarly before -- not to the point of suicide, and I think you might want to find someone you can confide those anxieties in -- but about being angered at someone's dismissal of rationalist methodology. Because ultimately, it's the methodology which makes someone a rationalist, not necessarily a set of beliefs. The categorizing of emotions as in opposition to logic for example is a feature I've been frustrated with for quite some time, because emotions ...

[-]knb20

They seemed to be having trouble understanding what I was saying, and it was hard to get an opinion out of them. They also got angry at me for dismissing Tumblr as a legitimate source of social justice.

Relevant/funny comic.

I am confused why your friend thought good social justice arguments do not use logic to defend their claims. Good arguments of any kind use logic to defend their claims. Ergo, all the good social justice arguments are using logic to defend their claims. Why did you not say this to your friend?

EDIT: Also confused about your focus on axioms. Axioms, though essential, are the least interesting part of any logical argument. If you do not accept the same axioms as your debate partner, the argument is over. Axioms are by definition not mathematically demonstrable. In your post, you stated that axioms could be derived from other fundamental axioms, which is incorrect. Could you clarify your thinking on this?

[-][anonymous]20

Did I miss any sort of deeper reasons I could be using for this?

"That one guy I know and the stuff they say" is usually not a great proxy for some system of belief in general; therefore, to the extent you care about the same stuff they care about when they say "social justice", a knee-jerk rejection of thinking about that stuff systematically or in terms of axioms and soforth should probably come off to you as self-defeating or shortsighted.

2 sounds wrong to me - like you're trying to explain why having a consistent internal belief structure is important to someone who already believes that.

The things which would occur to me are:

  • If both of you are having reactions like this then you're dealing with status, in-group and out-group stuff, taking offense, etc. If you can make it not be about that and be about the philosophical issues - if you can both get curious - then that's great. But I don't know how to make that happen.
  • Does your friend actually have any contradictory beliefs? Do they beli
... (read more)

Your friend's refusal to axiomatize the theory of social justice doesn't necessarily imply that he believes social justice can be governed by incoherence (theory here is used in its model-theoretic sense: a set of true propositions). Under an (admittedly somewhat stretched) charitable reading, it may just mean that your friend believes it's incompressible: the complexity of the axioms is as great as the complexity of the facts the axioms are meant to explain.
It's just like the set of all arithmetical truths: you cannot axiomatize it, but it's for sure not inconsistent.

0Qiaochu_Yuan
Mega-nitpicks: 1) it is possible to axiomatize the set of all arithmetical truths by taking as your axioms the set of all arithmetical truths. The problem with this axiomatization is that you can't tell what is and isn't an axiom, which is why Gödel's theorem is about recursively enumerable axiomatizations instead of arbitrary ones, and 2) it is very likely that Peano arithmetic is consistent, but this isn't a proposition I would assign probability 1.
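To spell out the hypothesis this nitpick turns on (my restatement in LaTeX, not the commenter's wording):

```latex
% Goedel's first incompleteness theorem requires the axiom set to be
% recursively enumerable (r.e.):
%   if T is consistent, r.e., and extends Robinson arithmetic Q,
%   then some true sentence G_T is unprovable in T.
\[
  T \text{ consistent, r.e., } T \supseteq Q
  \;\Longrightarrow\;
  \exists\, G_T \,\bigl( \mathbb{N} \models G_T \ \text{and}\ T \nvdash G_T \bigr).
\]
% Taking T = Th(N), the set of all arithmetical truths, gives a complete,
% consistent "axiomatization" -- but Th(N) is not r.e., so the theorem
% simply does not apply to it.
```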
0MrMind
Yes, I thought about adding "recursively" to the original statement, but I felt that the word "axiomatize" in the OP carried the meaning of somehow reducing the number of statements, so I decided not to write it. But of course the trivial axiomatization is always possible; you're totally correct. Heh, things get murky really quickly in this field. It's true that you can prove arithmetic consistent inside a stronger model, and it's true that there are non-standard submodels that think they are inconsistent while being consistent in the outer model. There are also models (paraconsistent in the meta-logic) that can prove their own consistency, avoiding Gödel's theorem(s). This means that, semantically, from a formal point of view, we cannot hope to really prove anything about true consistency. I admittedly took a Platonist view in my reply.
4Qiaochu_Yuan
Sure we can. If we found a contradiction in Peano arithmetic, we'd prove that Peano arithmetic is inconsistent.

Therefore, assuming the first few statements, having an internally consistent belief system is desirable!

I think that's a straw man. Nobody denies that it's advantageous to have a consistent belief system. Rather, people argue that consistency isn't the only criterion on which to judge belief systems.

It's pretty easy to make up a belief system that's internally consistent but that leads to predictions about reality that are wrong.

A good example would be the problem of hidden Markov models. There are different algorithms to generate a path. The Viterbi al... (read more)
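Since that comment is cut off mid-sentence, here is a minimal sketch of the Viterbi algorithm it names, for readers who haven't seen it; the toy weather model below is my own illustration, not anything from the thread:

```python
import numpy as np

def viterbi(obs, init, trans, emit):
    """Most probable hidden-state path for an observation sequence.

    obs:   list of observation indices, length T
    init:  (S,) initial state probabilities
    trans: (S, S) trans[i, j] = P(next state j | current state i)
    emit:  (S, O) emit[i, k] = P(observation k | state i)
    """
    T, S = len(obs), len(init)
    # log-probabilities avoid underflow on long sequences
    logp = np.log(init) + np.log(emit[:, obs[0]])
    back = np.zeros((T, S), dtype=int)            # backpointers for path recovery
    for t in range(1, T):
        scores = logp[:, None] + np.log(trans)    # scores[i, j]: extend best path at i to j
        back[t] = scores.argmax(axis=0)
        logp = scores.max(axis=0) + np.log(emit[:, obs[t]])
    path = [int(logp.argmax())]                   # trace back from the best final state
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]

# Toy model: states (0=Rainy, 1=Sunny), observations (0=walk, 1=shop, 2=clean)
init  = np.array([0.6, 0.4])
trans = np.array([[0.7, 0.3], [0.4, 0.6]])
emit  = np.array([[0.1, 0.4, 0.5], [0.6, 0.3, 0.1]])
print(viterbi([0, 1, 2], init, trans, emit))      # -> [1, 0, 0]
```

The comment's point about "different algorithms" is real: forward-backward, for example, picks each state to be individually most likely, which can yield a different (even jointly impossible) path than Viterbi's single most probable path.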

I don't have anything to add, other than to say that I've had similar frustrations with people. It was mainly in my heyday of debating theists on the Internet. I quite often would encounter the same exact dismissal of logic when presenting a logical argument against the existence of god; literally, they would say something like "you can't use that logic stuff on god" (check out stuff like the presuppositional argument for the existence of god if you want to suffer a similar apoplexy). Eventually, I just started to find it comical.

0ChristianKl
Did a theist really get you to contemplate suicide by making that argument in an internet discussion? If not, then I don't think the frustration you felt was that similar to what the guy in the first post felt, and you read something into the post that isn't there.
[-][anonymous]00

Also, a general comment. Suppose you think that the optimal algorithm for solving a problem is X. It does not follow that making your algorithm look more like X will make it a better algorithm. X may have many essential parts, and making your algorithm look more like X by imitating some but not all of its essential parts may make it much worse than it was initially. In fact, a reasonably efficient algorithm which is reasonably good at solving the problem may look nothing like X.

This is to say that at the end of the day, the main way you should be criticizing your friend's approach to social justice is based on its results, not based on aesthetic opinions you have about its structure.
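A toy numerical illustration of that first point (my own example; the comment itself names no algorithm): take Newton's method as the "optimal algorithm X" for minimizing a smooth function, and imitate only its most distinctive part, the second-order step, while dropping the damping and line search that make it safe:

```python
import numpy as np

# Minimize f(x) = sqrt(1 + x^2), whose minimum is at x = 0. Near the optimum
# Newton's method is the "optimal algorithm X", but copying only one of its
# essential parts (the Newton step) while omitting the others (damping,
# line search) diverges -- while plain gradient descent, which looks
# nothing like X, still converges.

grad = lambda x: x / np.sqrt(1 + x**2)
hess = lambda x: (1 + x**2) ** -1.5

def undamped_newton(x, steps=5):
    for _ in range(steps):
        x = x - grad(x) / hess(x)   # algebraically this is x_next = -x**3
    return x

def gradient_descent(x, lr=0.5, steps=100):
    for _ in range(steps):
        x = x - lr * grad(x)
    return x

print(undamped_newton(2.0))    # ~ -1.4e73: |x| cubes on every step
print(gradient_descent(2.0))   # ~ 0: the cruder method converges
```

Copying the distinctive part of X without its safeguards does worse than the crude method that resembles X in no way at all.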

[This comment is no longer endorsed by its author]

In order to make a rationalist extremely aggravated, you can tell them that you don't think that belief structures should be internally logically consistent.

There are ways to argue for that too. Both the aggravated rationalist and your friend have inconsistent belief systems; as you say, the difference is just that the aggravated rationalist would like that to change, while your friend is fine with it.

The point is this: you can value keeping the "is"-state, and not want to change the as-is for some optimized but partly different / differe... (read more)