
Vladimir_M comments on Offense versus harm minimization - Less Wrong

60 points. Post author: Yvain 16 April 2011 01:06AM



You are viewing a single comment's thread.

Comment author: Vladimir_M 16 April 2011 01:50:28AM *  98 points [-]

Yvain:

The offender, for eir part, should stop offending as soon as ey realizes that the amount of pain eir actions cause is greater than the amount of annoyance it would take to avoid the offending action, even if ey can't understand why it would cause any pain at all.

In a world where people make decisions according to this principle, one has the incentive to self-modify into a utility monster who feels enormous suffering at any actions of other people one dislikes for whatever reason. And indeed, we can see this happening to some extent: when people take unreasonable offense and create drama to gain concessions, their feelings are usually quite sincere.

You say, "pretending to be offended for personal gain is... less common in reality than it is in people's imaginations." That is indeed true, but only because people have the ability to whip themselves into a very sincere feeling of offense given the incentive to do so. Although sincere, these feelings will usually subside if they realize that nothing's to be gained.
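The incentive Vladimir_M describes can be made concrete with a toy model (a sketch only; the numeric utilities are invented for illustration): under Yvain's rule, the actor yields whenever the offended party's reported pain exceeds the actor's annoyance at abstaining, so an agent that (sincerely) self-modifies to feel ever-greater pain wins the concession every time.

```python
def naive_harm_minimizer_yields(pain_reported: float, cost_of_abstaining: float) -> bool:
    """Yvain's rule: stop the offending action as soon as the pain it
    causes exceeds the annoyance of avoiding it."""
    return pain_reported > cost_of_abstaining

# An honest agent feels mild annoyance at the action (pain = 2);
# abstaining would cost the actor 5 units of annoyance, so no concession.
assert not naive_harm_minimizer_yields(pain_reported=2, cost_of_abstaining=5)

# A "utility monster" that has self-modified to feel enormous (and
# entirely sincere) pain extracts the concession every time.
assert naive_harm_minimizer_yields(pain_reported=100, cost_of_abstaining=5)
```

The model is trivial on purpose: the rule itself is fine case-by-case, and the problem only appears once `pain_reported` is allowed to respond to the incentives the rule creates.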

Comment author: fburnaby 16 April 2011 01:51:34PM *  17 points [-]

Beautifully put. So according to your objection, if I want to increase net utility, I have two considerations to make:

  • reducing the offense I cause directly increases net utility (Yvain)
  • reducing the offense I cause creates a world with stronger incentives for offense-taking, which is likely to substantially decrease net utility in the long term (Vladimir_M)

This seems like a very hard calculation. My intuition is that item 2 is more important since it's a higher level of action, and I'm that kind of guy. But how do I rationally make this computation without my own biases coming in? My own opinions on "draw Mohammed day" have always been quite fuzzy and flip-floppy, for example.

Comment author: Torben 19 April 2011 12:57:47PM 3 points [-]

But how do I rationally make this computation without my own biases coming in?

One way is to try to compare similar countries where such offensiveness bans are enforced or not, and see in which direction net migration flows.

This may be difficult, since countries without such bans will in all likelihood also be more prosperous than those with them, which confounds the comparison.

Another alternative might be comparing the same country before and after such laws, e.g. Pakistan.

Comment author: fburnaby 20 April 2011 12:54:17PM 2 points [-]

"Look at the world". Always a good answer!

I have a bad head for history. Do you know of anyone who has done this, à la Jared Diamond, for the case of free speech? It seems like it may still be hard to find someone who is plausibly unbiased on such a topic.

Comment author: DanArmak 23 April 2011 02:11:01PM 1 point [-]

One way is to try to compare similar countries where such offensiveness bans are enforced or not, and see in which direction net migration flows.

There are many other factors affecting migration. Is it possible to evaluate a single factor's direct influence?

Comment author: Torben 24 April 2011 10:32:13AM 0 points [-]

I don't know.

Perhaps "freedom of speech" (or whatever we call the variable) is so tightly bundled with other variables -- most of all affluence -- that it's impossible to assess properly.

OTOH, if this bundling is evident across nations, cultures and time, it probably means that it truly is an important part of a net desirable society?

Comment author: a363 18 April 2011 09:30:15AM 5 points [-]

That is indeed true, but only because people have the ability to whip themselves into a very sincere feeling of offense given the incentive to do so. Although sincere, these feelings will usually subside if they realize that nothing's to be gained.

I'm reminded of how small children might start crying when they trip and fall and scuff their knee, but will only keep crying (and/or escalate) if someone is nearby to pay attention...

Comment author: Lightwave 18 April 2011 09:34:28AM *  4 points [-]

people have the ability to whip themselves into a very sincere feeling of offense given the incentive to do so. Although sincere, these feelings will usually subside if they realize that nothing's to be gained.

I agree with what you're saying and it sounds logical; I'm just wondering whether you (or anyone, actually) have some experimental evidence from psychology (or a related field) that people actually do this.

This view does seem to be somewhat intuitive to lesswrongers, but if you try to present it to outsiders, it would be nice if it were backed by evidence from experimental research.

So anyone?

Comment author: Yvain 16 April 2011 07:30:15PM *  2 points [-]

I'm not sure people can voluntarily self-modify in this way. Even if it's possible, I don't think most real people getting offended by real issues are primarily doing this.

Voluntary self-modification also requires a pre-existing desire to self-modify. I wouldn't take a pill that made me want to initiate suicide attacks on people who insulted the prophet Mohammed, because I don't really care if people insult the prophet Mohammed enough to want to die in a suicide attack defending him. The only point at which I would take such a pill is if I already cared enough about the honor of Mohammed that I was willing to die for him. Since people have risked their lives and earned lots of prison time protesting the Mohammed cartoons, even before they started any self-modification they must have had strong feelings about the issue.

If X doesn't offend you, why would you self-modify to make X offend you to stop people from doing X, since X doesn't offend you? I think you might be thinking of attempts to create in-group cohesion and signal loyalty by uniting against a common "offensive" enemy, something that I agree is common. But these attempts cannot be phrased in the consequentialist manner I suggested earlier and still work -- they depend on a "we are all good, the other guy is all evil" mentality.

Thus, someone who responded with a cost/benefit calculation to all respectful and reasonable demands to stop offending, but continued getting touchy about disrespectful blame-based demands to stop offending, would be pretty hard to game.

One difference between this post and the original essay I wrote which more people liked was that the original made it clearer that this was more advice for how people who were offended should communicate their displeasure, and less advice for whether people accused of offense should stop. Even if you don't like the latter part, I think the advice for the former might still be useful.

Comment author: HughRistik 16 April 2011 08:29:26PM 14 points [-]

Voluntary self-modification also requires a pre-existing desire to self-modify.

People have motives to increase their status, so we can check this box. Of course, this depends on phenotype, and some people do this much more than others.

I wouldn't take a pill that made me want to initiate suicide attacks on people who insulted the prophet Mohammed, because I don't really care if people insult the prophet Mohammed enough to want to die in a suicide attack defending him.

You can't self-modify to an arbitrary belief, but you can self-modify towards other beliefs that are close to yours in belief space. See my comment about political writers. You can seek out political leaders, political groups, or even just friends, with beliefs slightly more radical than yours along a certain dimension (and you might be inspired to do so with just small exposure to them). Over time, your beliefs may shift.

If X doesn't offend you, why would you self-modify to make X offend you to stop people from doing X, since X doesn't offend you?

To protect/raise the status of you yourself, or of a group you identify with. I proposed in that comment that people might enjoy feeling righteous while watching out for the interests of themselves and their in-group. When you get mad about stuff and complain about it, you feel like you are accomplishing something.

Thus, someone who responded with a cost/benefit calculation to all respectful and reasonable demands to stop offending, but continued getting touchy about disrespectful blame-based demands to stop offending, would be pretty hard to game.

The problem is that other people only care if you are with them or against them; they don't care about your calculation.

The second problem is that it can be hard to distinguish these two things. People who have a sufficiently valid beef might be justified in making blame-based demands to stop offending, and your demand that they sound "respectful" and "reasonable" is itself unreasonable. Of course, people without a valid beef will use this exact same reasoning about why you can't make a "tone argument" against them asking for them to sound more respectful and reasonable.

There might be a correlation between offense and the "validity" of the underlying issue, but this correlation is low enough that it can be hard to predict the validity of the underlying issue from how the offense reaction is expressed, which weakens the utility of the strategy you propose for identifying beefs.

However, your strategy might be useful as a Schelling Point for what sort of demands you'll accept from others.

One difference between this post and the original essay I wrote which more people liked was that the original made it clearer that this was more advice for how people who were offended should communicate their displeasure, and less advice for whether people accused of offense should stop.

It may have been tough to get that message across, because the British salmon example is hypothetical. A real-world example of some group succeeding with claims of offense might be useful.

Comment author: Yvain 16 April 2011 08:43:34PM *  40 points [-]

Okay. I formally admit I'm wrong about the "should usually stop offensive behavior" thing (or, rather, I don't know if I'm wrong but I formally admit my previous arguments for thinking I was right no longer move me and I now recognize I am confused.)

I still believe that if you find something offensive, a request to change phrased in the language of harm-minimization is better than a demand to change phrased in the language of offense, but I don't know if anyone is challenging that.

Comment author: Wei_Dai 17 April 2011 09:12:02PM 19 points [-]

I still believe that if you find something offensive, a request to change phrased in the language of harm-minimization is better than a demand to change phrased in the language of offense, but I don't know if anyone is challenging that.

"Request to change" is low status, while "demand to change" is high status. The whole point of taking offense is that some part of your brain detects a threat to your status or an opportunity to increase status, so how can it be "better" to act low status when you feel offended? Well, it may be better if you think you should dis-identify with that part of your brain, and believe that even if some part of your brain cares a lot about status, the real you doesn't. But you have to make that case, or state it as an assumption, which you haven't, as far as I can tell (although I haven't carefully read this whole discussion).

Here's an example in case the above isn't clear. Suppose I'm the king of some medieval country, and one of my subjects publicly addresses me without kneeling or calling me "your majesty". Is it better for me to request him to do so in the language of harm-minimization ("I'm hurt that you don't consider me majestic"?), or to make a demand phrased in the language of offense?

Comment author: Vladimir_M 16 April 2011 09:29:25PM *  4 points [-]

I still believe that if you find something offensive, a request to change phrased in the language of harm-minimization is better than a demand to change phrased in the language of offense, but I don't know if anyone is challenging that.

I see at least two huge problems with the harm-minimization approach.

First, it requires interpersonal comparison of harm, which can make sense in very drastic cases (e.g. one person getting killed versus another getting slightly inconvenienced), but it usually makes no sense in controversial disputes such as these.

Second, even if we can agree on the way to compare harm interpersonally, the game-theoretic concerns discussed in this thread clearly show that naive case-by-case harm minimization is unsound, since any case-by-case consequences of decisions can be overshadowed by the implications of the wider incentives and signals they provide. This can lead to incredibly complicated and non-obvious issues, where the law of unintended consequences lurks behind every corner. I have yet to see any consequentialists even begin to grapple with this problem convincingly, on this issue or any other.

Comment author: Yvain 16 April 2011 09:32:24PM 2 points [-]

We may be talking at cross-purposes. Are you arguing that if someone says something I find offensive, it is more productive for me to respond in the form of "You are a bad person for saying that and I demand an apology" than "I'm sorry, but I was really hurt by your statement and I request you not make it again"?

Comment author: Vladimir_M 16 April 2011 10:05:44PM *  7 points [-]

It depends; there is no universal rule. Either response could be more appropriate in different cases. There are situations where if someone's statements overstep certain lines, the rational response is to deem this a hostile act and demand an apology with the threat of escalation. There are also situations where it makes sense to ask people to refrain from hurtful statements, since the hurt is non-strategic.

Also, what exactly do you mean by "productive"? People's interests may be fundamentally opposed, and it may be that the response that better serves the strategic interest of one party can do this only at the other's expense, with neither of them being in the right in any objective sense.

Comment author: kurokikaze 18 April 2011 09:38:07AM 1 point [-]

Maybe the most productive variant is just to ignore the offender/offense?

On a slightly unrelated note, one psychologist I know demonstrated to me that sometimes it's more useful to agree with the offense on the spot, whatever it is, and just continue with the conversation. So I think in some situations this too may be a viable option.

Comment author: torekp 17 April 2011 01:08:24PM 2 points [-]

To protect/raise the status of you yourself, or of a group you identify with. I proposed in that comment that people might enjoy feeling righteous while watching out for the interests of themselves and their in-group.

So I can raise the status of my group by becoming a frequent complainer and encouraging my fellows to do likewise?

I won't say that it never happens. I will say that the success prospects of that sort of strategy have been exaggerated of late.

Comment author: bgaesop 17 April 2011 11:07:37PM 2 points [-]

So I can raise the status of my group by becoming a frequent complainer and encouraging my fellows to do likewise?

Sure. See, for example, the rise in prominence of the Gnu Atheists (of which I am one).

Comment author: Vladimir_M 16 April 2011 08:52:35PM *  33 points [-]

If X doesn't offend you, why would you self-modify to make X offend you to stop people from doing X, since X doesn't offend you?

It's a Schellingian idea: in conflict situations, it is often a rational strategy to pre-commit to act irrationally (i.e. without regard to cost and benefit) unless the opponent yields. The idea in this case is that I'll self-modify to care about X far more than I initially do, and thus pre-commit to lash out if anyone does it.

If we have a dispute and I credibly signal that I'm going to flip out and create drama out of all proportion to the issue at stake, you're faced with a choice between conceding to my demands or getting into an unpleasant situation that will cost more than the matter of dispute is worth. I'm sure you can think of many examples where people successfully get the upper hand in disputes using this strategy. The only way to disincentivize such behavior is to pre-commit credibly to be defiant in face of threats of drama. In contrast, if you act like a (naive) utilitarian, you are exceptionally vulnerable to this strategy, since I don't even need drama to get what I want, if I can self-modify to care tremendously about every single thing I want. (Which I won't do if I'm a good naive utilitarian myself, but the whole point is that it's not a stable strategy.)
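The pre-commitment logic just described can be sketched as a tiny sequential game (a hedged toy model; the payoff numbers `STAKE` and `DRAMA_COST` are invented for illustration): once the offended party has credibly committed to creating drama, the responder's best reply switches from defiance to concession, even though the stake itself is trivial.

```python
# Payoffs to the responder (the party deciding whether to concede).
STAKE = 1          # value of the disputed matter to the responder
DRAMA_COST = 10    # cost of the drama a committed opponent will create

def best_response(opponent_committed_to_drama: bool) -> str:
    """Choose 'concede' or 'defy' by comparing the responder's payoffs."""
    payoff_concede = -STAKE
    # Defiance is free against an uncommitted opponent, but triggers
    # the full drama cost against a committed one.
    payoff_defy = -DRAMA_COST if opponent_committed_to_drama else 0
    return "concede" if payoff_concede > payoff_defy else "defy"

assert best_response(opponent_committed_to_drama=False) == "defy"
assert best_response(opponent_committed_to_drama=True) == "concede"
```

This is also why the counter-strategy in the text works: a responder who credibly pre-commits to treat `DRAMA_COST` as something they will absorb no matter what removes the commitment's payoff advantage, and the incentive to self-modify disappears.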

Now, the key point is that such behavior is usually not consciously manipulative and calculated. On the contrary -- someone flipping out and creating drama for a seemingly trivial reason is likely to be in honest-to-God severe distress, feeling genuine pain of offense and injustice. This is a common pattern in human social behavior: humans are extremely good at detecting faked emotions and conscious manipulation, and as a result, we have evolved so that our brains lash out with honest strong emotion that is nevertheless directed by some module that performs a game-theoretic assessment of the situation. This of course prompts strategic responses from others, leading to a strategic arms race without end.

The further crucial point is that these game-theoretic calculators in our brains are usually smart enough to assess whether the flipping out strategy is likely to be successful, given what might be expected in response. Basically, it is a part of the human brain that responds to rational incentives even though it's not under the control of the conscious mind. With this in mind, you can resolve the seeming contradiction between the sincerity of the pain of offense and the fact that it responds to rational incentives.

All this is somewhat complicated when we consider issues of group conflict rather than individual conflict, but the same basic principles apply.

Comment author: NancyLebovitz 19 April 2011 02:33:32PM 0 points [-]

Do you have strategies for distinguishing between game theoretic exaggeration of offense vs. natural offense?

Comment author: Vladimir_M 19 April 2011 06:52:28PM 9 points [-]

The question is better phrased by asking what will be the practical consequences of treating an offense as legitimate and ceasing the offending action (and perhaps also apologizing) versus treating it as illegitimate and standing your ground (and perhaps even escalating). Clearly, this is a difficult question of great practical value in life, and like every such question, it's impossible to give a simple and universally applicable answer. (And of course, even if you know the answer in some concrete situation, you'll need extraordinary composure and self-control to apply it if it's contrary to your instinctive reaction.)

Comment author: Eugine_Nier 19 April 2011 05:01:36PM 3 points [-]

Do you have strategies for distinguishing between game theoretic exaggeration of offense vs. natural offense?

I don't see the distinction you're trying to make.

Comment author: NancyLebovitz 19 September 2012 12:02:54PM 0 points [-]

Tentatively-- game theoretic exaggeration of offense will simply be followed by more and more demands. Natural offense is about a desire that can be satiated.

However, there's another sort of breakdown of negotiations that just occurred to me. If A asks for less than they want because they think that's all they can get, and/or because they're trying to do a utilitarian calculation, they aren't going to be happy even if they get it. This means they're likely to keep pushing for more, and then they start looking like a utility monster.

Comment author: Eugine_Nier 19 September 2012 10:44:47PM *  0 points [-]

Tentatively-- game theoretic exaggeration of offense will simply be followed by more and more demands. Natural offense is about a desire that can be satiated.

What do you mean by "satiated"?

From a utilitarian/consequentialist point of view, a desire being "satiated" simply means that the marginal utility gains from pursuing it further are less than opportunity cost of however much effort it takes.

Note that by this definition when a desire is satiated depends on how easy it is to pursue.
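Eugine_Nier's definition can be sketched as a toy model (numbers invented for illustration): satiation is just the point where the marginal utility of the next unit falls below the opportunity cost of pursuing it, so making pursuit cheaper moves the satiation point later.

```python
def satiation_point(marginal_utilities, effort_cost):
    """Return how many units are pursued before the marginal gain
    falls below the opportunity cost of the effort."""
    for n, mu in enumerate(marginal_utilities):
        if mu < effort_cost:
            return n
    return len(marginal_utilities)

# Diminishing marginal utility of, say, successive units of food.
mu = [10, 6, 3, 1, 0.5]

assert satiation_point(mu, effort_cost=2) == 3    # stops after 3 units
assert satiation_point(mu, effort_cost=0.4) == 5  # cheaper pursuit -> later satiation
```

The second assertion is the point of the "Note that" sentence above: nothing about the desire itself changed between the two calls, only the cost of pursuing it.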

Comment author: NancyLebovitz 19 September 2012 11:03:53PM 1 point [-]

If you're hungry you might feel as though you could just keep eating and eating. However, if enough food is available, you'll eventually hit a point where more food would make you feel worse instead of better, and stop. You'll get hungry again, but part of the cycle includes satiation. For purposes of discussion, I'm talking about most people here, not those with eating disorders or unusual metabolisms that affect their ability to feel satiety.

I think most people have a limit on their desire for status, though that might be more like the situation you describe. Few would turn down a chance to be the world's Dictator for Life, but they've hit a point where trying for more status than they've got seems like too much trouble.

Comment author: Costanza 16 April 2011 08:39:36PM *  6 points [-]

If X doesn't offend you, why would you self-modify to make X offend you to stop people from doing X, since X doesn't offend you?

Status games. There's a satirical blog which addresses this, at least in the context of Western sophisticates:

....the threshold for being offended is a very important tool for judging and ranking white people. Missing an opportunity to be outraged is like missing a reference to Derrida -- it's social death.

ETA: In the context of the Islamic reaction to the Mohammed cartoons, as well as the burning of a Koran, there may be some value for a demagogue in conjuring up atrocities by some demonized enemy in order to unite his (and in this case, it will be "his") followers. Westerners have done the same sorts of things as well, most obviously in wartime propaganda.

Comment author: steven0461 16 April 2011 08:42:26PM *  3 points [-]

If X doesn't offend you, why would you self-modify to make X offend you to stop people from doing X, since X doesn't offend you?

Surely there are a great many reasons other than offense why, for various different things X, it might be (or seem) useful to me to stop you from doing thing X. For example, if thing X is "mocking my beliefs": if my beliefs are widely respected, I and people like me will have a larger share of influence than if my beliefs are widely mocked.

Comment author: DanArmak 23 April 2011 02:19:46PM 0 points [-]

I'm not sure people can voluntarily self-modify in this way. Even if it's possible, I don't think most real people getting offended by real issues are primarily doing this.

I think such modification mostly happens on the level of evolution, especially cultural and memetic evolution. Individual humans are adaptation executers who can't deliberately self-modify in this way, but those who are more pre-modified are more evolutionarily successful.

Comment author: shokwave 16 April 2011 02:18:37PM 2 points [-]

My real-world working theory on utility monsters of the type you describe is basically to keep in mind that some people are more sensitive than others, but if anyone reaches utility monster levels (roughly indicated by whether I think "this is completely absurd"), I flip the sign on their utility function.

Comment author: jimrandomh 16 April 2011 03:02:11PM *  14 points [-]

Excuse me, but I think you should recheck your moral philosophy before you get the chance to act on that. Are you sure that shouldn't be "become indifferent with respect to optimizing their utility function", or perhaps "rescale their utility function to a more reasonable range"? Because according to my moral philosophy, explicitly flipping the sign of another agent's utility function and then optimizing is an evil act.

Comment author: TheOtherDave 16 April 2011 02:51:53PM 3 points [-]

My own real-world working theory is that if someone I respect in general expresses a sensitivity that I consider completely absurd, I reduce my level of commitment to my process for evaluating the absurdity of sensitivities.

Comment author: Desrtopa 16 April 2011 02:20:33PM 0 points [-]

So you consider it to be a major source of positive utility to antagonize them?

Comment author: shokwave 16 April 2011 06:52:52PM 2 points [-]

Tongue-in-cheek, yes.

Comment author: Perplexed 16 April 2011 01:56:14PM 2 points [-]

In a world where people make decisions according to this principle, one has the incentive to self-modify into a utility monster who feels enormous suffering at any actions of other people one dislikes for whatever reason.

The incentive is weaker than you seem to suggest. Surely, I gain nothing tangible by inducing people to tiptoe carefully around my minefield. Only a feeling of power, or perhaps some satisfaction at having caused inconvenience to my enemies. So, what is the more fruitful maxim to follow so as to discourage this kind of thing?

  • Don't feed the utility monster.

or

  • Poke the utility monster with a stick until it desensitizes.

Somehow I have to think that poking is a form of capitulation to the manipulation - it is voluntary participation in a manufactured drama.

Comment author: florian 16 April 2011 03:59:25PM 10 points [-]

The incentive is weaker than you seem to suggest. Surely, I gain nothing tangible by inducing people to tiptoe carefully around my minefield.

Yes, you do. If everything unpleasant to you causes you a huge amount of suffering instead of, say, mild annoyance, other people (utilitarians) will abstain from doing things that are unpleasant to you as the negative utility to you outweighs the positive utility to them.

Comment author: Perplexed 16 April 2011 05:18:16PM 4 points [-]

What you say is certainly true if the utility monster is simply exaggerating. But I understood VM to be discussing someone who claims offense where no offense (or negligible offense) actually exists. Or, someone who self-modifies to sincerely feel offended, though originally there was no such sensitivity.

But in any case, the real source of the problem in VM's scenario is adhering to an ethical system which permits one to be exploited by utility monsters - real or feigned. My own ethical system avoids being exploited because I accept personal disutility so as to produce utility for others only to the extent that they reciprocate. So someone who exaggerates the disutility they derive from, say, my humming may succeed in keeping me silent in their presence, but this success may come at a cost regarding how much attention I pay to their other desires. So the would-be utility monster is only hurting itself by feeding me false information about its utility function.
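Perplexed's reciprocity rule can be sketched as a toy model (a hedged illustration; the class, the 1.0 starting weight, and the halving penalty are all invented for the example): the disutility you accept on someone's behalf is weighted by their track record, so exaggerating may win one concession but discounts every future request.

```python
class ReciprocalAccommodator:
    """Accept personal disutility for others only in proportion to how
    much their past reports have proven trustworthy (a toy model)."""

    def __init__(self):
        self.weight = 1.0  # trust placed in the other party's reports

    def will_accommodate(self, their_disutility: float, my_cost: float) -> bool:
        # Discounted harm to them must outweigh the cost to me.
        return self.weight * their_disutility > my_cost

    def observe_exaggeration(self):
        # Detected exaggeration halves the weight on all future requests.
        self.weight *= 0.5

a = ReciprocalAccommodator()
assert a.will_accommodate(their_disutility=10, my_cost=4)      # humming stops
a.observe_exaggeration()
a.observe_exaggeration()
assert not a.will_accommodate(their_disutility=10, my_cost=4)  # 0.25 * 10 < 4
```

Unlike the naive harm-minimizer, this rule makes inflated reports self-defeating: the would-be utility monster is feeding false information into the very weight that determines how much attention its future desires receive.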

Comment author: Vladimir_M 17 April 2011 11:27:52PM 15 points [-]

But I understood VM to be discussing someone who claims offense where no offense (or negligible offense) actually exists.

The crucial point is that the level of offense at a certain action -- and I mean real, sincerely felt painful offense, not fake indignation -- is not something fixed and independent of the incentives people face. This may seem counterintuitive and paradoxical, but human brains do have functions that are not under direct control of the conscious mind, and are nevertheless guided by rational calculations and thus respond to incentives. People creating drama and throwing tantrums are a prime example: their emotions and distress are completely sincere, and their state of mind couldn't be further from calculated pretense, and yet whatever it is in their brains that pushes them into drama and tantrums is very much guided by rational strategic considerations.

Comment author: Strange7 17 April 2011 11:38:56PM 1 point [-]

Only in the sense that a country with secure borders is hurting itself by forfeiting potential gains from trade. If what they want is to avoid being contaminated by your ideas, to avoid being criticized, that minefield is doing its job just fine.