This is a fictional piece based on Sort By Controversial. You do not need to read that first, though it may make Scissor Statements feel more real. Content Warning: semipolitical. Views expressed by characters in this piece are not necessarily the views of the author.

I stared out at a parking lot, the pavement cracked and growing grass. A few cars could still be seen, every one with a shattered windshield or no tires or a bashed-in roof, one even lying on its side. Of the buildings in sight, two had clearly burned, only blackened reinforced concrete skeletons left behind. To the left, an overpass had collapsed. To the right, the road was cut by a hole four meters across. Everywhere, trees and vines climbed the remains of the small city. The collapsed ceilings and shattered windows and nests of small animals in the once-hospital behind me seemed remarkably minor damage, relatively speaking.

Eighty years of cryonic freeze, and I woke to a post-apocalyptic dystopia.

“It’s all like that,” said a voice behind me. One of my… rescuers? Awakeners. He went by Red. “Whole world’s like that.”

“What happened?” I asked. “Bioweapon?”

“Scissor,” replied a woman, walking through the empty doorway behind Red. Judge, he’d called her earlier.

I raised an eyebrow, and waited for elaboration. Apparently they expected a long conversation - both took a few seconds to get comfortable, Red leaning up against the wall in a patch of shade, Judge righting an overturned bench to sit on. It was Red who took up the conversation thread.

“Let’s start with an ethical question,” he began, then laid out a simple scenario. “So,” he asked once finished, “blue or green?”

“Blue,” I replied. “Obviously. Is this one of those things where you try to draw an analogy from this nice obvious case to a more complicated one where it isn’t so obvious?”

“No,” Judge cut in, “it’s just that question. But you need some more background.”

“There was a writer in your time who coined the term ‘scissor statement’,” Red explained. “It’s a statement optimized to be as controversial as possible, to generate maximum conflict. To get a really powerful scissor, you need AI, but the media environment of your time was already selecting for controversy in order to draw clicks.”

“Oh no,” I said, “I read about that… and the question you asked, green or blue, it seems completely obvious, like anyone who’d say green would have to be trolling or delusional or a threat to society or something… but that’s exactly how scissor statements work…”

“Exactly,” replied Judge. “The answer seems completely obvious to everyone, yet people disagree about which answer is obviously-correct. And someone with the opposite answer seems like a monster, a threat to the world, like a serial killer or a child torturer or a war criminal. They need to be put down for the good of society.”

I hesitated. I knew I shouldn’t ask, but… “So, you two…”

Judge casually shifted position, placing a hand on some kind of weapon on her belt. I glanced at Red, and only then noticed that his body was slightly tensed, as if ready to run. Or fight.

“I’m a blue, same as you,” said Judge. Then she pointed to Red. “He’s a green.”

I felt a wave of disbelief, then disgust, then fury. It was so wrong, how could anyone even consider green... I took a step toward him, intent on punching his empty face even if I got shot in the process.

“Stop,” said Judge, “unless you want to get tazed.” She was holding her weapon aimed at me, now. Red hadn’t moved. If he had, I’d probably have charged him. But Judge wasn’t the monster here… wait.

I turned to Judge, and felt a different sort of anger.

“How can you just stand there?” I asked. “You know that he’s in the wrong, that he’s a monster, that he deserves to be put down, preferably slowly and painfully!” I was yelling at Judge, now, pointing at Red with one hand and gesticulating with the other. “How can you work with him!?”

Judge held my eyes for a moment, unruffled, before replying. “Take a deep breath,” she finally said, “calm yourself down, take a seat, and I’ll explain.”

I looked down, eyed the tazer for a moment, closed my eyes, then did as she asked. Breathe in, breathe out. After a few slow breaths, I glanced around, then chose a fallen tree for a seat - positioning Judge between Red and myself. Judge raised an eyebrow, I nodded, and she resumed her explanation.

“You can guess, now, how it went down. There were warning shots, controversies which were bad but not bad enough to destroy the world. But then the green/blue question came along, the same question you just heard. It was almost perfectly split, 50/50, cutting across political and geographical and cultural lines. Brothers and sisters came to blows. Bosses fired employees, and employees sued.  Everyone thought they were in the right, that the other side was blatantly lying, that the other side deserved punishment while their side deserved an apology for the other side’s punishments. That they had to stand for what was right, bravely fight injustice, that it would be wrong to back down.”

I could imagine it. What I felt, toward Red - it felt wrong to overlook that, to back down. To let injustice pass unanswered.

“It just kept escalating, until bodies started to pile up, and soon ninety-five percent of the world population was dead. Most people didn’t even try to hole up and ride out the storm - they wanted to fight for what was right, to bring justice, to keep the light in the world.”

Judge shrugged, then continued. “There are still pockets here and there, where one side or the other gained the upper hand and built a stronghold. Those groups still fight each other. But most of what’s left is ruins, and people like us who pick over them.”

“So why aren’t you fighting?” I asked. “How can you overlook it?”

Judge sighed. “I was a lawyer, before Scissor.” She jerked her head toward Red. “He was too. We even came across each other, from time to time. We were both criminal defense attorneys, with similar clients in some ways, though very different motivations.

“Red was… not exactly a bleeding heart, but definitely a man of principles. He’d made a lot of money early on, and mostly did pro-bono work. He defended the people nobody else would take. Child abusers, serial killers, monsters who everyone knew were guilty. Even Red thought they were guilty, and deserved life in prison, maybe even a death sentence. But he was one of those people who believed that even the worst criminals had to have a proper trial and a strong defense, because it was the only way our system could work. So he defended the monsters. Man of principle.

“As for me, I was a mob lawyer. I defended gangsters, loan sharks, arms dealers… and their friends and families. It was the families who were the worst - the brothers and sons who sought sadistic thrills, knowing they’d be protected. But it was interesting work, the challenge of defending the undefendable, and it paid a fortune.

“We hated each other, back in the day. Still do, on some level. He was the martyr, the white knight putting on airs of morality while defending monsters. And I was the straightforward villain, fighting for money and kicks. But when Scissor came, we had one thing in common: we were both willing to work with monsters. And that turned out to be the only thing which mattered.”

I nodded. “So you hated each other, but you’d both spent years working with people you hated, so working with each other was… viable. You even had a basis to trust one another, in some weird way, because you each knew that the other could work with people they hated.”

“Exactly. In the post-scissor world, people who can work with monsters are basically the only people left. We form mixed groups - Red negotiates with Greens for us, I negotiate with Blues. They can tell, when they ask whether you’re Blue or Green - few people can lie convincingly, with that much emotion wrapped up in it. A single-color group would eventually encounter the opposite single-color group, and they’d kill each other. So when we meet other groups, they have some Blues and some Greens, and we don’t fight about it. We talk, we trade, we go our separate ways. We let the injustice sit, work with the monsters, because that’s the only way to survive in this world.

“And now you have to make a choice. You can go out in a blaze of glory, fight for what you know is right, and maybe take down a few moral monsters in the process. Or you can choose to live and let live, to let injustice go unanswered, to work with the monsters you hate. It’s up to you.”

54 comments

Surely the story would be more believable if the POV character endorsed green rather than blue. As it is, I don't find them a very sympathetic character, and I can't imagine a reasonable audience would either.

Nice meta-comment. But it doesn't really work; green was very well chosen so that any right-thinking person with a modicum of brains and heart immediately detects it as both wrong and morally repugnant. To such an extent that I found it broke my suspension of disbelief that half of the future society would believe in green.

[anonymous]

Which is a meta-comment on the present day, I think, where the blue/red divide is such that one side clearly aligns with facts and physical reality while the other relies on made-up stories.

Except of course in the present day, the reason for one of these sides seems almost to be a self-identity thing, where they don't really believe in their color's precepts; they just identify with the people in it more, so that's the color they fly.

"The reason for one of these sides seems almost to be a self identity thing, where they don't really believe in their color's precepts, they just identify with the people in it"

Based on that, I know exactly the bastards you're talking about, and I don't believe anyone would be able to tolerate them as compatriots if they weren't totally dark triad, at least to some degree. So we're agreed we need to stand up for what's right and shut them all down before something serious happens?

Interestingly, if one looks at this story in terms of "what message is this story sending", then it feels like the explicit and implicit message are the opposites of each other.

The explicit message seems to be something like "cooperation with the other side is good, it can be the only way to survive".

But then if we think of this representing a "pro-cooperation side", we might notice that the story doesn't really give any real voice to the "anti-cooperation side" - the one which would point out that actually, there are quite a few situations when you absolutely shouldn't cooperate with monsters. The setup of the story is such that it can present a view from which the pro-cooperation side is simply correct, as opposed to looking at a situation where it's more questionable.

In the context of a fictional story making a point about the real world, I would interpret "cooperating with the other side" to mean something like "making an honest attempt to fairly present the case for the opposite position". Since this story doesn't do that, it reads to me like it's saying that we should cooperate with those who disagree with us... while at the same time not cooperating with the side that it disagrees with. 

I would word the intended message as "whether or not someone shares our values is not directly relevant to whether one should cooperate with them". Moral alignment is not directly relevant to the decision; it enters only indirectly, in reasoning about things like the need for enforcement or reputational costs. Monstrous morals should not be an immediate deal-breaker in their own right; they should weigh on the scales via trust and reputation costs, but that weight is not infinite.

I don't really think of it as "pro-cooperation" or "anti-cooperation"; there is no "pro-cooperation" "side" which I'm trying to advocate here.

Makes one wonder what kind of story could justify the opposite moral. I do think that moral would be "All that evil needs to win is for the good people to do nothing"

A story could also...give us the actual statement. I say 'If scissor statements are real, name 3.'

It’s peace vs conflict. Peace is cooperation and conflict is anti-cooperation. Is peace always the right answer? Maybe not, but it’s the one I’m going to pick most of the time.

I think it makes a pretty good case for the anti-cooperation side: you might get to kill some of your enemies before you get killed in turn. However, the correctness of any argument can only be judged by those who remain alive.

the correctness of any argument can only be judged by those who remain alive.

2+2=4

It can be judged by those who are alive, those who were alive, those who will be alive...need I say more?

If you think dead people can do arithmetic, I think you need to explain how that would work.

While they are dead, no. While they were alive - yes, they could. (This is interesting in that, properly performed, (a specified) computation gets the same result, whatever the circumstances. More generally, Fermat argued that: for positive integers a, b, c and integer n > 2, a^n + b^n = c^n:

  • had no solutions
  • was provable

He might have been wrong about the difficulty of proving it, but he was right about the above. If perhaps for the wrong reasons. (Can we prove Fermat didn't have a proof?))

A while ago, someone encouraged me to read Homage to Catalonia, citing it as a book that'd dissuade people from revolutionary justice. And in particular, dissuade people from the notion that they should carefully guard who they work with, like a blue working with a green.

In fact, I found the book had the opposite effect. It describes what amounts to a three-way war between anarchists, communists, and fascists during the Spanish Civil War. During that war, foreign communists and capitalists both benefited from continuing top-down company-owned business models in certain countries, and so strongly dissuaded a Spanish workers' revolution, an agenda which Spanish stalinists cooperated with to continue receiving funding. The anarchists wanted that revolution, but were willing to team up with the stalinist bloc against the fascists, it seems, because they couldn't fight both, and they saw the fascists as a greater threat. The stalinists (who did not want revolution) took advantage of the anarchists' comparatively worse position to neuter them, rolling back worker-controlled factories and locally-run governments, which were a threat to foreign interests.

The stalinist bloc would frame “winning the war” as a means to get the anarchists to surrender all their hard-won progress, saying, “well, we can fight over worker-owned factories, or we can fight together against the fascists,” essentially holding the country hostage, using the fascists as a threat to get what they wanted. And in the end, they both lost to Franco.

This example seems to be a primary reason for not working with people who aren’t value-aligned: they’ll undermine your position, using the excuse of “unity against the enemy.” Once you give ground on local worker-led patrols instead of police, the non-value-aligned group will start pressing for a return to centralized government, imperially-owned factories, and worker exploitation. Give them an inch, they take a mile.  

Moloch says, "throw what you love into the fire and I will grant you victory," but any such bargain is made under false pretenses. In making the deal, you've already lost. 

My model is that a blue and green working together would constantly undermine the other's cause, and when that cause is life and death, this is tantamount to working with someone towards your own end. Some things matter enough that you shouldn't capitulate, where capitulation is the admission that you don't really hold the values you claim to hold -- it would be like saying you believe in gravity, while stepping off a cliff with the expectation that you'll float.

Good example. I'll use this to talk about what I think is the right way to think about this.

First things first: true zero-sum games are ridiculously rare in the real world. There's always some way to achieve mutual gains - even if it's just "avoid mutual losses" (as in e.g. mutual assured destruction). Of course, that does not mean that an enemy can be trusted to keep a deal. As with any deal, it's not a good deal if we don't expect the enemy to keep it.

The mutual gains do have to be real in order for "working with monsters" to make sense.

That said... I think people tend to have a gut-level desire to not work with monsters. This cashes out as motivated stopping: someone thinks "ah, but I can't really trust the enemy to uphold their end of the deal, can I?"... and they use that as an excuse to not make any deal at all, without actually considering (a) whether there is actually any evidence that the enemy is likely to break the deal (e.g. track record), (b) whether it would actually be in the enemy's interest to break the deal, or (c) whether the deal can be structured so that the enemy has no incentive to break it. People just sort of horns-effect, and assume the Bad Person will of course break a deal because that would be Bad.

(There's a similar thing with reputational effects, which I expect someone will also bring up at some point. Reputational effects are real and need to be taken into consideration when thinking about whether a deal is actually net-positive-expected-value. But I think people tend to say "ah, but dealing with this person will ruin my reputation"... then use that as an excuse to not make a deal, without considering (a) how highly-visible/salient this deal actually is to others, (b) how much reputational damage is actually likely, or (c) whether the deal can plausibly be kept secret.)

true zero-sum games are ridiculously rare in the real world. There's always some way to achieve mutual gains - even if it's just "avoid mutual losses"

I disagree.

I think you're underestimating how deep value differences can be, and how those values play into everything a person does. Countries with nuclear weapons who have opposing interests are actively trying to destroy each other without destroying themselves in the process, and if you're curious about the failures of MAD, I'd suggest reading The Doomsday Machine, by Daniel Ellsberg. If that book is to be taken as mostly true, and the many-worlds interpretation (MWI) is to be taken as true, then I suspect that many, many worlds were destroyed by nuclear missiles. When I found this unintuitive, I spent a day thinking about quantum suicide to build that intuition: most instances of all of us are dead because we relied on MAD. We're having this experience now where I'm writing this comment and you're reading it because everything that can happen will happen in some branch of the multiverse, meaning our existence is only weak evidence for the efficacy of MAD, and all of those very close calls are stronger evidence for our destruction in other branches. This doesn't mean we're in the magic branch where MAD works; it means we've gotten lucky so far. Our futures are infinite split branches of parallel mes and yous, and in most of those where we rely on strategies like MAD, we die.

...

Scissor statements reveal pre-existing differences in values; they don't create them. There really are people out there who have values that result in them doing terrible things. Furthermore, beliefs and values aren't just clothes we wear -- we act on them, and live by them. So it's reasonable to assume that if someone has particularly heinous beliefs and values, they act on those beliefs and values.

In the SSC short story, scissor statements are used to tear apart Mozambique, and in real life, we see propagandists using scissor statements to split up activist coalitions. It's not hypothetical; divide and conquer is a useful strategy that has probably been used since the dawn of time. But not all divides are created equal.

In the 1300s in rural France, peasants revolted against the enclosure of the commons, and since many of these revolts were led by women, the nascent state officials focused their efforts on driving a (false) wedge between men and women, accusing those women of being witches & followers of Satan. Scissor statements (from what I can tell) are similar in that they're a tactic used to split up a coalition, but different in that they're not inventing conflict. It doesn't seem to make much of a difference in terms of outcome (conflict) once people have sorted themselves into opposing groups, but equating the two is a mistake. You're losing something real if you ally yourself with someone you're not value-aligned with, and you're not losing something real if you're allying yourself with someone you are value-aligned with, but mistakenly think is your enemy. The coalition of people who share your values loses strength, because now another group that wants to destroy you has more power.

If two groups form a coalition, and group_A values "biscuits for all," and group_B values "cookies for all," and someone tries to start a fight between them based on this language difference, it would be tragic for them to fight. Because it should be obvious that what they want is the same thing, they're just using different language to talk about it. And if they team up, group_A won't be tempted to deny group_B cookies, because they deep-down value cookies for all, including group_B. It's baked into their decision making process.

(And if they decide that what they want to spend all their time doing is argue over whether they should call their baked food product "cookies" or "biscuits," then what they actually value is arguing about pedantry, not "cookies for all.")

But in a counterexample, if group_A values "biscuits for all" and group_B values "all biscuits for group_B," then group_B will find it very available and easy to think of strategies which result in biscuits for group_B and not group_A. If someone is having trouble imagining this, that may be because it's difficult to imagine someone only wanting the cookies for themselves, so they assume the other group wouldn't defect, because "cookies for all? What's so controversial about that?" Except group_B fundamentally doesn't want group_A getting their biscuits, so any attempt at cooperation is going to be a mess, because group_A has to keep double-checking to make sure group_B is really cooperating, because it's just so intuitive for group_B not to cooperate that they'll have trouble avoiding it. And so giving group_B power is like giving someone power when you know they're later going to use it to hurt you and take your biscuits.

And group_B will, because they value group_B having all the biscuits, and have a hard time imagining that anyone would actually want everyone to have all the biscuits, unless they're lying or virtue signalling or something. And they'll push and push because it'll seem like you're just faking.

...

I find the way people respond to scissor statements ("don't bring that up, it's a scissor statement/divisive!") benefits only the status quo. And if the status quo benefits some group of people, then of course that group is going to eschew divisiveness. 

...

To bring it back to the Spanish Civil War, the communists were willing to ally themselves with big businesses, businesses who were also funding the fascists. They may have told themselves it was a means to an end, and for all I know (because my knowledge of the Spanish Civil War is limited to only a couple of books) the communists may have been planning to betray those big business interests, in the end. But in the meantime, they advanced the causes of those big business interests, and undermined the people who stood against everything the fascists fought for. It's difficult to say what would've happened if the anarchists had tried a gambit to force the hand of big business to pick a side (communist or fascist) or simply ignored the communists' demands. But big business interests were more supportive of Franco winning (because he was good for business), and their demands of the communists in exchange for money weakened the communists' position, and because the communists twisted the arms of the anarchists & the anarchists went along with it, this weakened their position, too. And in the end, the only groups that benefited from that sacrifice were big business interests and Franco's fascists.

...

whether the deal can plausibly be kept secret.

That's a crapshoot, especially in the modern day. Creating situations where groups need to keep secrets in order to function is the kind of strategy Julian Assange used to cripple government efficiency. The correct tactic is to keep as few secrets from your allies as you can, because if you're actually allies, then you'll benefit from the shared information. 

The effectiveness or ineffectiveness of MAD as a strategy is not actually relevant to whether nuclear war is or is not a zero-sum game. That's purely a question of payoffs and preferences, not strategy.

You're losing something real if you ally yourself with someone you're not value-aligned with, and you're not losing something real if you're allying yourself with someone you are value-aligned with, but mistakenly think is your enemy. The coalition of people who share your values loses strength, because now another group that wants to destroy you has more power.

The last sentence of this paragraph highlights the assumption: you are assuming, without argument, that the game is zero-sum. That gains in power for another group that wants to destroy you is necessarily worse for you.

This assumption fails most dramatically in the case of three or more players. For instance, in your example of the Spanish Civil War, it's entirely plausible that the anarchist-communist alliance was the anarchists' best bet - i.e. they honestly preferred the communists over the fascists, the fascists wanted to destroy them even more than the communists did, and an attempt at kingmaking was the only choice the anarchists actually had the power to make. In that world, fighting everyone would have seen them lose without any chance of gains at all.

In general, the key feature of a two-player zero-sum game is that anything which is better for your opponent is necessarily worse for you, so there is no incentive to cooperate. But this cannot ever hold between all three players in a three-way game: if "better for player 1" implies both "worse for player 2" and "worse for player 3", then player 2 and player 3 are incentivized to cooperate against player 1. Three-player games always incentivize cooperation between at least some players (except in the trivial case where there's no interaction at all between some of the players). Likewise in games with more than three players. Two-player games are a weird special case.

That all remains true even if all three+ players hate each other and want to destroy each other.
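To make that concrete, here is a minimal sketch of one such game - an invented three-player "attack" game, zero-sum by construction, not taken from the comment above - in which two players do strictly better by cooperating no matter what the third does:

    # Each of players 0, 1, 2 picks one opponent to attack. An attack gives
    # the attacker +1 and the target -1, so payoffs always sum to zero.
    def payoffs(targets):
        # targets[i] is whom player i attacks; player i's payoff is +1 for
        # their own attack, minus one per player attacking them.
        return tuple(1 - sum(1 for t in targets if t == i) for i in range(3))

    for p0_target in (1, 2):
        gang_up = payoffs((p0_target, 0, 0))  # players 1 and 2 both hit player 0
        fight = payoffs((p0_target, 2, 1))    # players 1 and 2 hit each other
        assert sum(gang_up) == sum(fight) == 0  # zero-sum either way
        # Whichever target player 0 picks, players 1 and 2 each do strictly
        # better by ganging up on player 0 than by fighting each other:
        assert gang_up[1] > fight[1] and gang_up[2] > fight[2]
        print(p0_target, gang_up, fight)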

But in a counterexample, if group_A values "biscuits for all" and group_B values "all biscuits for group_B," then group_B will find it very available and easy to think of strategies which result in biscuits for group_B and not group_A. If someone is having trouble imagining this, that may be because it's difficult to imagine someone only wanting the cookies for themselves, so they assume the other group wouldn't defect, because "cookies for all? What's so controversial about that?" Except group_B fundamentally doesn't want group_A getting their biscuits, so any attempt at cooperation is going to be a mess, because group_A has to keep double-checking to make sure group_B is really cooperating, because it's just so intuitive for group_B not to cooperate that they'll have trouble avoiding it. And so giving group_B power is like giving someone power when you know they're later going to use it to hurt you and take your biscuits.

Note that, in this example, you aren't even trying to argue that there's no potential for mutual gains. Your actual argument is not that the game is zero-sum, but rather that there is overhead to enforcing a deal.

It's important to flag this, because it's exactly the sort of reasoning which is prone to motivated stopping. Overhead and lack of trust are exactly the problems which can be circumvented by clever mechanism design or clever strategies, but the mechanisms/strategies are often nonobvious.

That gains in power for another group that wants to destroy you is necessarily worse for you.

Yes. In many real-life scenarios, this is true. In small games where the rules are blatant, it's easier to tell if someone is breaking an agreement or trying to subvert you, so model games aren't necessarily indicative of real-world conditions. For a real life example, look at the US's decision to fund religious groups to fight communists in the Middle East. If someone wants to destroy you, during the alliance they'll work secretly to subvert you, and after the alliance is over, they'll use whatever new powers they have gained to try to destroy you.

People make compromises that sacrifice things intrinsic to their stated beliefs when they believe it is inevitable they'll lose — by making the "best bet" they were revealing that they weren't trying to win, that they've utterly given up on winning. The point of anarchy is that there is no king. For an anarchist to be a kingmaker is for an anarchist to give up on anarchy.

And from a moral standpoint, what about the situation where someone is asked to work with a rapist, pedophile, or serial killer? We're talking about heinous beliefs/actions here, things that would make someone a monster, not mundane "this person uses Ruby and I use Python" disagreements. What if working with a {rapist, pedo, serial killer} means they live to injure and kill another day? If that's the outcome, by working with them you're enabling that outcome by enabling them.

The last sentence of this paragraph highlights the assumption: you are assuming, without argument, that the game is zero-sum.

On the contrary, it highlights no such thing.*

*You may argue that this is the case with regard to that 'assumption' - but you have not proved it.

This need not be the case, for the argument to be correct:

That gains in power for another group that wants to destroy you is necessarily worse for you.

yes - and this is so even if the game isn't zero sum.

This is a nice story, and nicely captures the internal dissonance I feel about cooperating with people who disagree with me about my "pet issue", though like many good stories it's a little simpler and more extreme than what I actually feel.

I like it, but 95% seems surprisingly high. Surely there are plenty of other people out there with a similar psychological makeup to Red and Judge, or to the protagonist (who can at least be convinced, with a sufficient threat, to listen before punching). But I shouldn't fight the hypothetical too much...

If you also consider the indirect deaths due to the collapse of civilization, I would say that 95% lies within the realm of reason. You don’t need anywhere close to 95% of the population to be fully affected by the scissor to bring about 95% destruction.

Without the statement, it seems unlikely there'd be only 2 answers. So, why not fight the hypothetical? If someone asks 'in a world where 2+2=3, how does math work', I have no answer without a map of this strange world.

I imagine that the protagonist can be more easily convinced because of the state of the new world; the scissor statement may not be as much of an issue in the post-apocalyptic world where there are more important things.

I feel like this story describes very well the compromises that certain religious individuals make, or don't make, regarding abortion.  

I liked this story enough to still remember it, separately from the original Sort By Controversial story. Trade across a moral divide is a useful concept to have handles for.

The claim that scissor statements are dangerous is itself a scissor statement: I think it's obviously false, and will fight you over it. Social interaction is not that brittle. It is important to notice the key ruptures between people's values/beliefs. Disagreements do matter, in ways that sometimes rightly prevent cooperation.

World population is ~2^33, so 33 independent scissor statements would set everyone frothing in a total war of all against all. Except people are able to fluidly navigate much, much higher levels of difference and complexity than that. Every topic and subculture has fractal disagreements, each battle fiercely fought, and we're basically fine. Is it productive to automatically collaborate on a project with someone who disagrees with your fundamental premises? How should astronomy and astrology best coexist, especially when one of the two is badly out-numbered?
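A quick sanity check of that arithmetic, as a minimal sketch (the population figure is a rough assumption):

    world_population = 8_000_000_000              # rough figure, assumed
    print(2 ** 33)                                # 8589934592: about one per person
    print((world_population - 1).bit_length())    # 33 independent 50/50 splits
    # 33 independent binary answers give 2**33 possible answer-profiles, so on
    # average each living person lands in a unique bucket - everyone disagrees
    # with everyone else on at least one statement.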

Vigorous, open-ended epistemic and moral competition is hard. Neutrality and collaboration can be useful, but are always context-sensitive and provisional. They are ongoing negotiations, weighing all the different consequences and strategies. A fighting couple can't skip past all the messy heated asymmetric conflicts with some rigid absolutes about civil discourse.

Vigorous, open-ended epistemic and moral competition is hard. Neutrality and collaboration can be useful, but are always context-sensitive and provisional. They are ongoing negotiations, weighing all the different consequences and strategies. A fighting couple can't skip past all the messy heated asymmetric conflicts with some rigid absolutes about civil discourse.

I agree with this. The intended message is not that cooperation is always the right choice, but that monstrous morals alone should not be enough to rule out cooperation. Fighting is still sometimes the best choice.

What are your thoughts on Three Worlds Collide?

This is a beautifully written story. One criticism is that its Moral assumes the Blue cryonicist acts as he does in the story... which makes for an entertaining story, but limits the applicability of the Moral. In particular, signing up for cryonics and going out in a blaze of glory are really quite opposite personality traits, if you think about it.

Just shows how effective the disagreement is at getting people to care deeply about it, I guess.

Socially oblivious question: Is this part of that genre of "posts that are secretly about a specific person or employer"? (aka "subtweeting")

Replace blue and green with Protestant and Catholic, 95% with 60%, and what you get is the Thirty Years' War and the beginning of the modern world order.

Up to 60% "in some areas of Germany", Wikipedia says. Considering Europe as a whole, it says 8 million out of about 75 million. But yes, the Thirty Years' War ended when the parties finally got it through their heads that neither side would ever win. That beginning of the modern world order was the great agreement to disagree that was the Peace of Westphalia, and even that involved conferences at two different places because the parties couldn't bear to all meet each other together.

Thirty years. One generation. Maybe no-one ever changed their minds; it's just that the ones who grew up with it and then came into power realised that none of it had ever mattered.

Should a country where cryonic preservation is routine try to take over one where it is forbidden?

[anonymous]

Should a country where cryonic preservation is routine try to take over one where it is forbidden?


Or a country where anti-aging medicine delivered in international aid is being stolen and wasted to prevent out-groups from receiving treatment?

 

It's a moderately interesting question, though only because our current moral frameworks privilege "do nothing and let something bad happen" over "do something and cause something bad but less bad to happen".

It's just the trolley problem restated. The solution I have for the trolley problem is viewing the agent in front of the lever as a robotic control system. Every timestep, the control system must output a control packet, on a CAN or RS-485 bus. There is nothing special or privileged between a packet that says "keep the actuators in their current position" and one that says "move to flip the lever".

Therefore the trolley problem vanishes from a moral sense.  From a legal sense, a court of law might try to blame the robot, however.
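A minimal sketch of that framing; the packet values and the bus call are hypothetical stand-ins, not a real CAN or RS-485 API:

    HOLD = b"\x00"  # keep actuators in their current position
    FLIP = b"\x01"  # move the actuator that flips the lever

    def control_step(flip_requested: bool) -> bytes:
        # Every timestep some packet must go out on the bus; "do nothing"
        # is just another command, with no privileged status.
        return FLIP if flip_requested else HOLD

    for tick in range(3):
        packet = control_step(flip_requested=False)
        # bus.write(packet)  # hypothetical bus handle; a real system sends here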

Should a country where cryonic preservation is routine try to take over one where it is forbidden?

Aside from 'yes' or 'no' or 'it depends on X', there are also other actions that can be taken.

I think it's not just that the old generation has died out. It's also that the conflict theorists shut up for a while after such bloodshed and gave people like Hugo Grotius a window of opportunity to create international law.

A similar thing, by the way, happened in Europe after WWII. I've written about it here. I wonder whether this opening of a window of opportunity after a major catastrophe is a common occurrence. If so, working out possible courses of action in advance, so that they can be quickly applied once a catastrophe is over, may be a useful strategy.

Curated. I don't know whether or not LessWrong needs this story – I hope that it doesn't – but increasingly I can see this being a story that the world benefits from existing. Short, evocative, and with a good point. Curated for your consideration.

jmh

This largely captures my views about myself and choosing to follow a generally civil life -- accepting that I am not the moral authority, judge and jury even when I find my own moral senses insulted by various actions from others.

I think for me, though, it's about not even making the choice between blue or green explicitly - perhaps maintaining an internal ambiguity, that I may well be a monster (when I decide to say eff it all to civil conventions and laws) rather than the moral person I claim (or appear) to be by limiting my actions and letting social rules govern various outcomes.

If you follow a law that is grossly unjust because it's a law, or follow a social convention that is grossly unjust because it is a social convention, you would be actively choosing to contribute to that injustice. Sticking out your neck & going against the grain based on your best judgment is (I thought) a kind of rationalist virtue. 

Sticking out your neck is only a virtue if it ends up giving you greater expected utility than following the social norm. Sticking out your neck because you like the idea of yourself as some sort of justice warrior and ruining your entire life for it is the non-rationalist loser's choice.

The point of John's story is that both Red and Judge are better off working together than they would be if they fought, even though they strongly disagree on the scissor statement. Fighting would in effect be defecting even when the payoff from defection is lower than the payoff from cooperation. This is basically how all of society operates on a daily basis. It's virtually impossible to only cooperate with people who share your exact values unless you choose to live in poverty working for some sort of cult or ineffective commune.

What makes Judge and Red special is that they have a very advanced ability to favor cooperation even when they have a strong emotional gut reaction to defect. And their ability is much greater than that of the general populace who could get along with people just fine over minor disagreements, but couldn't handle disagreeing over the scissor statement.
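A minimal sketch with invented payoff numbers (illustrative only, not from the story):

    # Each entry is (Judge's payoff, Red's payoff). The numbers are made up,
    # but chosen so mutual cooperation beats every outcome involving fighting.
    payoffs = {
        ("cooperate", "cooperate"): (3, 3),
        ("cooperate", "fight"): (0, 1),   # lone fighter gains little, victim loses
        ("fight", "cooperate"): (1, 0),
        ("fight", "fight"): (-2, -2),     # both likely end up dead or maimed
    }
    best = max(payoffs, key=lambda actions: sum(payoffs[actions]))
    print(best, payoffs[best])            # ('cooperate', 'cooperate') (3, 3)
    # Unlike a classic prisoner's dilemma, here mutual cooperation (3) also
    # beats a successful lone defection (1), so fighting is a pure loss.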

I think you're confusing rationality for plain self-interest. If you have something to protect, then it may be reasonable to sacrifice personal comfort or even your life to protect it. 

Also, your comment implies that the only reason you'd fight for something other than yourself is out of "liking the idea of yourself as some sort of justice warrior," as opposed to caring about something and believing you can win by applying some strategy. And saying you'd "ruin your life" implies a set of values by which a life would count as "ruined."

jmh

Seems like you're rejecting the idea that a "grossly unjust" law could be a scissors statement?

I think it can be both, but I don't have the sense that something being a scissors statement means that one should automatically ignore the scissors statement and strike solely at the person making the statement. Scissors statement or not, if a law is grossly unjust, then resist it.

Scissor statements reveal pre-existing differences in values; they don't create them. There really are people out there who have values that result in them doing terrible things.

It's one thing when you make that choice for yourself.  This is about a disagreement so heinous that you can't countenance others living according to a different belief than your own.  I read JMH as arguing for a humility that sometimes looks like deferring to the social norm, so that you don't risk forcing your own (possibly wrong) view on others.  I suspect they'd still want to live their life according to their best (flawed) judgment... just with an ever-present awareness that they are almost certainly wrong about some of it, and possibly wrong in monstrous ways.

This is about a disagreement so heinous that you can't countenance others living according to a different belief than your own.

 

Beliefs and values aren't just clothes we wear -- we act on them, and live by them. (And don't confuse me for talking about what people say their values are, vs what they act on. Someone can say they value "liberation for all," for example, but in practice behave in accordance with the value "might makes right." Even if someone feels bad about it, if that's what they're acting out, over and over again, then that's their revealed preference. In my model, what people do in practice & their intent are what is worth tracking.) So it's reasonable to assume that if someone has particularly heinous beliefs and values, they act on those beliefs and values.

I read JMH as arguing for a humility that sometimes looks like deferring to the social norm

Why should that particular humility be privileged? In choosing to privilege deference to a social norm or humility over $heinous_thing, one is saying that a {sense of humility|social norm} is more important than the $heinous_thing, and that is a value judgment.

I suspect they'd still want to live their life according to their best (flawed) judgment... just with an ever-present awareness that they are almost certainly wrong about some of it, and possibly wrong in monstrous ways.

If you think your judgment is wrong, you always have the option to learn more and get better judgment. Being so afraid of being wrong that a person will refuse to act is a kind of trap, and I don't think people act that way in the rest of their lives. If you're wiring an electrical system for your house, and you have an ever-present awareness that you're almost certainly wrong about some of it, you're not going to keep doing what you're doing. You'll crack open a textbook, because dying of electrocution or setting your house on fire is an especially bad outcome, and one you sincerely care about avoiding. Likewise, if you care about some moral value, if it feels real to you, then you'll act on it.

I read JMH as arguing for a humility

In order to argue for humility, one would have to give the "scissors statement".

your own (possibly wrong) view on others.

Have you read the fictional sequences?

+ | 1 2
1 | 1 2
2 | 2 2

The rough table above describes a system in which "+" denotes a maximum operator. Applied to (x, x) it returns x; applied to (x, x+1) or (x+1, x) it returns x+1. An operator can be redefined, though a question concerning an unknown system, "How does a system where 2+2=3 work?", is open-ended and doesn't contain enough information for a unique solution. It does have enough information for an arbitrary solution like:

"+" takes two inputs and returns their sum (in conventional terms) minus one. This has a number of properties:

a+b = b+a

a+b+c = a+(c+b) = b+(a+c) = b+(c+a) = c+(a+b) = c+(b+a)

That is, order independence. What it 'lacks' relative to the normal definition is distributional invariance, i.e. 1+1+1+1 is different from 2+2 because a more distributed quantity loses more in agglomeration. Though it does have a different 'equivalence' - that is, for a given relationship that can be described in "the original system" a new one can be found.

1+2+2+1 = 2+2

or even (not with integers, as it turns out)

x+x+x+x = 2+2

can be solved because in original terms the left side is 4x-3 (three applications of "+", each subtracting one) and the right side is 2+2 = 3, so 4x-3 = 3, 4x = 6, x = 3/2.

(3/2)+(3/2)+(3/2)+(3/2) (new system) = 2+2 (new system)
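A minimal sketch of this redefined operator, for concreteness:

    from functools import reduce

    def plus(a, b):
        # the redefined "+": conventional sum minus one, so plus(2, 2) == 3
        return a + b - 1

    def chain(*xs):
        # a left-to-right chain of k terms equals sum(xs) - (k - 1)
        return reduce(plus, xs)

    print(plus(2, 2))         # 3
    print(chain(1, 2, 2, 1))  # 3, the same as plus(2, 2)
    print(chain(1, 1, 1, 1))  # 1: no "distributional invariance"
    x = 3 / 2
    print(chain(x, x, x, x))  # 3.0, confirming x = 3/2 solves x+x+x+x = 2+2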


Even the question itself becomes meaningless, doesn't it?

The question (w.r.t. 2+2=3) is, however, meaningless only in the sense that it doesn't describe an existing, ready-at-hand system, though one could be constructed.

Ah, moral relativism.

It's an understanding that working together is better for both Judge's and Red's individual utility functions than fighting each other. Call it moral relativism if you want, but it's more accurate to call it a basic level of logical thinking. Rational moral absolutists can agree that it makes no sense for Judge and Red to fight and leave each other either dead or severely injured rather than work together and be significantly better off.

Don't bother trying to escape from here. Nobody ever has, and nobody ever will!