
My model of the problem boils down to a few basic factors:

  1. Attention competition prompts speed, and rewards some degree of imprecision and controversy with more engagement.
  2. It is difficult to comply with many costly norms while also producing significant output and winning attention competitions.
  3. There is debate over which norms should be enforced, and while getting the combination of norms right is positive-sum overall, different norms favor different personalities in competition.
  4. Just purging the norm breakers can create substantial groupthink if the norm breakers disproportionately express neglected ideas or comply with other neglected and costly but valuable norms.
  5. It is costly for third parties to adjudicate and intervene precisely in conflicts involving attention competition, since such conflicts are inherently expensive to sort out.

General recommendations/thoughts:

  1. Slow the pace of conversation, perhaps through mod rate limits on comment length and frequency or temporary bans. This seems like a proportional response to argument spam and attention competition, and would seem to push toward better engagement incentives without inducing groupthink from overzealous censorship.
  2. If entangled in comment conflict yourself, aim to write more carefully, clearly, and in a condensed manner that is more inherently robust against adversarial misinterpretation. If the other side doesn't reciprocate, make your effort explicit to reduce the social cost of unilaterally not responding quickly (e.g. leaving a friendly temporary comment about responding later when you get time to convey your thoughts clearly).
  3. To the degree possible, reset and focus on conversations going forward, not publicly adjudicating who screwed up what in prior convos. While it is valuable to set norms, those who are intertwined in conflict and stand to competitively benefit from the selective enforcement of the norms they favor are inherently not credible as sources of good norm sets.

In general we should be aiming for positive-sum and honest incentives, while economizing in how we patch exploits in the norms that are promoted and enforced. Attention competition makes this inherently hard, thus it makes sense to attack the dynamic itself.

I am not sure that is actually true. There are many escalatory situations, border clashes, and mini-conflicts that could easily lead to far larger scale war, but don't due to the rules and norms that military forces impose on themselves and that lead to de-escalation. Once there is broader conflict between large organizations, though, then yes, you do often need a treaty to end it.

Treaties don't work on decentralized insurgencies, though, hence forever wars: agreements can't be credibly enforced when each fighter has their own incentives and veto power. This is an area where norm spread can be helpful, and I do think online discourse is currently far more like warring groups of insurgents than warring armies.

Why would multi-party conflict change the utility of the rules? It does change the ease of enforcement, but that's the reason to start small and scale until the advantages of cooperating exceed the advantages of defecting. That's how lots of good things develop where cooperation is hard.

The dominance of in-group competition seems like the sort of thing that is true until it isn't. Group selection is sometimes slow, but that doesn't mean it doesn't exist. Monopolies have internal competition problems, while companies in a competitive market do get forced to develop better internal norms for cooperation, or they risk going out of business against competitors that have achieved higher internal alignment by suppressing internal zero-sum competition (or re-aligning it in a positive-sum manner for the company).

I don't think you are fully getting what I am saying, though that's understandable because I haven't added any info on what makes a valid enemy.

I agree there are rarely absolute enemies and allies. There are however allies and enemies with respect to particular mutually contradictory objectives.

Not all war is absolute, wars have at times been deliberately bounded in space, and having rules of war in the first place is evidence of partial cooperation between enemies. You may have adversarial conflicts of interest with close friends on some issues: if you can't align those interests, it isn't the end of the world. The big problem is lies and sloppy reasoning that go beyond defending one's own interests into causing unnecessary collateral damage for large groups. The entire framework here is premised on the same distinction you seem to think I don't have in mind... which is fair because it was unstated. XD

The big focus is a form of cooperation between enemies to reduce large scale indiscriminate collateral damage of dishonesty. It is easier to start this cooperation between actors that are relatively more aligned, before scaling to actors that are relatively less aligned with each other. Do you sense any floating disagreements remaining?

That's totally fair for LessWrong, haha. I should probably try to reset things so my blog doesn't automatically post here except when I want it to.

I agree with this line of analysis. Some points I would add:

- Authoritarian closed societies probably have an advantage at covert racing, at suddenly devoting a larger proportion of their economic pie to racing, and at artificially lowering prices to do so. Open societies probably have a greater advantage at discovery/the cutting edge and have a bigger pie in the first place (though better private sector opportunities compete up the cost of defense engineering talent). Given this structure, I think you want the open societies to keep their tech advantage, and make deploying/scaling military tech a punishment for racing by closed societies.
- Your first bullet seems similar to the situation the U.S. is in now: Russia and China just went through a modernization wave, and Russia has been doing far more nuclear experimentation while the U.S. talent for this is mostly old or already retired, and a lot of the relevant buildings are falling apart. Once you are in the equilibrium of knowing a competitor is doing something and your decision is to match or not, you don't have leverage to stop the competitor unless you get started. Because of how old a lot of U.S. systems are and how old the talent is, Russia likely perceived a huge advantage to getting the U.S. to delay. A better structure for de-escalation is neutral with respect to relative power differences: if you de-escalate by forfeiting relative power, you keep increasing the incentive for the other side to race.

There are some other caveats I'm not getting into here, but I think we are mostly on the same page.

Some of the original papers on nuclear winter reference this effect, e.g. in the abstract here about high-yield surface-burst weapons (I think this would include the sort that would have been targeted at silos by the USSR). https://science.sciencemag.org/content/222/4630/1283

A common problem with some modern papers is that they just take soot/dust amounts from these prior papers without adjusting for arsenal changes or changes in fire modeling.

This is what the non-proliferation treaty is for. Smaller countries could already do this if they wanted, as they aren't treaty-limited in the number of weapons they make, but getting down the cost curve wouldn't make export profitable or desirable: they have to eat the cost of going down the cost curve in the first place, and no one who would only buy cheap nukes is going to compensate them for it. Depending on how much data North Korea got from prior tests, they might still require a lot more testing, and they certainly require a lot more nuclear material, which they can't get cheaply. Burning more of their economy to get down the cost curve isn't going to enable them to export profitably, and if they even started, it could be the end of the regime (due to overmatch by the U.S. + South Korea + Japan). The "profit" they get from nukes is in terms of regime security and negotiating power... they aren't going to throw those in the trash. They might send scientists, but they aren't going to give away free nukes, or no one is going to let planes or ships leave their country without inspection for years. The Cuban missile crisis was scary for the U.S. and USSR, but a small state making this sort of move against the interest of superpowers is far more likely to invite an extreme response (IMO).

I generally agree with this line of concern. That said, if the end-state equilibrium is one where large states have counterforce arsenals and only small states have multi-megaton weapons, then I think that equilibrium is safer in terms of expected deaths, because the odds of nuclear winter are so much lower.

There will be risk adaptation either way. The risk of nuclear war may go up contingent on there being a war, but the risk of war may go down because there are lower odds of being able to keep a war purely conventional. I think that makes assessing the net risk pretty hard, but I doubt you'd argue for turning every nuke into a civilization ender to improve everyone's incentives: at some point it just isn't credible that you will use the weapons, and this reduces their deterrent effect. There is an equilibrium that minimizes total risk across sources of escalation, accidents, etc., and I'm trying to spark convo toward figuring out what that equilibrium is. I think as tech changes, the best equilibrium is likely to change, and it is unlikely to be the same arms control as decades ago, but I may be wrong about the best direction of change.

Precision isn't cheap. Low-yield, accurate weapons will often be harder to make than large-yield, inaccurate weapons. A rich country might descend the cost curve in production, but as long as the U.S. stays in an umbrella deterrence paradigm, that doesn't decrease costs for anyone else, because we don't export nukes.

This also increases the cost for rogue states to defend their arsenals (because they are small, don't have a lot of area to hide stuff, etc.), which may discourage them from acquiring arsenals in the first place.
