No peace in our time?
There's a new paper arguing, contra Pinker, that the world is not getting more peaceful:
On the tail risk of violent conflict and its underestimation
Pasquale Cirillo and Nassim Nicholas Taleb
Abstract—We examine all possible statistical pictures of violent conflicts over common era history with a focus on dealing with incompleteness and unreliability of data. We apply methods from extreme value theory on log-transformed data to remove compact support, then, owing to the boundedness of maximum casualties, retransform the data and derive expected means. We find the estimated mean likely to be at least three times larger than the sample mean, meaning severe underestimation of the severity of conflicts from naive observation. We check for robustness by sampling between high and low estimates and jackknifing the data. We study inter-arrival times between tail events and find (first-order) memorylessness of events. The statistical pictures obtained are at variance with the claims about "long peace".
Every claim in the abstract is supported by the data, with the exception of the last one. That is the important claim, as it's the only one that really contradicts the "long peace" thesis.
Most of the paper is an analysis of trends in peace and war that establishes that what we see throughout conflict history is consistent with a memoryless power-law process whose mean we underestimate from the sample. That is useful and interesting.
However, the paper does not compare the hypothesis that the world is getting peaceful with the alternative hypothesis that it's business as usual. Note that it's not cherry-picking to suggest that the world might be getting more peaceful since 1945 (or 1953). We've had the development of nuclear weapons, the creation of the UN, and the complete end of direct great power wars (a rather unprecedented development). It would be good to test this hypothesis; unfortunately this paper, while informative, does not do so.
The only part of the analysis that could be applied here is the claim that:
For events with more than 10 million victims, if we refer to actual estimates, the average time delay is 101.58 years, with a mean absolute deviation of 144.47 years
This could mean that the peace since the Second World War is not unusual, but quite typical. But this ignores the "per capita" aspect of violence: the more people there are, the more deadly events we expect at the same per capita rate of violence. Since the current population is far larger than it has ever been, the average time delay is certainly lower than 101.58 years. The paper does provide a per capita average time delay, in Table III. This seems to predict events with 10 million casualties (per 7.2 billion people) every 37 years or so. That corresponds to 3.3 million casualties just after WW2, rising to 10 million today. No such event has happened so far (unless one accepts the highest death-toll estimate of the Korean War; as usual, it is unclear whether 1945 or 1953 was the real transition).
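The per-capita rescaling above can be sanity-checked in a few lines of Python. This is only a sketch: the post-WW2 world population of roughly 2.4 billion is my own assumption, not a figure from the paper.

```python
# Scale the paper's 10-million-casualty threshold (stated for a world
# population of 7.2 billion) to other population sizes, holding the
# per-capita severity constant.
THRESHOLD_TODAY = 10e6   # casualties, per Table III as quoted above
POP_TODAY = 7.2e9        # world population figure used in the paper
per_capita = THRESHOLD_TODAY / POP_TODAY  # fraction of humanity killed

def equivalent_threshold(population):
    """Casualty count with the same per-capita severity at a given population."""
    return per_capita * population

# Assumed: world population just after WW2 was about 2.4 billion.
print(equivalent_threshold(2.4e9))  # roughly 3.3 million casualties
```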
This does not prove that the "long peace" is right, but at least shows the paper has failed to prove it wrong.
Summary and Lessons from "On Combat"
On Combat - The Psychology and Physiology of Deadly Conflict in War and in Peace by Lt. Col. Dave Grossman and Loren W. Christensen (third edition from 2007) is a well-written, evidence-based book about the reality of human behaviour in life-threatening situations. It is comprehensive (400 pages), and provides detailed descriptions, (some) statistics as well as first-person accounts, historical context and other relevant information. But my main focus in this post is on the advice it gives and what lessons the LessWrong community may take from it.
TL;DR
In deadly force encounters you will experience and remember the most unusual physiological and psychological things. Inoculate yourself against extreme stress with repeated authentic training; play win-only paintball, train 911-dialing and -reporting. Train combat breathing. Talk to people after traumatic events.
Link: Poking the Bear (Podcast)
A Dan Carlin Podcast about how the United States is foolishly antagonizing the Russians over Ukraine. Carlin makes an analogy as to how the United States would feel if Russia helped overthrow the government of Mexico to install an anti-American government under conditions that might result in a Mexican civil war. Because of the Russian nuclear arsenal, even a tiny chance of a war between the United States and Russia has a huge negative expected value.
Donation tradeoffs in conscientious objection
Suppose that you believe larger scale wars than current US military campaigns are looming in the next decade or two (this may be highly improbable, but let's condition on it for the moment). If you thought further that a military draft or other forms of conscription might be used, and you wanted to avoid military service if that situation arose, what steps should you take now to give yourself a high likelihood of being declared a conscientious objector?
I don't have numbers to back any of this up, but I am in the process of compiling them. My general thought is to break down the problem like so: Pr(serious injury or death | conscription) * Pr(conscription | my conscientious objector behavior & geopolitical conditions ripe for war) * Pr(geopolitical conditions ripe for war), assuming some conscientious objector behavior (or mixture distribution over several behaviors).
If I feel that Pr(serious injury or death | conscription) and Pr(geopolitical conditions ripe for war) are sufficiently high, then I might be motivated to pay some costs in order to drive Pr(conscription | my conscientious objector behavior) very low.
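The decomposition above can be written out directly as a sketch in code. Every number below is a hypothetical placeholder, meant to be replaced once the statistics mentioned in the post are actually compiled:

```python
# Chain-rule decomposition from the paragraph above; all probabilities
# are made-up placeholders, not estimates anyone is endorsing.
p_harm_given_conscription = 0.10  # Pr(serious injury or death | conscripted)
p_conscription = 0.30             # Pr(conscription | CO behavior & ripe conditions)
p_ripe = 0.05                     # Pr(geopolitical conditions ripe for war)

p_harm = p_harm_given_conscription * p_conscription * p_ripe
print(f"Pr(serious injury or death) = {p_harm:.4f}")
```

Plugging in real estimates for the three factors then makes the tradeoff concrete: any action (such as a donation) is worth its cost only if it reduces `p_conscription` enough to justify the expense.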
There's a funny bit in the American version of the show The Office where the manager, Michael, is concerned about his large credit card debt. The accountant, Oscar, mentions that declaring bankruptcy is an option, and so Michael walks out into the main office area and yells, "I DECLARE BANKRUPTCY!"
In a similar vein, I don't think that draft boards will accept the "excuse" that a given person has "merely" frequently expressed pacifist views. So if someone wants to robustly signal that she or he is a conscientious objector, what to do? In my ~30 minutes of searching, I've found a few organizations that, on first glance, look worthy of further investigation and perhaps regular donations.
Here are the few I've focused on most:
The problems I'm thinking about along these lines include:
- Whether or not the donation cost is worth it. There's no Giving What We Can type measure for this as far as I can tell, and even though I know from family experience that veteran mental illness can be very bad, I'm not convinced that donations to the above organizations provide a lot of QALY bang for the buck.
- Another component of bang for the buck is how much the donation will credibly signal that I actually am a serious conscientious objector. If I donate and then a draft board chooses to ignore it, it would be totally wasted. But if I think that 'going to war' is highly correlated with very significant negative outcomes, then just as with cryonics, I might feel that such costs are worth it even for a small probability of avoiding a combat environment.
- Even assuming that I resolve 1 & 2, there's the problem of trading off these donations with other donations that I make. From a self-interested perspective, I might forgo my current donations to places like SIAI or Against Malaria because, good as those are, they may not offer the same shorter-term benefits to me as purchasing a conscientious objector signal.
I'm curious if others have thought about this. Good literature references are welcome. My plan is to compile statistics that let me make reasonable estimates of the different conditional probabilities.
Addendum
Several people seem very concerned with the signal-faker aspect of this question. I don't understand the preoccupation with this, and I'm tired of trying to justify the question to people who only care about that aspect. So I'll just add this copy of one of my comments from below. Hopefully this gives some additional perspective, though I don't expect it to change anyone's mind. I still stand by the post as-is: it's asking a conditional question based on sincere belief. Even if the answer would be of interest to fakers too, that alone doesn't make the faker explanation more likely; and even if it were more likely, it wouldn't make the question unworthy of thoughtful answers.
Here's the promised comment:
... my question is conditional. Assume that you already sincerely believe in conscientious objection, in the sense of personal ideology such that you could describe it to a draft board. Now that we're conditioning on that, and we assume already that your primary goal is to avoid causing harm or death... then further ask what behaviors might be best to generate the kinds of signals that will work to convince a draft board. Merely having actual pacifist beliefs is not enough. Someone could have those beliefs but then do actions that poorly communicate them to a draft board. Someone else could have those beliefs and do behaviors that more successfully communicate them to draft boards. And to whatever extent there are behaviors outside of the scope of just giving an account of one's ideology I am asking to analyze the effectiveness.
I really think my question is pretty simple. Assume your goal is genuine pacifism but that you're worried this won't convince a draft board. What should you do? Is donation a good idea? Yes, these could be questions a faker would ask. So what? They could also be questions a sincere person would ask, and I don't see any reason for all the downvoting or questions about signal faking. Why not just do the thought experiment where you assume that you are first a sincere conscientious objector and second a person concerned about draft board odds?
Stated another way:
1) Avoiding combat where I cause harm or death is the first priority, so if I have to go to jail or shoot myself in the foot to avoid it, so be it and if it comes to that, it's what I'll do. This is priority number one.
2) I can do things to improve my odds of never needing to face the situation described in (1) and to the extent that the behaviors are expedient (in a cost-benefit tradeoff sense) to do in my life, I'd like to do them now to help improve odds of (1)-avoidance later. Note that this in no way conflicts with being a genuine pacifist. It's just common sense. Yes, I'll avoid combat in costly ways if I have to. But I'd also be stupid to not even explore less costly ways to invest in combat-avoidance that could be better for me.
3) To the extent that (2) is true, I'd like to examine certain options, like donating to charities that assist with legal issues in conscientious objection, or which extend mental illness help to affected veterans, for their efficacy. There is still a cost to these things and given my conscientious objection preferences, I ought to weigh that cost.
Mini advent calendar of Xrisks: nuclear war
The FHI's mini advent calendar: counting down through the big five existential risks. The first one is an old favourite, forgotten but not gone: nuclear war.
Nuclear War
Current understanding: medium-high
Most worrying aspect: the missiles and bombs are already out there
Nuclear war was a great fear during the fifties and sixties; the weapons that could destroy our species now lie dormant, but not destroyed.
Nuclear weapons still remain the easiest method for our species to destroy itself. Recent modelling has confirmed the old idea of nuclear winter: soot rising from burning human cities destroyed by nuclear weapons could envelop the world in a dark cloud, disrupting agriculture and food supplies, and causing mass starvation and death far beyond the areas directly hit. And a creeping proliferation has spread these weapons to smaller states in unstable areas of the world, increasing the probability that nuclear weapons could get used, leading to potential escalation. The risks are not new, and several times (the Cuban missile crisis, the Petrov incident) our species has been saved from annihilation by the slimmest of margins. And yet the risk seems to have slipped off the radar for many governments: emergency food and fuel reserves are diminishing, and we have few “refuges” designed to ensure that the human species could endure a major nuclear conflict.
Future of Humanity?
I first attempted to post this in 2009, but bounced off the karma wall. Since then, my forgetfulness and procrastination have been its nemesis.
I invite you to listen (read) in an unusual way. "Consider it": think WITH this idea for a while. There will be plenty of time to refute it later. I find that, if I START with, "That's so wrong!", I really weaken my ability to "pan for the gold".
Remember the Swamp!
http://en.wiktionary.org/wiki/when_you're_up_to_your_neck_in_alligators,_it's_easy_to_forget_that_the_initial_objective_was_to_drain_the_swamp
I looked over the tag cloud and didn't see:
- Existential Risk
- War
- Aggression
- Competitiveness
- Territorialism
- Nuclear arsenals
Rational insanity
My theory on why North Korea has stepped up its provocation of South Korea since their nuclear missile tests is that they see this as a tug-of-war.
Suppose that North Korea wants to keep its nuclear weapons program. If they hadn't sunk a ship and bombed a city, world leaders would currently be pressuring North Korea to stop making nuclear weapons. Instead, they're pressuring North Korea to stop doing something (make provocative attacks) that North Korea doesn't really want to do anyway. And when North Korea (temporarily) stops attacking South Korea, everybody can go home and say they "did something about North Korea". And North Korea can keep on making nukes.
The prior probability of justification for war?
Could you use Bayes Theorem to figure out whether or not a given war is just?
If so, I was wondering how one would go about estimating the prior probability that a war is just.
Thanks for any help you can offer.
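In the spirit of the question, here is a minimal Bayes update on the proposition "this war is just". All the numbers are hypothetical, chosen only to show the mechanics; the hard part, as the question suggests, is estimating them:

```python
# Toy Bayes update on "this war is just" -- every number is hypothetical.
prior_just = 0.2          # prior Pr(war is just), e.g. a base rate from history
p_e_given_just = 0.8      # Pr(evidence | just), e.g. broad international support
p_e_given_unjust = 0.3    # Pr(evidence | not just)

# Law of total probability, then Bayes' theorem.
p_e = p_e_given_just * prior_just + p_e_given_unjust * (1 - prior_just)
posterior = p_e_given_just * prior_just / p_e
print(f"posterior Pr(just | evidence) = {posterior:.3f}")
```

The arithmetic is the easy part; the prior would presumably come from some historical base rate of wars one judges (in retrospect) to have been just, which is where most of the disagreement would live.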