Dulce et Decorum Est Pro Humanitate Mori?
As you might be able to tell from the paraphrased quote, I've been taught about some of the bad things that can happen when this sentiment is taken too far.
Therefore the important thing is how we, personally, would engage with that decision if it came from outside.
For me it depends on my opinion of the people on the outside. There are four things I weigh: their epistemics, their grasp of the relevant equilibria, their handling of moral uncertainty, and their feedback mechanisms.
My likelihood of helping humanity when following orders stems from those considerations. It is a weighty decision.
These look like good criteria, but I wonder how many organizations satisfy them. My expectation would be ~0.
The only ones I can think of which are even cognizant of epistemic considerations at the executive level are places like the Federal Reserve and the CDC. I can think of more organizations that think about equilibria, for liberal interpretations of the word, but they are mainly dedicated to preventing us from falling into a worse one (national defense). Moral uncertainty seems like the hardest hurdle to clear: most organizations are explicit in either their amorality or the scope of their morality, and there is very little discretion to change either. Happily, feedback mechanisms seem to do all right, though I come up short of examples where they improve things at the meta level.
All that aside, we can surely start with a simple case and build up from there. Suppose all of these criteria were met to your satisfaction, and a decision was made which was very risky for you personally. How would you think about this? What would you do?
I've been re-reading a sci-fi book with an interesting existential-risk scenario: most people are going to die, but some may survive.
If you are a person on Earth in the book, you have the choice of helping others and definitely dying, or trying desperately to be one of the ones who survive (even if you personally might not be the best person to help humanity survive).
In that situation I would definitely be in the "helping the people better suited to survive" camp: following orders, because the situation is too complex to keep in one person's head, and accepting the danger, because you are literally a dead person walking.
It becomes harder when the danger isn't so clear and present. I'll think about it a bit more.
The title of the book is frirarirf (rot13)
This is a follow-up to Death in Groups. That post, in short:
The nominal goal was to find a lost shotgun. I provided an approximate price for the shotgun, which seems to have been a mistake - several comments zeroed in on that figure with a view to utility calculations, something I should absolutely have expected considering the audience. Happily (for me), even though readers took the opposite meaning from that section to the one I intended, there was broad agreement about the conclusion I wanted. This is best encapsulated in Raemon's comment:
The shotgun was meaningless. That it could be had for $650 does not mean that is its value to the military; it means you personally could have it for $650. A majority of the readers here are American, and shotguns are among the least-regulated firearms in a culture that does not regulate firearms very much. How long would it take you to get to the nearest hunting/fishing store, or gun store, or Walmart? 20 minutes? That means the majority of readers could, on a whim, roll out to buy one and walk out of the store with it. A lot of you could get this done over your lunch break. There is not much overlap in the Venn diagram of "militarily significant" and "every citizen over 18 can do it during lunch," thankfully.
Returning to Raemon's comment, interpretation 1 is closer to the truth (Full disclosure: the institution of the Army is content with people who embody interpretation 2. We can revisit why at a later time). I put to you another interpretation:
3. Non-hostile interpretation: just doing what they did the last time, without considering the consequences.
Same action, different circumstances
As it turns out, what they had us do in this case is what would normally be done. Losing weapons is very rare, but there are a lot of weapons in the military and losing a few is inevitable: I had been through this on a deployment before.
What was the same: same unit, lost weapon, on deployment. What was different: in Iraq rather than Afghanistan; different chain of command; on base instead of on patrol. On this occasion a member of a different platoon had left a pistol lying on their bunk when they went out on patrol, and when they came back it was gone. The punishment was for everyone to search, and so for 36 hours or so we were continuously awake, searching vehicles, and shipping containers, and toilets, and stacks of tires, and garbage piles, and dividing up the bare stones into grids and searching those. This also entailed certain risks: there was no change in the patrol schedule or duties; everyone simply had to perform them without any sleep during this period. It is also true that a pistol is militarily meaningless. But this response is something of a cultural trait in the 82nd Airborne, effectively a Standard Operating Procedure. For some intuition as to why, we can consider a counterfactual case that is very similar.
Counterfactual: same action, same circumstances, justified
The personal weapons we carry are far from the only things we have on us, and apart from personal effects probably the least important. One of the more important items we are issued is night vision goggles, which are of military significance because they allow us to operate with only a little less efficiency than during the day. The enemy has no such equipment and so is much less efficient at night, which gives us a big marginal advantage. Even one pair of NVGs is a significant force multiplier for insurgents: for example, it would allow them to make accurate mortar attacks on a base, or mount an effective night-time ambush. In this case, with the same on-patrol situation as in the previous post, the risk to personnel would be widely seen as acceptable, because we are trading the risk of a roadside ambush now for the risk that all future attacks they choose to launch are more damaging than before.
I can vouch that this operational significance does have an impact on the chain of command's decisions (though I note the personnel in command were not the same). During the same deployment to Iraq as the previous on-base example, a pair of NVGs was briefly lost. In contrast to the lost-weapon incidents, all traffic into and out of the base was shut down, and all persons not actively conducting the search were told to stay where they were. The goggles were recovered within a few hours.
Allowing for Error
So far I have given examples of doing a thing when it is risky and stupid, when it is less risky and less stupid, and when it is risky but worth it. What about a case where the thing is deemed worth the risk, but there is an error in the thing itself? What would you do then? This is probably the most famous case in the English language:
"Forward, the Light Brigade!"
Was there a man dismayed?
Not though the soldier knew
Someone had blundered.
Theirs not to make reply,
Theirs not to reason why,
Theirs but to do and die.
Into the valley of Death
Rode the six hundred.
That stanza is from the narrative poem The Charge of the Light Brigade by Alfred, Lord Tennyson. For those unfamiliar: during the Crimean War, the British commander wanted to send a unit of light cavalry to harass some Russians who were trying to remove guns from overrun artillery positions. Somewhere down the chain of command the order was botched, and they were sent into the teeth of a prepared artillery position instead. While I was in, I would repeat part of that stanza from time to time. I was never part of an operation where a communication failure resulted in casualties - probably owing in large part to this event - but if I had been, I would have gone just the same.
This concludes the examples.
What it isn't and what it is
In the gambling of lives, therefore, we have routine risks, and taking them may be a good decision, may be a neutral decision, may be a stupid - or possibly lazy - decision, and may be any of the above but turn out even worse due to mistakes. If each mission were an expected-benefit calculation, we would have to consider the likelihood that each of these is the case.
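If you want that spelled out, here is a minimal sketch of what such a calculation would have to look like; the outcome categories and the symbols $p_i$ and $v_i$ are my own illustrative labels, not anything the military computes:

$$\mathbb{E}[\text{benefit}] \;=\; \sum_{i \,\in\, \{\text{good},\ \text{neutral},\ \text{stupid},\ \text{mistaken}\}} p_i\, v_i, \qquad \sum_i p_i = 1,$$

where $p_i$ is the probability that the mission falls into category $i$, and $v_i$ is the value, in lives and objectives, of that outcome.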
Each mission is not an expected benefit calculation.
Years back we read that our intuitions are only what an algorithm feels like from the inside. But that was for thinking-about-your-thinking; when you are part of a larger system, you no longer embody the algorithm. You may not even understand the algorithm, assuming there is one. What happens instead is that there is a starting point and a goal; connecting the two is analogous to integration. Did you ever wonder what an infinitesimal feels like from the inside? (Frustration. Fear. Resolve.) That is what lives are gambled for - infinitesimal progress toward the goal.
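To spell out the analogy as a sketch (the notation is mine, purely illustrative): if $P$ denotes progress, then

$$\text{total progress} \;=\; \int_{\text{start}}^{\text{goal}} dP,$$

where each mission contributes a single infinitesimal $dP$ - invisible on its own, but collectively the whole distance between the starting point and the goal.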
That lives may be sacrificed in pursuit of a goal is not new. It still doesn't answer the question of why anyone would agree to be sacrificed, though.
But why though
I usually consider my veterancy to entail two major psychological adjustments from civilian life: the first was basic training, for becoming a soldier; the second was the experience of deployment and combat. Basic training works by a combination of controlled traumas and indoctrination, which produces common knowledge. The purpose of this common knowledge is to promote group fitness, because the group needs to succeed in conditions of adversarial selection. Soldiering: common knowledge for group fitness in adversarial selection.
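For readers who want the term pinned down, this is the standard game-theoretic notion: a proposition $p$ is common knowledge in a group when everyone knows $p$, everyone knows that everyone knows $p$, and so on, i.e.

$$C(p) \;=\; E(p) \wedge E(E(p)) \wedge E(E(E(p))) \wedge \cdots,$$

where $E(p)$ reads "everyone in the group knows $p$". Basic training manufactures exactly this structure: every soldier knows the standard, and knows that every other soldier knows it too.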
Returning to Raemon's comment again, note the focus on hierarchy in interpretations 1 and 2. I expect most of the readers imagine something like this. The purpose of the hierarchy is information management, and the information we got/needed was not at issue here. What is important is that if one of us went, all of us did. The core mechanism is more like this: precommitment in action - to obedience, yes, but above all to the group. Commitment to the group is maintained even in the face of stupidity or catastrophic mistake. In this way the group will almost certainly survive, even though its members do not. This is very important, because the group's action is the mechanism of goal advancement.
Conclusion: why does this matter?
Because this is the key to success in extremis.
The military's job is weighing lives against one another, under conditions of full-spectrum opposition by intelligent agents. The military chooses groups as the instrumentally important unit for planning and action. X-risk and EA also turn on decisions about how to weigh lives; at present I think we are under-emphasizing instrumental groups in that analysis.
It is also true that we spend a lot of time on how to make decisions. This is right and good, but when we think about decisions we almost always put ourselves in the position of the decision-maker. Most of us are not in the position of decision-maker for the large decisions. Therefore the important thing is how we, personally, would engage with that decision if it came from outside. The basic trolley problem asks whether you would throw the switch to send the trolley down the track with one person on it rather than the track with five. I say the fundamental trolley problem is this: if you are alone on the tracks and see a trolley hurtling towards you, do you accept that decision?
I expect this to be a hard sell. Self-preservation is a deep and pervasive bias, for good reason. It also sits firmly beneath the level of analytical thought, which is a main driver of why neither this post nor the last is arranged in the customary fashion.