First, it's not clear that individual, apparently non-Pareto-optimal actions are, in isolation, evidence of irrationality or of non-Pareto-optimal behavior on a larger scale. This is especially true when the "lose-lose" behavior involves threats, commitments, demonstrations of willingness to follow through, etc.
Second, "someone who panics at the idea they might give the enemy too much" implies, or at least leaves open, the possibility that the ultimate concern is losing something genuinely valuable in the giving, rather than the defeat of the enemy being the ultimate goal. Likewise with "demand extremely lopsided treaties if they're willing to negotiate at all", which strongly implies that they are seeking something other than the defeat of foes.
Take the person who considers themselves enlightened for saying, "Oh, I'm not like my friends. They want them all to die. I just want them to go away and leave us alone."
One of my points is that this "enlightened" statement may actually be the extrapolated volition even of those who think they "want them all to die". And it's fairly clear how, for the "enlightened" person, the unenlightened value set could be instrumentally useful: genuinely wanting the enemy dead, for instance, makes one's threats and commitments credible.
Most of all, war was characterized as something whose ultimate, motivating goal is the defeat of enemies. I object that it isn't; but please recognize that when I ask for examples of any war ever driven by the ultimate goal of defeating enemies, I am going far beyond what I would need to assert to make my case. Showing instances in which wars followed that pattern would only be the beginning of showing that war in general is characterized by that goal.
I would similarly protest if someone said, "the result of addition is the production of prime numbers; it is the defining characteristic of addition". In that case I would not ask for counterexamples, but would use other methods to show that no, that isn't a defining characteristic of addition, nor is it the best way to talk about addition. Of course, some addition does result in prime numbers (2 + 3 = 5, for example).
I agree there could be such a war, but I don't know that there has ever been one. Highlighting this point is an attempt to show that any serious doubt can only be about whether war is ever characterized by the ultimate goal of defeating enemies; there can be no doubt that war in general does not have the defeat of one's enemies as its motivating goal.
I am aware of ignoring threats, of using uncompromisable principles to gain an advantage in negotiations, of breaking your receiver to settle on a meeting point, of breaking your steering wheel to win at Chicken, and so on. I am also aware of the theorem saying that even when a mutually beneficial trade exists, there are cases where selfish rational agents refuse to trade (the Myerson-Satterthwaite theorem), and that the theorem does not go away when the currency is thousands of lives. I still claim that the type of war I'm talking about doesn't stem from such calculations; that people on side A are ...
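For concreteness, here is a minimal sketch of the steering-wheel example; the payoff numbers are illustrative assumptions of mine, not anything from the original discussion:

```python
# A minimal sketch of the steering-wheel commitment in Chicken.
# The payoff numbers below are illustrative assumptions, not taken
# from the original discussion.

from itertools import product

ACTIONS = ["swerve", "straight"]

# Payoffs as (A's payoff, B's payoff); higher is better.
PAYOFFS = {
    ("swerve", "swerve"):     (3, 3),
    ("swerve", "straight"):   (1, 4),
    ("straight", "swerve"):   (4, 1),
    ("straight", "straight"): (0, 0),
}

def best_response(player, opponent_action, allowed=ACTIONS):
    """The action in `allowed` maximizing `player`'s payoff against a
    fixed opponent action (player 0 is A, player 1 is B)."""
    def payoff(action):
        profile = (action, opponent_action) if player == 0 else (opponent_action, action)
        return PAYOFFS[profile][player]
    return max(allowed, key=payoff)

def pure_nash_equilibria(allowed_a=ACTIONS, allowed_b=ACTIONS):
    """Profiles where each player's action is a best response to the other's."""
    return [
        (a, b)
        for a, b in product(allowed_a, allowed_b)
        if a == best_response(0, b, allowed_a) and b == best_response(1, a, allowed_b)
    ]

# Without commitment there are two equilibria, and neither driver can
# guarantee being the one who goes straight.
print(pure_nash_equilibria())
# [('swerve', 'straight'), ('straight', 'swerve')]

# A visibly breaks the steering wheel, removing "swerve" from A's options.
# B's best response is now to swerve, so the apparently self-destructive,
# "lose-lose" move secures A the best outcome.
print(pure_nash_equilibria(allowed_a=["straight"]))
# [('straight', 'swerve')]
```

The point of the sketch is the one made above: an action that destroys value in isolation (wrecking your own steering wheel) can still be the rational move once the opponent's response to it is taken into account.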
Many people think you can solve the Friendly AI problem just by writing certain failsafe rules into the superintelligent machine's programming, like Asimov's Three Laws of Robotics. I thought the rebuttal to this was in "Basic AI Drives" or one of Yudkowsky's major articles, but after skimming them I haven't found it. Where can I find the arguments addressing this suggestion?