
Bayesians vs. Barbarians

Post author: Eliezer_Yudkowsky 14 April 2009 11:45PM 51 points

Previously in series: Collective Apathy and the Internet
Followup to: Helpless Individuals

Previously:

Let's say we have two groups of soldiers.  In group 1, the privates are ignorant of tactics and strategy; only the sergeants know anything about tactics and only the officers know anything about strategy.  In group 2, everyone at all levels knows all about tactics and strategy.

Should we expect group 1 to defeat group 2, because group 1 will follow orders, while everyone in group 2 comes up with better ideas than whatever orders they were given?

In this case I have to question how much group 2 really understands about military theory, because it is an elementary proposition that an uncoordinated mob gets slaughtered.

Suppose that a country of rationalists is attacked by a country of Evil Barbarians who know nothing of probability theory or decision theory.

Now there's a certain viewpoint on "rationality" or "rationalism" which would say something like this:

"Obviously, the rationalists will lose.  The Barbarians believe in an afterlife where they'll be rewarded for courage; so they'll throw themselves into battle without hesitation or remorse.  Thanks to their affective death spirals around their Cause and Great Leader Bob, their warriors will obey orders, and their citizens at home will produce enthusiastically and at full capacity for the war; anyone caught skimming or holding back will be burned at the stake in accordance with Barbarian tradition.  They'll believe in each other's goodness and hate the enemy more strongly than any sane person would, binding themselves into a tight group.  Meanwhile, the rationalists will realize that there's no conceivable reward to be had from dying in battle; they'll wish that others would fight, but not want to fight themselves.  Even if they can find soldiers, their civilians won't be as cooperative:  So long as any one sausage almost certainly doesn't lead to the collapse of the war effort, they'll want to keep that sausage for themselves, and so not contribute as much as they could.  No matter how refined, elegant, civilized, productive, and nonviolent their culture was to start with, they won't be able to resist the Barbarian invasion; sane discussion is no match for a frothing lunatic armed with a gun.  In the end, the Barbarians will win because they want to fight, they want to hurt the rationalists, they want to conquer and their whole society is united around conquest; they care about that more than any sane person would."

War is not fun.  As many many people have found since the dawn of recorded history, as many many people have found out before the dawn of recorded history, as some community somewhere is finding out right now in some sad little country whose internal agonies don't even make the front pages any more.

War is not fun.  Losing a war is even less fun.  And it was said since the ancient times:  "If thou would have peace, prepare for war."  Your opponents don't have to believe that you'll win, that you'll conquer; but they have to believe you'll put up enough of a fight to make it not worth their while.

You perceive, then, that if it were genuinely the lot of "rationalists" to always lose in war, I could not in good conscience advocate the widespread public adoption of "rationality".

This is probably the dirtiest topic I've discussed or plan to discuss on LW.  War is not clean.  Current high-tech militaries—by this I mean the US military—are unique in the overwhelmingly superior force they can bring to bear on opponents, which allows for a historically extraordinary degree of concern about enemy casualties and civilian casualties.

Winning in war has not always meant tossing aside all morality.  Wars have been won without using torture.  The unfunness of war does not imply, say, that questioning the President is unpatriotic.  We're used to "war" being exploited as an excuse for bad behavior, because in recent US history that pretty much is exactly what it's been used for...

But reversed stupidity is not intelligence.  And reversed evil is not intelligence either.  It remains true that real wars cannot be won by refined politeness.  If "rationalists" can't prepare themselves for that mental shock, the Barbarians really will win; and the "rationalists"... I don't want to say, "deserve to lose".  But they will have failed that test of their society's existence.

Let me start by disposing of the idea that, in principle, ideal rational agents cannot fight a war, because each of them prefers being a civilian to being a soldier.

As has already been discussed at some length, I one-box on Newcomb's Problem.

Consistently, I do not believe that if an election is settled by 100,000 to 99,998 votes, all of the voters were irrational in expending effort to go to the polling place because "my staying home would not have affected the outcome".  (Nor do I believe that if the election came out 100,000 to 99,999, then 100,000 people were all, individually, solely responsible for the outcome.)

Consistently, I also hold that two rational AIs (that use my kind of decision theory), even if they had completely different utility functions and were designed by different creators, will cooperate on the true Prisoner's Dilemma if they have common knowledge of each other's source code.  (Or even just common knowledge of each other's rationality in the appropriate sense.)

Consistently, I believe that rational agents are capable of coordinating on group projects whenever the (expected probabilistic) outcome is better than it would be without such coordination.  A society of agents that use my kind of decision theory, and have common knowledge of this fact, will end up at Pareto optima instead of Nash equilibria.  If all rational agents agree that they are better off fighting than surrendering, they will fight the Barbarians rather than surrender.
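(As a concrete aside that is not from the original post: here is a minimal sketch of the Nash-versus-Pareto distinction in a one-shot Prisoner's Dilemma. The payoff numbers are hypothetical, chosen only to exhibit the structure.)

```python
# Hypothetical payoffs for a one-shot Prisoner's Dilemma, written as
# (my payoff, your payoff); the specific numbers are illustrative only.
PAYOFFS = {
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"):    (0, 5),
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),
}

def best_reply(their_move):
    """Hold the other player's move fixed and pick whichever of my
    moves pays me more (the reasoning that leads to the Nash equilibrium)."""
    return max(("cooperate", "defect"),
               key=lambda my_move: PAYOFFS[(my_move, their_move)][0])

# Whatever the other player does, defecting pays more, so (defect, defect)
# is the Nash equilibrium of the one-shot game...
assert best_reply("cooperate") == "defect" and best_reply("defect") == "defect"

# ...yet (cooperate, cooperate) pays both players strictly more than
# (defect, defect): the equilibrium is not a Pareto optimum, which is the
# gap that coordinated agents are supposed to close.
print(PAYOFFS[("defect", "defect")], "vs", PAYOFFS[("cooperate", "cooperate")])
```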

Imagine a community of self-modifying AIs who collectively prefer fighting to surrender, but individually prefer being a civilian to fighting.  One solution is to run a lottery, unpredictable to any agent, to select warriors.  Before the lottery is run, all the AIs change their code, in advance, so that if selected they will fight as a warrior in the most communally efficient possible way—even if it means calmly marching into their own death.

(A reflectively consistent decision theory works the same way, only without the self-modification.)

You reply:  "But in the real, human world, agents are not perfectly rational, nor do they have common knowledge of each other's source code.  Cooperation in the Prisoner's Dilemma requires certain conditions according to your decision theory (which these margins are too small to contain) and these conditions are not met in real life."

I reply:  The pure, true Prisoner's Dilemma is incredibly rare in real life.  In real life you usually have knock-on effects—what you do affects your reputation.  In real life most people care to some degree about what happens to other people.  And in real life you have an opportunity to set up incentive mechanisms.

And in real life, I do think that a community of human rationalists could manage to produce soldiers willing to die to defend the community.  So long as children aren't told in school that ideal rationalists are supposed to defect against each other in the Prisoner's Dilemma.  Let it be widely believed—and I do believe it, for exactly the same reason I one-box on Newcomb's Problem—that if people decided as individuals not to be soldiers or if soldiers decided to run away, then that is the same as deciding for the Barbarians to win.  By that same theory whereby, if an election is won by 100,000 votes to 99,998 votes, it does not make sense for every voter to say "my vote made no difference".  Let it be said (for it is true) that utility functions don't need to be solipsistic, and that a rational agent can fight to the death if they care enough about what they're protecting.  Let them not be told that rationalists should expect to lose reasonably.

If this is the culture and the mores of the rationalist society, then, I think, ordinary human beings in that society would volunteer to be soldiers.  That also seems to be built into human beings, after all.  You only need to ensure that the cultural training does not get in the way.

And if I'm wrong, and that doesn't get you enough volunteers?

Then so long as people still prefer, on the whole, fighting to surrender; they have an opportunity to set up incentive mechanisms, and avert the True Prisoner's Dilemma.

You can have lotteries for who gets elected as a warrior.  Sort of like the example above with AIs changing their own code.  Except that if "be reflectively consistent; do that which you would precommit to do" is not sufficient motivation for humans to obey the lottery, then...

...well, in advance of the lottery actually running, we can perhaps all agree that it is a good idea to give the selectees drugs that will induce extra courage, and shoot them if they run away.  Even considering that we ourselves might be selected in the lottery.  Because in advance of the lottery, this is the general policy that gives us the highest expectation of survival.
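(A minimal sketch of that expected-survival comparison, not from the original post; every probability below is a hypothetical placeholder, chosen only to show the shape of the calculation made in advance of the lottery.)

```python
# All numbers are hypothetical stand-ins; only the comparison matters.
P_DRAFTED = 0.2            # chance the lottery selects you as a soldier
P_SOLDIER_SURVIVES = 0.5   # chance a drafted soldier survives the war
P_WIN_IF_ALL_COMPLY = 0.9  # chance of victory if everyone honors the lottery
P_SURVIVE_WON_WAR = 0.99   # chance a civilian survives a war that is won
P_SURVIVE_CONQUEST = 0.5   # chance any citizen survives a Barbarian conquest

def survival_if_everyone_complies():
    """Your ex-ante survival probability, evaluated before the lottery is
    drawn, under the policy 'fight (and be made to fight) if selected'."""
    as_civilian = (P_WIN_IF_ALL_COMPLY * P_SURVIVE_WON_WAR
                   + (1 - P_WIN_IF_ALL_COMPLY) * P_SURVIVE_CONQUEST)
    return P_DRAFTED * P_SOLDIER_SURVIVES + (1 - P_DRAFTED) * as_civilian

def survival_if_nobody_fights():
    """Your survival probability if no one fights and the Barbarians win."""
    return P_SURVIVE_CONQUEST

print(survival_if_everyone_complies())  # about 0.85 with these numbers
print(survival_if_nobody_fights())      # 0.5
```

With these made-up numbers, agreeing in advance to the lottery-plus-enforcement policy beats the everyone-stays-home outcome even from a purely selfish ex-ante view, which is the sense in which it is the policy "that gives us the highest expectation of survival."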

...like I said:  Real wars = not fun, losing wars = less fun.

Let's be clear, by the way, that I'm not endorsing the draft as practiced nowadays.  Those drafts are not collective attempts by a populace to move from a Nash equilibrium to a Pareto optimum.  Drafts are a tool of kings playing games in need of toy soldiers. The Vietnam draftees who fled to Canada, I hold to have been in the right.  But a society that considers itself too smart for kings, does not have to be too smart to survive.  Even if the Barbarian hordes are invading, and the Barbarians do practice the draft.

Will rational soldiers obey orders?  What if the commanding officer makes a mistake?

Soldiers march.  Everyone's feet hitting the ground in the same rhythm.  Even, perhaps, against their own inclinations, since people left to themselves would walk all at separate paces.  Lasers made out of people.  That's marching.

If it's possible to invent some method of group decisionmaking that is superior to the captain handing down orders, then a company of rational soldiers might implement that procedure.  If there is no proven method better than a captain, then a company of rational soldiers will commit to obey the captain, even against their own separate inclinations.  And if human beings aren't that rational... then in advance of the lottery, the general policy that gives you the highest personal expectation of survival is to shoot soldiers who disobey orders.  This is not to say that those who fragged their own officers in Vietnam were in the wrong; for they could have consistently held that they preferred no one to participate in the draft lottery.

But an uncoordinated mob gets slaughtered, and so the soldiers need some way of all doing the same thing at the same time in the pursuit of the same goal, even though, left to their own devices, they might march off in all directions.  The orders may not come from a captain like a superior tribal chief, but unified orders have to come from somewhere.  A society whose soldiers are too clever to obey orders, is a society which is too clever to survive.  Just like a society whose people are too clever to be soldiers.  That is why I say "clever", which I often use as a term of opprobrium, rather than "rational".

(Though I do think it's an important question as to whether you can come up with a small-group coordination method that really genuinely in practice works better than having a leader.  The more people can trust the group decision method—the more they can believe that it really is superior to people going their own way—the more coherently they can behave even in the absence of enforceable penalties for disobedience.)

I say all this, even though I certainly don't expect rationalists to take over a country any time soon, because I think that what we believe about a society of "people like us" has some reflection on what we think of ourselves.  If you believe that a society of people like you would be too reasonable to survive in the long run... that's one sort of self-image.  And it's a different sort of self-image if you think that a society of people all like you could fight the vicious Evil Barbarians and win—not just by dint of superior technology, but because your people care about each other and about their collective society—and because they can face the realities of war without losing themselves—and because they would calculate the group-rational thing to do and make sure it got done—and because there's nothing in the rules of probability theory or decision theory that says you can't sacrifice yourself for a cause—and because if you really are smarter than the Enemy and not just flattering yourself about that, then you should be able to exploit the blind spots that the Enemy does not allow itself to think about—and because no matter how heavily the Enemy hypes itself up before battle, you think that just maybe a coherent mind, undivided within itself, and perhaps practicing something akin to meditation or self-hypnosis, can fight as hard in practice as someone who theoretically believes they've got seventy-two virgins waiting for them.

Then you'll expect more of yourself and people like you operating in groups; and then you can see yourself as something more than a cultural dead end.

So look at it this way: Jeffreyssai probably wouldn't give up against the Evil Barbarians if he were fighting alone.  A whole army of beisutsukai masters ought to be a force that no one would mess with.  That's the motivating vision.  The question is how, exactly, that works.

 

Part of the sequence The Craft and the Community

Next post: "Of Gender and Rationality"

Previous post: "Collective Apathy and the Internet"

Comments (270)

Comment author: Yvain 15 April 2009 07:01:33PM 37 points [-]

IAWYC, but I think it sidesteps an important issue.

A perfectly rational community will be able to resist the barbarians. But it's possible, perhaps likely, that as you increase community rationality, there's a valley somewhere between barbarian and Bayesian where fighting ability decreases until you climb out of it.

I think the most rational societies currently existing are still within that valley. And that a country with the values and rationality level of 21st century Harvard will with high probability be defeated by a country with the values and rationality level of 13th century Mongolia (holding everything else equal).

I don't know who you're arguing against, but I bet they are more interested in this problem than in an ideal case with a country of perfect Bayesians.

Comment author: AnnaSalamon 15 April 2009 07:49:28PM *  14 points [-]

I agree such a valley is plausible (though far from obvious: more rational societies have better science and better economies; democracies can give guns to working class soldiers whereas aristocracies had to fear arming their peasants; etc.). To speculate about the underlying phenomenon, it seems plausible that across a range of goals (e.g., increasing one’s income; defending one’s society against barbarian hordes):

  • Slightly above-average amounts of rationality fairly often make things worse, since increased rationality, like any change in one’s mode of decision-making, can move people out of local optima.

  • Significantly larger amounts of rationality predictably make things better, since, after a while, the person/society actually has enough skills to notice the expected benefits of “doing things the way most people do them” (which are often considerable; cultural action-patterns don’t come from nowhere) and to fairly evaluate the expected benefits of potential changes, and to solve the intrapersonal or societal coordination problems necessary to actually implement the action from which best results are expected.

Though I agree with Yvain's points elsewhere that we need detailed, concrete, empirical arguments regarding the potential benefits claimed from these larger amounts of rationality.

Comment author: PhilGoetz 16 April 2009 07:22:09PM *  9 points [-]

more rational societies have better science and better economies

More-developed societies develop technology; less-developed societies use them without paying the huge costs of development.

It's not evident which strategy is a win. Historically, it often appears that those who develop tech win. But not always. Japan has for decades been cashing in on American developments in cars, automation, steelmaking, ICs, and other areas.

If American corporations were required to foot the bill for the education needed for technological development, instead of having it paid for by taxpayers and by students, they might choose not to.

Comment author: whowhowho 19 February 2013 05:37:30PM *  4 points [-]

More-developed societies develop technology; less-developed societies use them without paying the huge costs of development.

If you patent something, you can charge what you like for the license. Were you suggesting that some countries ignore patent law, or that externalities (such as failed R&D projects and education costs) don't get recompensed? Or something else?

But not always. Japan has for decades been cashing in on American developments in cars, automation, steelmaking, ICs, and other areas.

That's probably unfair. Japan files a lot of patents -- more than the US by some measures.

The subject was discussed at Overcoming Bias recently.

Comment author: zslastman 22 November 2012 09:05:20PM 1 point [-]

I'm no economist, but don't they already pay for it to a certain extent, in the form of the higher wages educated workers demand?

Comment author: Ritalin 16 February 2013 12:03:31AM 2 points [-]

I think that's more a function of the rarity of the educated individuals of the needed sort, than of the cost of their education.

Comment author: aausch 15 April 2009 12:48:48AM 8 points [-]

...well, in advance of the lottery actually running, we can perhaps all agree that it is a good idea to give the selectees drugs that will induce extra courage, and shoot them if they run away.

I've set my line of retreat at a much higher extreme. I expect humans trained in rationality, when faced with a situation where they must abandon their rationality in order to win, to abandon their rationality. If the most effective way to produce a winning army is to irreversibly alter the brains of soldiers to become barbarians, the pre-lottery agreement, for me, would include that process (say brain washing, drugging and computer implants), as well as appropriate ways to pacify the army once the war has been completed.

I expect a rational society, when faced with the inevitability of war, would pick the most efficient way to pound the enemy into dust, and go as far as this, if required.

Caveats: I don't actually expect anything this extreme would be required for winning most wars. I have a nagging doubt, that it may not be possible to form a society of humans which is at the same time both rational, and willing to go to such an extreme.

Comment author: VAuroch 10 December 2013 05:26:53AM 0 points [-]

So basically the Culture-Idiran War version of "when you need soldiers, make people born to be warriors".

Comment author: Psy-Kosh 15 April 2009 03:40:07AM 7 points [-]

A couple comments. I think I overall agree, though I admit this is one of those "it gives me the willies, dun want to think about it too much" things for me. (Which, of course, means it's the sort of thing I especially should think about to see what ways I'm being stupid that I'm not letting myself see...)

Anyways, first, as far as methods of group decision making better than a chain of command type thing... I would expect that "better" in this context, would actually have to have a stricter requirement than merely "produces a more correct answer" but "produces a more correct answer QUICKLY", since group decision methods that we currently know of tend to, well, take more time, right?

Also, as far as precommiting to follow the captain (or other appropriate officer), that should be "up to limits where actually disobeying, even when taking into account Newcomb type arguments, is actually the Right Thing". (for example, commitment to obey until "egregious violations of morality" or something)

Semiformalizing this in this context seems tricky, though. Maybe something like this: for morality related disobediences, the rule is obey until "actually losing this battle/war/skirmish/whatever-the-correct-granularity-is would actually be preferable to obeying."?

I'm just stumped as far as what a sane rule for "captain is ordering us to do something that's really really self destructively stupid in a way that absolutely won't achieve anything useful" is. Maybe the "still obey under those circumstances" rule is the Right Way, if the probability of actually being (not just seeming to be) in such a situation is low enough that it's far better to precommit to obey unconditionally (up to extreme morality situations, as mentioned above)

Comment author: ChrisHibbert 15 April 2009 05:26:21PM 2 points [-]

You're right, in principle, about both things. There's a limit to our willingness to follow orders based on raw immorality of the orders. That's what Nuremberg, My Lai, and Abu Ghraib were about. But we also want to constrain our right to claim that we're disobeying for morality so we don't do it in the heat of action unless we're right. Tough call for the individual to make, and tough to set up proper incentives for.

But that's the goal. Follow orders unless ..., but don't abuse your right to invoke the exception.

Comment author: BillyOblivion 27 March 2011 11:58:33AM 6 points [-]

To pick a 2-year-old nit:

That was what Nuremberg and My Lai were about, but that is not what Abu Ghraib was about. At Abu Ghraib most of the events and acts that were made public, and most of what people are upset about, were done by people who were violating orders--with some exceptions, and from what I can tell most of the exceptions were from non-military organizations.

I'm not going to waste a lot more time going into detail, but the people who went to jail went there for violating orders, and the people who got "retired" got it because they were shitty leaders and didn't make sure their troops were well behaved.

In an "appeal to authority", I've been briefed several times over the last 20 years on the rules of land warfare; I've spent time in that area (in fact when the original article was posted I was about 30 miles from Abu Ghraib) and a very good friend of mine was called in to help investigate/document what happened there. When his NDA expires I intend to get him drunk and get the real skinny.

This doesn't change the thrust of your argument--which not only do I agree with, but is part and parcel of military training these days. It is hammered into each soldier, sailor, marine and airman that you do NOT have to follow illegal orders. Read "Lone Survivor", a book by Marcus Luttrell about his SEAL Team going up against unwinnable odds in the mountains of Afghanistan--because they, as a team, decided not to commit a war crime. Yeah, they voted on it, and it was close. But one of those things was not like the other, and I felt I had to say something.

Comment author: ChrisHibbert 27 March 2011 08:26:20PM 0 points [-]

I'm not completely convinced that all the people who were punished believed they were not doing what their superiors wanted. I understand that that's the way the adjudication came out, but that's what I would expect from a system that knows how to protect itself. But I'll admit I haven't paid close attention to any of the proceedings.

Is there any good, short, material laying out the evidence that none of the perpetrators heard anything to reinforce the mayhem from their superiors--non-coms etc. included? Your sentence "the people who went to jail went there for violating orders" leaves open the possibility that some of the illegal activity was done by people who thought they were following orders, or at least doing what their superiors wanted.

If you are right, then I'll agree that Abu Ghraib was orthogonal to the main point. But I'm not completely convinced, and it seems likely to me that it looks exactly like a relevant case to the Arab street. Whether or not there were explicit orders from the top of the institution, it looked to have been pervasive enough to have to count as policy at some level.

Comment author: NancyLebovitz 27 March 2011 10:38:43PM 3 points [-]

Torture and Democracy argues that torture is a craft apprenticeship technique, and develops when superiors say "I want answers and I don't care how you get them".

This makes the question of what's been ordered a little fuzzy.

Comment author: BillyOblivion 15 April 2011 01:38:11PM 3 points [-]

(This is a reply to both Mr. Hibbert and Ms. Lebovitz)

I've got a couple problems here--one is that there wasn't an incident @Abu Ghraib; there were a couple periods of time in which certain classes of things happened. Another is that some military personnel (this is from memory since it's just not worth my time right now to google it) from a reservist MP unit, many of whom were prison guards "in real life", abused some prisoners during one or two shifts after a particularly brutal period (in terms of casualties to American forces from VBIEDs/suicide bombers). These particular abuses (getting detainees naked, piled up, etc.) were not done as part of information gathering, and IIRC many of those prisoners weren't even considered intelligence sources. Abu Ghraib at the time held both Iraqi criminal and insurgent/terrorist suspects.

I haven't paid much attention to the debate since, and have not wasted the cycles on reading any other sources. As I indicated, I've been in the military and rejoined the armed forces around the time that story broke (or maybe later, I'm having trouble nailing down exactly when the story broke).

One thing that did come out was that during the period of time the military abuses took place (as in the shifts that they happened on) there WERE NO OFFICERS PRESENT. That is basically what got the Brigadier General in charge "retired". (She later whined about how she was mistreated by the system. I've got no sympathy. Her people were poorly trained and CLEARLY poorly led from the top down.)

There were other photographs that surfaced of "fake torture"--a detainee dressed in something that looked like a poncho with jumper cables on his arms--he believed the jumper cables were attached to a power source and would light him up like a Christmas tree if he stepped down (again IIRC). This was the action of a non-military questioner, and someone who thought he was following the law--after all, he wasn't doing anything--by scaring the guy there was (absent a weak heart) no risk of injury. It was a really awful looking photo though.

Ms. Lebovitz:

I've known people (not current military, Vietnam era) who engaged in a variety of rather brutal interrogation techniques. The one I have in mind was raised in a primitive part of the US where violence and poverty were more common than education, and spent a long time fighting an enemy that would do things like chop off arms of people who had vaccination scars.

His superiors didn't have to tell him anything. (Note I have never said that "we" haven't engaged in these sorts of behaviors, only that it didn't happen under our watch in Abu Ghraib. Some of the stuff that happened before we took over, when it was Saddam's prison? It's hard for me to watch, and I have a bit of a tough stomach for that sort of thing.)

And this notion that "a person being tortured is likely to say whatever he thinks his captors want to hear, making it one of the poorest methods of gathering reliable information" is pure bullshit.

Yes, if I grab random people off the street and waterboard them I will get no useful information. If 5 people break into my house and kidnap my daughter, but only 4 get out, he WILL give me the information I want. He will say anything to stop the pain, and that anything happens to be what I want to hear.

This is again orthogonal to what I was discussing with Mr. Hibbert--I was not claiming that torture doesn't happen (it does), but that most of what the public knows about what happened at Abu Ghraib wasn't torture or abuse ordered by those above, and in some cases it was not even what the perpetrator thought of as abuse.

Comment author: Psy-Kosh 15 April 2009 07:04:06PM 0 points [-]

Well, that's more "what laws should there be/what sort of enforcement ought there to be?" I was more asking with regards to "what underlying rule is the Rational Way"? :)

ie, some level of being willing to do what you're told even if it's not optimal is a consequence of the need to coordinate groups and stuff. ie, the various newcomb arguments and so on.. I'm just trying to figure out where that breaks down, how stupid the orders have to seem before that implodes, if ever.

The morality one was easier, since the "obey even though you think you know better" thing is based on one boxing and having the goal of winning that battle/whatever. If losing would actually be preferable to obeying, then the single iteration PD/newcomb problem type stuff doesn't seem to come up as strongly.

Any idea what an explicit rule a rationalist should follow with regards to this might look like? (not necessarily "how do we enforce it" though. Separate question)

Even an upper limit criteria would be okay. ie, something of the form "I don't know the exact dividing line, but I think I can argue that at least if it gets to this point, then disobeying is rational"

(which is what I did for the morality one, with the "better to lose than obey" criteria.)

Comment author: ChrisHibbert 16 April 2009 06:56:10PM 2 points [-]

No, I don't have a boiled down answer. When I try to think about it, rational/right includes not just the outcome of the current engagement, but the incentives and lessons left behind for the next incident.

Okay, here's one example I've used before: torture. It's somewhat orthogonal to the question of following orders, but it bears on the issue of setting up incentives for how often breaking the rules is acceptable. I think the law and the practice should be that torture is illegal and punished strictly. If some person is convinced that imminent harm will result if information isn't extracted from a suspect, and that it's worth going to jail for a long time in order to prevent the harm, then they are able to (which is not the same as authorized) torture. But it's always at the cost of personal sacrifice. So, if you think a million people will die from a nuke, and you're convinced you can actually get information out of someone by immoral and prohibited means (which I think is usually the weakest link in the chain) and you're willing to give up your life or your liberty in order to prevent it, then go for it.

But don't ever expect a hero's welcome for your sacrifice. It's a bad choice that's (conceivably) sometimes necessary. The idea that any moral society would authorize the use of torture in routine situations makes me sick.

Comment author: rhollerith 16 April 2009 09:38:05PM *  3 points [-]

I think people exist who will make the personal sacrifice of going to jail for a long time to prevent the nuke from going off. But I do not think people exist who will also sacrifice a friend. But under American law that is what a person would have to do to consult with a friend on the decision of whether to torture: American law punishes people who have foreknowledge of certain crimes but do not convey their foreknowledge to the authorities. So the person is faced with making what may well be the most important decision of their lives without help from any friend or conspiring somehow to keep the authorities from learning about the friend's foreknowledge of the crime. Although I believe that lying is sometimes justified, this particular lie must be planned out simultaneously with the deliberations over the important decision -- potentially undermining those deliberations if the person is unused to high-stakes lies -- and the person probably is unused to high-stakes lies if he is the kind of person seriously considering such a large personal sacrifice.

Any suggestions for the person?

Comment author: TheOtherDave 22 November 2010 02:55:35PM 2 points [-]

Any suggestions for the person?

Discuss a hypothetical situation with your friend that happens to match up in all particulars with the real-world situation, which you do not discuss.

It isn't actually important here that your friend be fooled, the goal is to give your friend plausible deniability to protect her from litigation.

Comment author: Psy-Kosh 16 April 2009 07:28:26PM 0 points [-]

Yes. I am sympathetic to that view of "how to deal with stuff like torture/etc", but that doesn't answer "when to do it".

ie, I wasn't saying "when should it be 'officially permitted'?" but rather at what point should a rationalist do so? how convinced does a rationalist need to be, if ever?

Or did I completely misunderstand what you were saying?

Comment author: ChrisHibbert 17 April 2009 05:37:36AM 1 point [-]

No, you understood me. I sidestepped the heart of the question.

This is an example where I believe I know what the right incentives structure of the answer is. But I can't give any guidance on the root question, since in my example case, (torture) I don't believe in the efficacy of the immoral act. I don't think you can procure useful information by torturing someone when time is short. And when time isn't short, there are better choices.

Comment author: moshez 25 March 2011 11:33:58PM 1 point [-]

I guess the big question here is why you do not believe it. Since you (and I!) would prefer to live in a world where torture is not effective, we must be aware that our bias is to believe it is not effective -- it makes the world nicer. Hence, we must consciously shift up our belief in the effectiveness of torture from our "gut feeling." Given that, what evidence have you seen that torture, for the purposes of solving NP-like problems (meaning, a problem where a solution is hard to find but easy to verify, like "where is the bomb hidden"), is not effective? I would say that for me personally, the amount that my preferences shift in the presence of relatively mild pain ("I prefer not to medicate myself" vs. "Gimme that goddamn pill") is at least cause to suspect that someone who is an expert at causing vast amounts of pain would be able to make me do things I would normally prefer not to do (like tell them where I hid the bomb) to stop that pain.

Of course, torture used for unverifiable information is completely useless for exactly the same reason -- the prisoner will say anything they can get away with to make the pain stop.

Comment author: ChrisHibbert 27 March 2011 08:40:12PM *  2 points [-]

Maybe my previous answer would have been cleaner if I had said "I don't think I can procure useful information by torturing someone when time is short." It's a relatively easy choice for me, since I doubt that, even with proper tools, I could appropriately gauge the level of pain to the necessary calibration in order to get detailed information in a few minutes or hours.

When I think about other people who might have more experience, it's hard to imagine someone who had repeatedly fallen into the situation where they were the right person to perform the torture so they had enough experience to both make the call, and effectively extract information. Do you want to argue that they could have gotten to that point without violating our sense of morality?

Since my question is "What should the law be?", not "is it ever conceivable that torture could be effective?" I still have to say that the law should forbid torture, and people should expect to be punished if they torture. There may be cases where you or I would agree that in that circumstance it was the necessary thing to do, but I still believe that the system should never condone it.

Comment author: moshez 28 March 2011 12:58:01AM 0 points [-]

You talked about two issues that have little to do with each other: 1. What should the law be? (I didn't argue with your point here, so re-iterating it is useless?) 2. A statement that was misleading: apparently you meant that you're not a good torturer. That is not impossible. I think that given a short amount of time, with someone who knows something specific (where the bomb is hidden), my best chance (in effective, not moral, ordering) is to torture them. I'm not a professional torturer, I luckily never had to torture anyone, but like any human, I have an understanding of pain. I've watched movies about torture, and I've heard about waterboarding. If I decided that this was the ethical thing to do (which we both agree, in some cases, is possible), and I was the only one around, I'd probably try waterboarding. It's risky, there's a chance the prisoner might die, but if I have one hour, and 50 million people will die otherwise, I don't see any better way. So let me ask you flat out -- I'm assuming you also read about waterboarding, and that when you need to, you have access to the WP article about waterboarding. What would you do in that situation? Ask nicely?

All that does not go to condone torture. I'm just saying, if a nation of Rationalists is fighting with the Barbarians, then it's not necessarily in their best interests to decide they will never torture no matter what.

Comment author: ChrisHibbert 02 April 2011 05:44:54PM 3 points [-]

My point wasn't just that I wouldn't make a good torturer. It seems to me that ordinary circumstances don't provide many opportunities for anyone to learn much about torture (other than from fictional sources). I have little reason to believe that inexperienced torturers would be effective in the time-critical circumstances that seem necessary for any convincing justification of torture. You may believe it, but it's not convincing to me. So it would be hard to ethically produce trained torturers, and there's a dearth of evidence on the effectiveness of inexperienced torturers in the circumstances necessary to justify it.

Given that, I think it's better to take the stance that torture is always unethical. There are conceivable circumstances when it would be the only way to prevent a cataclysm, but they're neither common, nor easy to prepare for.

And I don't think I've said that it would be ethical, just that individuals would sometimes think it was necessary. I think we are all better off if they have to make that choice without any expectation that we will condone their actions. Otherwise, some will argue that it's useful to have a course of training in how to perform torture, which would encourage its use even though we don't have evidence of its usefulness. It seems difficult to produce evidence one way or another on the efficacy of torture without violating the spirit of the Nuremberg Code. I don't see an ethical way to add to the evidence.

You seem to believe that sufficient evidence exists. Can you point to any?

You wanted an explicit answer to your question. My response is that I would be unhappy that I didn't have effective tools for finding out the truth. But my unhappiness doesn't change the facts of the situation. There isn't always something useful that you can do. When I generalize over all the fictional evidence I've been exposed to, it's too likely that my evidence is wrong as to the identity of the suspect, or he doesn't have the info I want, or the bomb can't be disabled anyway. When I try to think of actual circumstances, I don't come up with examples in which time was short and the information produced was useful. I also can't imagine myself personally punching, pistol-whipping, pulling fingernails, waterboarding, etc, nor ordering the experienced torturer (who you want me to imagine is under my command) to do so.

Sorry to disappoint you, but I don't believe the arguments I've heard for effectiveness or morality of torture.

Comment author: Psy-Kosh 17 April 2009 09:20:44PM 0 points [-]

Yeah, the "do it, but keep it illegal and be punished for it even if it was needed" is a possible solution given "in principle it may be useful", which is a whole other question.

But anyways, I was talking about "when should a rationalist soldier be willing to disobey in the name of 'I think my CO is giving really stupid orders'?", since I believe I already have a partial solution to the "I think my CO is giving really immoral orders" case (as described above)

As far as when torture would even be plausibly useful (especially plausibly optimal) for obtaining info? I can't really currently think of any non-contrived situations.

Comment author: nshepperd 22 November 2010 11:42:49PM 1 point [-]

How about this upper limit: when the outcome of (everyone) following orders would be worse than everyone doing their own thing, disobey.

Comment author: Aurini 16 April 2009 02:47:34AM *  4 points [-]

This struck me as relevant:

"If we desire to defeat the enemy, we must proportion our efforts to his powers of resistance. This is expressed by the power of two factors which cannot be separated, namely, the sum of available means and the strength of the Will."

Carl Von Clausewitz, On War, Chapter 1, Section 5. Utmost Exertion of Powers

(I'm still planning on putting together a post on game theory, war, and morality, but I think most of you will be inclined to disagree with my conclusions, so I'm really doing my homework for this one.)

Comment author: CronoDAS 16 April 2009 03:20:12AM 4 points [-]

This is true. In order to win a war, you must convince your enemy that he has lost. Otherwise, he will simply rise to fight again, at a time of his own choosing.

Israel has won many battles, but I don't think it's won any wars - its enemies are still trying to fight it.

Comment author: Broggly 12 January 2011 07:49:05PM 2 points [-]

The idea of non-violent civil defence is based entirely on this idea. The first step is to ensure everyone knows that just because the enemy has lots of armed men marching through the streets doesn't mean you've lost. The second step is to be as uncooperative, incompetent, disruptive and annoying as possible to destroy the enemy's will, and encourage them to give up and go home.

Comment author: Barry_Cotter 12 January 2011 11:16:40PM 6 points [-]

This will only work against enemies who are unwilling to make atrocities part of their official pacification doctrine. It took killing ~30% of (male?) Afghans to convert them to Islam.

On a slightly less odious level, collective punishment and population dispersal/resettlement work pretty well.

Comment author: Broggly 13 January 2011 12:07:21PM 1 point [-]

Yes, like all strategies it depends on the economic, geopolitical, and technological situation you find yourself in. If the enemy is willing to depopulate the land so that they can colonise it, then of course you're not going to be able to win through non-cooperation; but if they need you as workers, then there comes a point where your willingness to sustain losses is so great that, in order to blackmail you into submission, they have to expend so many resources and destroy so much of their potential labour force that it's not worth doing. That is, unless their goal is directly achieved by committing atrocities, they are only able to win by doing so if their willingness to commit atrocities (or other).

Also, there's the effect on morale of committing atrocities. Iraqi soldiers described how disturbing Iranian Human Wave attacks were, and they were killing (para)military forces who were trying to kill them and invade their homeland. The psychological impact of killing civilians would presumably be much greater. Even if the leaders were willing to do so, the soldiers could lose their will to attack unarmed targets and have to be rotated out, which is expensive and could destroy the invader's national will to fight. While the Prague invasion was ultimately able to suppress the Czechs (until the late '80s), the Russians did have a lot of morale problems and needed to rotate their troops out very often.

Population dispersal and resettlement need to be worked out on a case by case basis. It may be possible and worthwhile to resist, depending on how able the enemy army is to physically pick up and drag the citizenry to the trains or whatever (or how well your side has prepared their supplies for being starved out). Population dispersal relies on the enemy being able to coerce you to move from one place to another, and can be considered in the same way as anything else the enemy wants to coerce you to do.

I'm not a pacifist, and I'm trying to avoid believing in it to seem wise ("violence doesn't solve anything") or be contrary ("Everyone thinks armed defence is necessary, so if I disagree it proves I'm smarter"), but as a non-expert I think it's a plausible strategy. While it wouldn't beat the Barbarians (just as standing in front of a trolley won't stop it, no matter how fat you are), it could beat many real world enemies.

Comment author: CronoDAS 21 August 2011 04:57:18AM *  1 point [-]

I wonder how well this would have worked on the Mongols? They were certainly willing to slaughter all the inhabitants of a city that resisted - but if you shut up and paid your taxes they usually wouldn't kill you. I don't know what they would do with people who were willing to give up their property but not willing to perform labor for them. The Mongols frequently conscripted artisans, engineers, and other skilled workers from conquered peoples into performing supporting roles in their armies - saying "no" was probably a good way to get a sword run through you.

Comment author: Aurini 16 April 2009 09:28:14AM 0 points [-]

Well, maybe not everyone will innately want to disagree with me... but I still think this will undermine some preconceptions. Wish me luck (I'll do my damndest).

Cheers.

Comment author: MBlume 16 April 2009 09:30:44AM 2 points [-]

I think most of you will be inclined to disagree with my conclusions, so I'm really doing my homework for this one.

Sounds like it should be a fun discussion then -- I'll look forward to it =)

Comment author: [deleted] 23 October 2011 12:16:28PM 0 points [-]

Have you written this since the post was made?

Comment author: Aurini 27 October 2011 07:10:51AM 2 points [-]

Yes; thank you. At http://www.staresattheworld.com/

Nothing particularly relevant to LW, mind you - and not quite as rigorous as this site would demand - more addressing social/political issues there, with a Reactionary bent. Also YouTubing at: http://www.youtube.com/user/Aurini

I really ought to finish that series on sales/manipulation though.

Comment author: [deleted] 27 October 2011 08:55:27AM *  1 point [-]

I'll check out your stuff.

Edit: A bit off topic. I found your argument, that Democracy being interesting is a red flag, very interesting.

Comment author: jimmy 15 April 2009 05:44:18PM 4 points [-]

What about just paying them to fight? You can have an auction of sorts to set the price, but in the end they'd select themselves. You could still use the courage enhancing drugs and shoot those who try to breach the contract.

One might respond "no amount of (positive) money could convince me to fight a war", but what about at some negative amount? After all, everyone else has to pay for the soldiers.

Comment author: matt 15 April 2009 10:24:40PM *  1 point [-]

That "auction of sorts" would be the normal market mechanism, right? There are death rates that vary between professions now, with risks priced into the prevailing market wage for those professions. I don't see why soldiery should be different.

Comment author: jimmy 16 April 2009 10:33:26PM 0 points [-]

Well, yeah, nothing special. It's just that the government doesn't usually try to use smart mechanisms in deciding what to pay people (soldiers) so unless we're talking about a private army, then you gotta specify that you pay them right.

Comment author: PhilGoetz 16 April 2009 08:39:46PM 0 points [-]

That's why a contractor in Iraq today makes about $200,000/yr for a job that would pay $70,000/yr in the US. (A soldier makes, I think, a median of something like $40,000/yr.)

Comment author: Nebu 15 April 2009 08:04:11PM 0 points [-]

The problem with this idea is that if I have a very strong expectation that the barbarians are going to kill me, then no amount of money would convince me to fight. Even if you enforce payment from all the non-fighters, I still wouldn't fight. Better to incur a trillion dollars of debt than to die, right? Especially if everyone else around me also incurs a trillion dollars of debt such that after the war, we all agree that this debt is silly and nullify it.

Comment author: matt 15 April 2009 10:21:56PM *  2 points [-]

As a soldier you're not facing certain death at any of the relevant decision points (a statistically irrelevant number of exceptions exist to this rule). You're facing some probability of death. When you get into your car or onto your bike you're facing some probability of death. Why do you do that? Commanders don't (irrelevant exceptions exist) send troops to certain death, because, rationalist or not, they don't go. War is not like StarCraft.

Comment author: orthonormal 15 April 2009 11:13:40PM 14 points [-]

Eliezer's point is that, given a certain decision theory (or, failing that, a certain set of incentives to precommitment), rational soldiers could in fact carry out even suicide missions if the tactical incentives were strong enough for them to precommit to a certain chance of drawing such a mission.

This has actually come up: in World War II (citation in Pinker's "How the Mind Works"), bomber pilots making runs on Japan had a 1 in 4 chance of survival. Someone realized that the missions could be carried out with half the planes if those planes carried bombs in place of their fuel for the return trip; the pilots could draw straws, and half would survive while the other half went on a suicide mission. Despite the fact that precommitting to this policy would have doubled their chances of survival, the actual pilots were unable to adopt this policy (among other things, because they were suspicious that those so chosen would renege rather than carry out the mission).

I think Eliezer believes that a team of soldiers trained by Jeffreyssai would be able to precommit in this fashion and carry the mission through if selected. I think that, even if humans can't meet such a high standard by training and will alone, there could exist some form of preparation or institution that could make it a workable strategy.

Comment author: CronoDAS 21 August 2011 04:29:38AM 3 points [-]

This has actually come up: in World War II (citation in Pinker's "How the Mind Works"), bomber pilots making runs on Japan had a 1 in 4 chance of survival.

I'll need to see that citation, actually; it couldn't possibly have been a 75% fatality rate per mission. (When my father says a number is bogus, he's usually right.) Even Doolittle's raid, in which the planes did not have enough fuel to return from Japan but instead had to land in Japan-occupied China, had a better survival rate than one in four: of the 80 airmen involved, 4 were killed and 8 were captured. (Of the eight who were captured, four died before the war ended.)

Comment author: orthonormal 21 August 2011 02:05:41PM 1 point [-]

Correction- it's for a pilot's entire quota of missions, not just one:

Decades before Tooby and Cosmides spelled out this logic, the psychologist Anatol Rapoport illustrated it with a paradox from World War II. (He believed the scenario was true but was unable to verify it.) At a bomber base in the Pacific, a flier had only a twenty-five percent chance of surviving his quota of missions. Someone calculated that if the fliers carried twice as many bombs, a mission could be carried out with half as many flights. But the only way to increase the payload was to reduce the fuel, which meant that the planes would have to fly on one-way missions. If the fliers would be willing to draw lots and take a one-in-two chance of flying off to a certain death instead of hanging on to their three-in-four chance of flying off to an unpredictable death, they would double their chance of survival; only half of them would die instead of three-quarters. Needless to say, it was never implemented. Few of us would accept such an offer, though it is completely fair and would save many lives, including, possibly, our own. The paradox is an intriguing demonstration that our mind is equipped to volunteer for a risk of death in a coalition but only if we do not know when death will come.

Comment author: CronoDAS 21 August 2011 11:38:17PM *  5 points [-]

Yeah, if it's for an entire quota of missions, the math doesn't work out - each pilot normally would fly several missions, making the death rate per flight less than 50%, so it wouldn't be a good deal.
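(Spelling out the arithmetic with an assumed quota size -- the ten-mission figure below is hypothetical; only the 25%-survival-over-a-quota premise comes from the quoted passage.)

```python
# Rough check of why the one-way scheme fails if 25% survival is per quota.
QUOTA = 10              # hypothetical number of missions per flier
P_SURVIVE_QUOTA = 0.25  # survival over the whole quota, per the Rapoport story

p_survive_mission = P_SURVIVE_QUOTA ** (1 / QUOTA)
p_die_per_mission = 1 - p_survive_mission
print(f"per-mission death rate: {p_die_per_mission:.1%}")  # about 13%

# Expected pilot deaths per 100 bombing sorties the campaign requires:
normal_scheme = 100 * p_die_per_mission   # every sortie carries a ~13% risk
one_way_scheme = (100 / 2) * 1.0          # half as many sorties, each one fatal
print(normal_scheme, one_way_scheme)      # roughly 13 deaths vs 50 deaths

# The trade only doubles survival if the 25% figure applies to a single
# mission; spread over a quota, the one-way scheme costs far more lives.
```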

Comment author: Strange7 23 March 2011 08:07:28PM 0 points [-]

(among other things, because they were suspicious that those so chosen would renege rather than carry out the mission)

Let's say somebody who flies out with extra bombs instead of fuel has an overall 0.1% chance of making it back alive through some heroic exploit. Under the existing system, with 25% survival, you're asking every pilot to face two half-lives worth of danger per mission. With extra bombs, that's half as many missions, but each mission involves ten half-lives worth of danger. Is it really all that rational to put the pilots in general in five times as much danger for the same results? After all, drawing the long straw doesn't mean you're off the hook. Everybody's going to have to fly a mission sooner or later.

Comment author: orthonormal 23 March 2011 11:28:35PM 7 points [-]

Thinking in terms of "half-lives of danger" is your problem here; you're looking at the reciprocal of the relevant quantity, and you shouldn't try and treat those linearly. Instead, try and maximize your probability of survival.

It's the same trap that people fall into with the question "if you want to average 40 mph on a trip, and you averaged 20 mph for the first half of the route, how fast do you have to go on the second half of the route?"

Comment author: Alicorn 23 March 2011 11:40:08PM *  1 point [-]

"if you want to average 40 mph on a trip, and you averaged 20 mph for the first half of the route, how fast do you have to go on the second half of the route?"

How do you answer this question?

Edit: MBlume kindly explained offsite before the offspring comments were posted. Er, sorry to have wasted more people's time than I needed.

Comment author: [deleted] 25 March 2011 01:39:32AM 3 points [-]

It's still an interesting exercise to try to come up with the most intuitive explanation. One way to do it is to start by specifying a distance. Making the problem more concrete can sometimes get you away from the eye-glazing algebra, though of course then you need to go back and check that your solution generalizes.

A good distance to assign is 40 miles for the whole trip. You've gone 20 mph for the first half of the trip, which means that you traveled for an hour and traveled 20 miles. In order for your average speed to be 40 mph you need to travel the whole 40 miles in one hour. But you've already traveled for an hour! So - it's too late! You've already failed.

Comment author: Alicorn 25 March 2011 01:41:45AM *  3 points [-]

Yes, that's roughly how MBlume explained it (edited for concision and punctuation):

MBlume: I can help you! or could if there was an answer...

Alicorn: Good, I can delete the comment before it gets downvoted again! I half-suspected there was not, and that it depended on the distance of the journey, but wasn't sure

MBlume: that is a silly thing for people to downvote. it doesn't actually, but it is impossible. you have to cover the rest of the distance instantly to average 40mph

Alicorn: Oh, and they won't let your car onto the transporter pad, gotcha

MBlume: nodnod

Alicorn: ...why do you have to cover the distance instantly?

MBlume: (they are jerks.) because... let's pretend the distance is 40 miles. in order to average 40 mph

Alicorn: you need to get there in an hour

MBlume: you would have to cover the whole distance in an hour, nodnod

Alicorn: ahhhh, now I see.

MBlume: but you drive half of that (20 miles) at 20 mph... nodnod

Alicorn: you took an hour to go 20 miles at - yes. that.

MBlume: ^_^

Comment author: [deleted] 25 March 2011 02:02:35AM *  2 points [-]

If that's an actual chat record, I'm getting old for this world. ... okay, on a third read-through, I'm starting to comprehend the rhythm and lingo.

Comment author: JGWeissman 23 March 2011 11:56:23PM 2 points [-]

Suppose the total trip is a distance d.

d = (average speed) (time)
time = d / (average speed)

So if your average speed is 40 (mph), your total time is d/40.

You have already travelled half the distance at speed 20 (mph), so that took time (d/2)/20 = d/40. Your time left to complete the trip is your total time minus the time spent so far: d/40 - d/40 = 0. In this time you have to travel the remaining distance d/2, so you have travel at a speed (d/2)/0 = infinity, which means it is impossible to actually do.

Comment author: rhollerith_dot_com 23 March 2011 11:57:41PM 0 points [-]

Let t1 be the time taken to drive the first half of the route.

Let t2 be the time taken to drive the second half.

Let d1 be the distance traveled in the first half.

Let d2 be the distance traveled in the second half.

Let x be what we want to know (namely, the average speed during the second half of the route).

Then the following relations hold:

40 * (t1 + t2) = d1 * d2.

20 * t1 = d1.

x * t2 = d2.

d1 = d2.

Use algebra to solve for x.

To average 40 mph requires completing the trip in a certain amount of time, and even without doing any algebra, I notice that you will have used all of the available time just completing the first half of the trip, so your speed would have to be infinitely fast during the second half.

I am pretty confident in that conclusion, but a little algebra will increase my confidence, so let us calculate as follows: the time you have to do the trip = t1 + t2 = d1 / 40 + d2 / 40, which (since d1 = d2) equals d1 / 20, but (by equation 2) d1 / 20 equals t1, so t2 must be zero.

Comment author: FAWS 25 March 2011 12:30:25AM *  4 points [-]

I expect a high probability of this explanation being completely useless to someone who professes to be bad at math. Their eyes are likely to glaze over before the halfway point, and the second half isn't infinitely accessible either.

Comment author: Alicorn 25 March 2011 01:30:23AM 0 points [-]

I already had the problem explained to me before I saw the grandparent, but I think you're right - I might have been able to puzzle it out, but it'd have been work.

Comment author: rhollerith_dot_com 25 March 2011 01:22:43AM 0 points [-]

I have to agree that a shorter explanation with just words in it would be better for someone with significant aversive math conditioning.

Comment author: Vaniver 26 March 2011 06:16:01PM 2 points [-]

40 * (t1 + t2) = d1 * d2.

It also doesn't help the explanation when you make an error. That should be d1 + d2.

Comment author: rhollerith_dot_com 26 March 2011 06:45:39PM 0 points [-]

Acknowledged.

Comment author: FAWS 25 March 2011 12:25:39AM 2 points [-]

After all, drawing the long straw doesn't mean you're off the hook. Everybody's going to have to fly a mission sooner or later.

The probability of drawing the long straw twice in a row is four times as high as the probability of making it back twice in a row given 25% survival.
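A quick arithmetic check of that claim (an editorial sketch; it assumes that, since only half as many extra-bombs missions are needed, each draw sends half the pilots out, so the long straw comes up with probability 1/2):

```python
p_long_straw = 0.5          # assumed: half the pilots are selected each round
p_survive_mission = 0.25    # survival per ordinary mission, from the thread

p_long_straw_twice = p_long_straw ** 2          # 0.25
p_make_it_back_twice = p_survive_mission ** 2   # 0.0625

print(p_long_straw_twice / p_make_it_back_twice)  # 4.0
```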

Comment author: PhilGoetz 16 April 2009 07:29:24PM 1 point [-]

How did Japan convince pilots to be kamikazes?

Comment author: orthonormal 17 April 2009 04:55:45PM 4 points [-]

Chiefly by a code of death-before-dishonor (and death-after-dishonor) which makes sense for a warring country to precommit to. Though it doesn't seem there was much conscious reasoning that went into the code's establishment, just an evolutionary optimization on codes of honor among rival daimyo, which resulted in the entire country having the values of the victorious shoguns instilled.

Comment author: jimmy 16 April 2009 10:36:34PM 3 points [-]

I'm no history expert, but I remember hearing something about cutting off a finger and promising to kill anyone that shows up missing that finger.

Comment author: The_Duck 27 August 2012 09:30:29AM 0 points [-]

I think Eliezer believes that a team of soldiers trained by Jeffreysai would be able to precommit in this fashion and carry the mission through if selected. I think that, even if humans can't meet such a high standard by training and will alone, there could exist some form of preparation or institution that could make it a workable strategy.

For example, I suspect Jeffreysai would have no trouble proposing that anyone designated for a suicide mission who reneged would be tortured for a year and then put to death.

Comment author: jimmy 16 April 2009 10:43:45PM 1 point [-]

Well, that's exactly the objection I tried to cover with the second half of my comment.

The thing is that you're assuming it won't actually be paid, so that there is effectively no debt or pay. Under that assumption, of course it won't work, since you're not actually doing it.

The debt is not silly; it's a way of saving your country. If you have good debt collectors and you owe enough, you'll want to fight. Use your imagination.

In the few cases where the probability of dying is near one instead of near zero, being as productive as possible and getting just enough money to survive on (third-world-type real poor, not the "poor" American kind) might still be better than dying. In these cases you'd basically have to punish people who can't pay instead of helping them pay as much as they can.

Comment author: ErnstMuller 08 May 2011 10:46:38PM 0 points [-]

Then you are suffering strongly from the bystander effect: http://en.wikipedia.org/wiki/Bystander_effect

One could translate this effect as "the warm fuzzy feeling that there are enough people around who will do the job, so one doesn't need to bother oneself".

The effect is very strong. So, adjust your thoughts: the barbarians will kill you either way. There aren't enough people who care, so you yourself have to rise and do something. (That also applies to everyday life: if you want something done, especially in a busy and people-rich environment, do it yourself.)

Comment author: haig 15 April 2009 08:01:09AM *  4 points [-]

In group #2, where everybody at all levels understands all tactics and strategy, they would all understand the need for a coordinated, galvanized front, and so would figure out a way to designate who takes orders and who does the ordering, because that is the rational response. The maximally optimal rational response might be a self-organized system where the lines are blurred between those who do the ordering and those who follow the orders, and may alternate in round-robin fashion or some other protocol. That boils down to a technical problem in operations research or systems engineering.

On another note, sometimes the most rational response for 'winning' will conflict with our morality, or at least our emotions. Von Neumann advocated a first-strike response against the Soviets early on, and he might have been right. Even if his was the most rational decision, you do see the tangle of problems associated with it. What if winning means losing a part of you that was a huge part of the reason you were fighting in the first place?

Comment author: Annoyance 15 April 2009 03:35:00PM 13 points [-]

"Jeffreyssai probably wouldn't give up against the Evil Barbarians if he were fighting alone."

WWJD, indeed.

But since Jeffreyssai is a fictional creation of Eliezer Yudkowsky, appealing to what we imagine he would do is nothing more than an appeal to Eliezer Yudkowsky's ideas, in the same way that trying to confirm a newspaper's claim by picking up another copy of the same edition is just appealing to the newspaper again.

How can we test the newspaper instead of appealing to it?

Comment author: emeritusl 15 April 2009 01:36:28AM 3 points [-]

Anyone reminded of 'The World of Null-A'? Rationalists do win the war over barbarians in this case.

Comment author: Eliezer_Yudkowsky 15 April 2009 01:38:23AM 1 point [-]

Depending on how broadly you look at it, this trope has been done around a thousand different ways in science fiction.

Comment author: billswift 15 April 2009 04:48:27AM *  1 point [-]

And the other way around, too, don't forget Arthur Clarke's "Superiority". Of course, the losers there weren't rationalists, just bureaucrats. There are plenty of people though who consider bureaucracy the most rational means of organizing large groups of people.

Comment author: NancyLebovitz 22 November 2010 02:00:29PM 0 points [-]

If that's the van Vogt I was reminded of, it's interesting because it has it that rational people will independently agree on what needs to be done in a military situation (iirc, at least in simple early stages of guerrilla warfare), and not need centralized coordination.

I have no idea whether this is even plausible, but I'm not dead certain it's wrong either.

Comment author: TheOtherDave 22 November 2010 03:23:18PM 1 point [-]

As stated, it strikes me as unlikely, but something similar seems plausible.

People who have been trained consistently, and can rely on each other to behave in accordance with that training, find it easier to coordinate bottom-up. (Especially, as you say, in guerrilla situations.)

It's not precisely "we all independently agree" but "we each have a pretty good intuition about what everyone else will do, and can take that prediction into account when deciding what to do."

64 such people might independently decide that what's necessary is to surround a target, realize the other 63 will likely conclude the same thing, select a position on the resulting circle unlikely to be over-selected (if all 64 are starting from the same spot, they can each flip a coin three times to pick an octant and then smooth out any clumpiness when they get there; if they are evenly distributed to start with they can pick the spot nearest to them, etc.), and move there.

This is a recurring theme in Dorsai... the soldiers share a very specific and comprehensive training that allows for this kind of coordinated spontaneity. Of course, this is all fictional evidence, but something like this ought to be true in real life. The question is under what circumstances this sort of self-organization does better than centralized strategic planning.

As the OP uses the term, at least, the Dorsai are more rational than their opponents, even though they might not describe themselves that way. We know this, because they consistently make choices that let them win.
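The coin-flip octant scheme described above is easy to simulate; here is a small editorial sketch (assumed parameters: 64 agents, 8 octants, three fair coin flips each), not part of the original comment:

```python
import random
from collections import Counter

def pick_octant() -> int:
    """Three independent coin flips select one of 8 octants uniformly."""
    return sum(random.randint(0, 1) << i for i in range(3))

assignments = Counter(pick_octant() for _ in range(64))
print(sorted(assignments.items()))
# A typical run puts roughly 8 agents in each octant, with some clumpiness
# left to smooth out on arrival -- and no central coordinator was needed.
```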

Comment author: pjeby 22 November 2010 03:37:44PM 4 points [-]

Of course, this is all fictional evidence, but something like this ought to be true in real life. The question is under what circumstances this sort of self-organization does better than centralized strategic planning.

From the USMC Warfighting Doctrine manual, pp. 62-63 (PDF pages 64-65):

Our philosophy of command must also exploit the human ability to communicate implicitly. We believe that implicit communication -- to communicate through mutual understanding, using a minimum of key, well-understood phrases or even anticipating each other's thoughts -- is a faster, more effective way to communicate than through the use of detailed, explicit instructions. We develop this ability through familiarity and trust, which are based on a shared philosophy and shared experience.

(Believe it or not, I didn't add any emphases to the above: the italicized phrases are that way in the original!)

Now, the USMC warfighting doctrine is specifically intended for state-vs-state warfare, so one may take it with a grain of salt as to whether it's suitable for dealing with a barbarian horde or other guerrillas. But, at least it's some non-fictional evidence. ;-)

Comment author: PhilGoetz 16 April 2009 07:36:03PM 6 points [-]

I'm wondering whether the rationalists can effectively use mercenaries. Why doesn't the US have more mercenaries than US soldiers? In the typically poverty-stricken areas where US forces operate, we could hire and equip 100-1000 locals for the price of a single US soldier (which, when you figure in health-care costs, is so much that we basically can't afford to fight wars using American soldiers anymore). We might also have less war opposition back at home if Americans weren't dying.

Comment author: Ford 26 March 2011 11:05:59PM 3 points [-]

We do use mercenaries: http://www.newsweek.com/2010/08/10/mercenaries-in-iraq-to-take-over-soldiers-jobs.html

But there might be cheaper options. If we paid Afghan girls $10/day to go to school, would the Taliban collapse?

We could be a little more subtle. Start by offering jobs to do something the Taliban wouldn't consider threatening -- Mechanical Turk work-from-home stuff not requiring literacy, via some kind of specialized radio or satellite link with no access to porn or feminism or anything the Taliban would object to. Every family wants one of those terminals and they can make twice as much money if the girls work (from home) too. Gradually offer higher pay for higher skill levels, starting with nonthreatening stuff like arithmetic but escalating to translating the Koran and then to tasks that would involve reading a wide variety of secular material, analyzing political and judicial systems of different countries (still maybe disguised as a translating job)...

Comment author: CronoDAS 26 March 2011 11:41:24PM *  2 points [-]

But there might be cheaper options. If we paid Afghan girls $10/day to go to school, would the Taliban collapse?

There's no shortage of Afghan girls who already want to go to school or of parents who want to send them. The problem is that there are people who mutilate girls who attend these schools. In the short run, at least, sticks are often more effective at getting the acquiescence of the population than carrots; when collaborators keep getting killed, it's hard to get willing collaborators no matter how much money you offer.

See also.

Comment author: Ford 27 March 2011 01:45:53AM 0 points [-]

I see how the first part of my post could be read as "we need to motivate girls to go to school", which wasn't my intent. More a matter of motivating tradition-bound parents to see educated girls as a major source of income. But I understand that going to school can be risky in Taliban-dominated areas, which is why the second part of my post was all home-based and therefore hard for the Taliban to detect. Even so, I agree that any obvious link to the US government could be a problem.

Comment author: jimrandomh 15 April 2009 03:43:52AM 8 points [-]

Fortunately, this is a case where the least convenient possible world is quite unlike the real world, because modern wars are fought less with infantry and more with money and technology. As technology advances, military robots get cheaper, and larger portions of the military move to greater distances from the battlefield. If current trends continue, wars will be fought entirely between machines, until one side runs out of robots and is forced to surrender (or else fight man-vs-machine, which, in spite of what happens in movies, is probably fruitless suicide).

Comment author: billswift 16 April 2009 04:48:57PM 2 points [-]

The problem with this theory is that people in a poor country are a lot cheaper than cutting edge military robots. In a serious war, the U.S. would quickly run out of "smart bombs" and such. Military equipment is a pure consumption item, it produces nothing at all, so there is only going to be limited investment in it in peacetime. And modern high-tech military equipment requires long lead times for building up (unlike the situation in WWII).

Comment author: Vladimir_Nesov 16 April 2009 08:32:47PM 6 points [-]

Robots get cheaper and stronger over time, while people are a fixed parameter.

Comment author: Jonathan_Graehl 15 April 2009 06:41:56AM 2 points [-]

Edit: lottery won by two votes -> election.

I've heard you say a handful of times now: as justified by some decision theory (which I won't talk about yet), I one-box/cooperate. I'm increasingly interested.

Comment author: Z_M_Davis 15 April 2009 07:27:41AM 2 points [-]

Eliezer has yet to disclose his decision theory, but see "Newcomb's Problem and Regret of Rationality" for the general rationale. Also Wikipedia on Hofstadter's superrationality.

Comment author: rhollerith 15 April 2009 02:43:35AM *  2 points [-]

I agree with every sentence in this post. (And I read it twice to make sure.)

Comment author: PhilGoetz 15 April 2009 01:26:46AM *  4 points [-]

Good post.

Also, historically, evil barbarians regularly fall prey to some irrational doctrine or personal paranoia that wastes their resources (sacrifice to the gods, kill all your Jews, kill everybody in the Ukraine, have a cultural revolution).

We in the US probably have a peculiar attitude on the rationality of war because we've never, with the possible exception of the War of 1812, fought in a war that was very rational (in terms of the benefits for us). The Revolutionary war? The war with Mexico? The Civil War? The Spanish-American War? WWI? WWII? Korea? Vietnam? Iraq? None of them make sense in terms of self-interest.

(Disclaimer: I'm a little drunk at the moment.)

Comment author: CronoDAS 15 April 2009 03:55:52PM 3 points [-]

We stole an awful lot of land by fighting with the American Indians.

Comment author: gwern 15 April 2009 02:41:48AM 3 points [-]

I'm not going to dispute the others, but I kind of had the impression that we did pretty well out of the Mexican and Spanish-American wars; I mean, Texas's oil alone would seem to've paid for the (minimal) costs of those two, right?

Comment author: PhilGoetz 15 April 2009 03:08:07AM *  1 point [-]

In terms of national self-interest, yes. But they weren't causes that I'd personally risk death for.

I'm being inconsistent; I'm using the "national interest" standard for WW2, and the "personal interests" standard for these wars.

Comment author: knb 15 April 2009 06:40:20AM 2 points [-]

Well presumably most people don't actually risk their lives for the cause. They risk their lives for the prestige, power, money, or whatever. Fighting in a war is a good (but risky) way to gain respect and influence. Also there are social costs to avoiding the fight.

Comment deleted 15 April 2009 10:56:30AM [-]
Comment author: PhilGoetz 16 April 2009 03:32:17PM *  1 point [-]

So I should try to be irrational when I'm drunk?

Comment author: gwern 16 April 2009 05:18:07PM 8 points [-]

Well sure. Otherwise you're just wasting the alcohol!

Comment author: RickJS 22 April 2009 04:30:23PM *  1 point [-]

Consider (think WITH this idea for a while. There will be plenty of time to refute it later. I find that, if I START with, "That's so wrong!", I *really* weaken my ability to "pan for the gold".)

Consider that you are using "we" and "self" as a pointer that jumps from one set to another moment by moment. Here is a list of some sets that may be confounded together here; see how many others you can think of.

- These United States (see the Constitution)
- the people residing in that set
- citizens who vote
- citizens with a peculiar attitude
- the President
- Congress
- organizations (corporations, NGOs, political parties, movements, e-communities, etc.)
- the wealthy and powerful
- the particular wealthy and powerful who see an opportunity to benefit from an invasion

Multiple Edits: trying to get this site to respect line/paragraph breaks, formatting. Does this thing have any formatting codes?

Comment author: thomblake 22 April 2009 07:22:48PM 1 point [-]

There's a "Help" link below / next to the comment box, and it respects much of the MarkDown standard. To put a single line break at the end of the line, just end the line with two spaces. Paragraph breaks are created by a blank line in-between lines of text.

Comment author: timtyler 15 April 2009 01:18:21AM 4 points [-]

For seeing someone's source code to act as a commitment mechanism, you have to be reasonably sure that what they show you really is their source code - and also that their source code is not going to be modified by another agent between when they show it to you, and when they get a chance to defect.

While it's possible to imagine these conditions being met, it seems non-trivial to imagine a society where they are met very frequently.

If agents face one-shot prisoner's dilemmas with each other very often, there are other ways to get them to cooperate - assuming that they have a communications channel. They could use public-key crypto to signal to each other that they are brothers - in a way that only a real brother would know how to do.

Signalling brotherhood is how our cells cooperate with each other. Cells can't use cryptography - so their signals can more easily be faked - but future agents will be in a better position there.
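A minimal sketch of that challenge-response idea (an editorial illustration, assuming the third-party `cryptography` package; the shared "group key" arrangement is one simple way to realize it, not the only one): every genuine brother holds the group's private signing key, and anyone can verify a signed challenge against the published public key.

```python
import os
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Issued once to every genuine "brother"; the matching public key is published.
group_private_key = Ed25519PrivateKey.generate()
group_public_key = group_private_key.public_key()

def prove_brotherhood(private_key, challenge: bytes) -> bytes:
    """A real brother signs the verifier's fresh challenge."""
    return private_key.sign(challenge)

def verify_brotherhood(public_key, challenge: bytes, signature: bytes) -> bool:
    """Anyone holding the group public key can check the proof."""
    try:
        public_key.verify(signature, challenge)
        return True
    except InvalidSignature:
        return False

challenge = os.urandom(32)  # a fresh nonce, so old signatures can't be replayed
print(verify_brotherhood(group_public_key, challenge,
                         prove_brotherhood(group_private_key, challenge)))  # True

imposter_key = Ed25519PrivateKey.generate()  # an outsider without the group key
print(verify_brotherhood(group_public_key, challenge,
                         imposter_key.sign(challenge)))  # False
```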

Comment author: blogospheroid 16 April 2009 08:09:29AM 3 points [-]

Voted up because dealing with uncooperative people is a necessary part of the art and war is the extreme of "uncooperative".

Comment author: Nanani 15 April 2009 01:03:13AM *  -2 points [-]

No. Just No.

A society of rational agents ought to reach the conclusion that they should WIN, and do so by any means necessary, yes? Then why not just nuke 'em? *

*replace 'nuke' with whatever technology is available; if our rationalist society has nanobots, we could modify them into something less harmful than barbarians.

Offer amnesty to barbarians willing to abandon their ways; make it as possible as we can for individual barbarians to defect to our side; but above all make sure the threat is removed. That's what constitutes winning.

Turning individual lottery-selected rationalists into "courageous soldiers" is not the way to do that. That's just another way of losing.

Furthermore, the process of selecting soldiers by lottery is a laughably bad heuristic. An army of random individuals, no matter how much courage they have, is going to be utterly slaughtered by an army whose members are young, strong, fast, healthy, and all those other attributes. If the lottery is not random but instead gives higher weight to the individuals best fit to fight, then it is not different from the draft decried above.

This is a terrible post, the first one so awful that I felt moved to step out of the lurkersphere and comment on LW.

Comment author: Lawliet 15 April 2009 01:19:01AM *  14 points [-]

Don't assume the rationalists have super powerful technology.

Comment author: Eliezer_Yudkowsky 15 April 2009 01:30:38AM 5 points [-]

Furthermore, the process of selecting soldiers by lottery is a laughably bad heuristic. An army of random individuals, no matter how much courage they have, is going to be utterly slaughtered by an army whose members are young, strong, fast, healthy, and all those other attributes. If the lottery is not random but instead gives higher weight to the individuals best fit to fight, then it is not different from the draft decried above.

Yeah, that's a more complex issue - coordination among agents with different risk-bearing efficiencies. If you have an agent known to be fair or sufficiently rigorous rules of reasoning that you can verify fairness, then it's possible for everyone to know that they're taking "equal risk" in the sense of being at-risk for being recruited as a teenager. (But is that the same sort of equal risk as being recruited if you have genes for combat effectiveness?)

A society of rationalists would work it out, but it might be more complicated. And as Lawliet observes, you shouldn't assume you've got nukes and the Soviets don't.

Comment author: Technologos 15 April 2009 04:48:18PM 4 points [-]

Perhaps there is a reason that America (and other nuclear powers, but America most recently) doesn't just nuke its enemies. If the enemy group were truly a barbarian horde, with no sympathy generated from the remainder of the world, then perhaps rationalists would find it easier to nuke them. But in any other circumstance (which is to say, the Least Convenient Possible World), the things you described above would be useful (amnesty etc.). We only nuke 'em when that produces the best long-term outcome, including the repercussions of the use itself--such as the willingness of other countries to use such weapons for less defensive purposes.

The draft is objectionable not because it selects for the best soldiers but because it is overused, if I read the original post correctly. Proper use of the lottery/draft is only for directly defending the security of the original state, rather than projecting the whims of kings onto the world.

Comment author: Epiphany 29 September 2012 02:11:58AM 1 point [-]

A life spent on something less valuable than itself is wasted, just as money is squandered on junk. If you want to respect the value of your life, you must spend it on something more valuable to you than you are. If you invest your life into something more valuable than you are, you are not throwing it away, you are ensuring that it is spent wisely.

People sacrifice their best years passing their genes on, knowing that the continuation of the species is more valuable than those years, and they fight in war because freeing themselves and future generations from oppression is more valuable than living a life in slavery.

Most rationalists would see that dying to continue the rational way of life is better than investing their lives into living like a Barbarian after being conquered.

Not to mention the fact that if the rationalists didn't fight (say they left the area, or surrendered), that would encourage the Barbarians to push them around. After the Barbarians plundered their village, they'd look for a new target and, knowing that rationalists run, the rationalists would be particularly appealing, so they'd be targeted again. Make yourself an easy target and the Barbarians may plunder you so often you can't survive in any case. Running away from that type of problem does not solve it.

Comment author: DanielLC 03 May 2013 06:14:09AM 0 points [-]

They might be rational egoists. They don't think anything is more valuable than themselves, since they are all they value.

Comment author: John_Maxwell_IV 15 April 2009 11:48:07PM 0 points [-]

You didn't mention in the Newcomb's Problem article that you're a one-boxer.

As a die-hard two-boxer, perhaps someone can explain one-boxing to me. Let's say that Box A contains money to save 3 lives (if Omega thinks you'll take it only) or nothing, and Box B contains money to save 2 lives. Conditional on this being the only game Omega will ever play with you, why the hell would you take Box A only?

I suspect what all you one-boxers are doing is that you somehow believe that a scenario like this one will actually occur, and you're trying to broadcast your intent to one-box so Omega will put money in for you.

Comment author: Larks 04 August 2009 12:47:50PM 1 point [-]

Imagine Omega's predictions have a 99.9% success rate, and then work out the expected gain for one-boxers vs two-boxers.

By stepping back from the issue and ignoring the 'can't change the contents now' issue, you can see that one-boxers do much better than two-boxers, so as we want to maximise our expected payoff, we should become one-boxers.

Not sure if I find this convincing.
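Working through the calculation Larks suggests (an editorial sketch, using the lives-saved payoffs from John_Maxwell_IV's framing above and an assumed 99.9% predictor accuracy):

```python
accuracy = 0.999   # assumed accuracy of Omega's prediction

# One-boxer: Box A (3 lives) is full with probability `accuracy`, empty otherwise.
ev_one_box = accuracy * 3 + (1 - accuracy) * 0

# Two-boxer: Box A is empty with probability `accuracy`, full otherwise;
# Box B's 2 lives are collected either way.
ev_two_box = accuracy * 2 + (1 - accuracy) * (3 + 2)

print(ev_one_box, ev_two_box)  # 2.997 vs. 2.003 -- one-boxers do better in expectation
```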

Comment author: John_Maxwell_IV 05 August 2009 05:21:00PM 5 points [-]

I posted that comment four or five months ago. I'm a one-boxer now, haha. Figure that you can either choose to always one-box or choose to pretend like you're going to one-box but actually two-box. Omega is assumed to be able to tell the difference, so the first option makes more sense.

Comment author: William 16 April 2009 08:53:12PM 0 points [-]

I can choose through the composition of my mind to save 3 lives by wanting to refuse to take the money to save 2 lives. Or I can choose to save the two lives and thus not get 3 lives. Why the hell would I take both boxes?

Comment author: John_Maxwell_IV 17 April 2009 05:19:49PM 0 points [-]

I guess that makes sense. If you have the option of choosing what the composition of your mind is.

Comment author: William 19 April 2009 03:27:44AM 0 points [-]

"Composition of my mind" is a bad phrase for it, but what I mean is that I have a collection of neurons that say "I'm a one-boxer" or similar.

Comment author: PhilGoetz 16 April 2009 03:25:18AM 0 points [-]

You can find several long discussions of this on Overcoming Bias, and in earlier posts on Less Wrong.

Comment author: taw 15 April 2009 05:01:07AM -1 points [-]

I'm wondering why you wrote this article. (gensym) you're describing and assigning the name "war" has virtually nothing to do with any real world "war" situations, so you could as well describe it as a thought experiment, or use some less loaded metaphor.

Too high connotation to denotation ratio for me.

Comment author: JulianMorrison 15 April 2009 12:19:41PM 1 point [-]

virtually nothing to do with any real world "war" situations

Why do you say that?

Comment author: orthonormal 15 April 2009 11:16:18PM 0 points [-]

I think the point of this post is simply to confront head-on a scenario that's taken to be a reductio ad absurdum of "rationalism shouldn't lose to irrationalism". I don't see this as intended for practical purposes under conceivable circumstances.

Comment author: [deleted] 07 August 2012 07:10:30AM 0 points [-]

I found this post very disturbing, so I thought for a bit about why. It reads very much like some kind of SF dystopia, and indeed if it were necessary to agree to this lottery to be part of the hypothetical rationalist community/country, then I wouldn't wish to be a part of it. One of my core values is liberty - that means the ability of each individual to make his or her own decisions and live his or her life accordingly (so long as it's not impeding anyone else's right to do the same). No government should have the right to compel its citizens to become soldiers, and that's what it would become, after the first generation, unless you're going to choose to exile anyone who reaches adulthood there and then opts out.

Offering financial incentives for becoming a soldier, as has already been discussed in the comments, seems a fairer idea. Consider also that the more objectively evil the Evil Barbarians are, the more people will independently decide that fighting is the better decision. If not enough people support your war, maybe that in itself is a sign that it's not a good idea. If most of the rationalists would rather lose than fight, that tells you something.

It's quite difficult to know the right tone of response to take here - the Evil Barbarians are obviously pure thought-experiment, but presumably most of us would view a rationalist country as a good thing. Not if it made decisions like this, though. Sacrificing the individual for the collective isn't always irrational, but it needs to be the individual who makes that choice based on his or her own values, not due to some perceived social contract. Otherwise you might as well be sacrificed to make more paperclips.

If it was intended as pure metaphor, it's a disquieting one.

Comment author: [deleted] 07 August 2012 10:03:42AM 1 point [-]

Oh, my first downvote. Interesting. Bad Leisha, you've violated some community norm or other. But given that I'm new here and still trying to determine whether or not this community is a good fit for me, I'm curious about the specifics. I wonder what I did wrong.

Necroposting? Disagreeing with the OP? Taking the OP too literally and engaging with the scenario? Talking about my emotional response or personal values? The fact that I do value individual liberty over the collective? Some flaw in my chain of reasoning? (possible, but if so, why not point it out directly so that I can respond to the criticism?)

Note: This post is a concerted rational effort to overcome the cached thought 'oh no, someone at LW doesn't like what I wrote :( ' and should be taken in that spirit.

Comment author: RichardKennaway 07 August 2012 10:42:02AM *  8 points [-]

Oh, my first downvote. Interesting. Bad Leisha, you've violated some community norm or other. But given that I'm new here and still trying to determine whether or not this community is a good fit for me, I'm curious about the specifics. I wonder what I did wrong.

A single downvote is not an expression of a community norm. It is an expression by a single person that there was something, and it could be pretty much anything, about your post that that one person did not like. I wouldn't worry until a post gets to -5 or so, and -1 isn't very predictive that it will.

Note: This post is a concerted rational effort to overcome the cached thought 'oh no, someone at LW doesn't like what I wrote :( ' and should be taken in that spirit.

The "someone at LW doesn't like what I wrote" part is accurate. You don't need the "oh no" and ":(" parts. Just because someone disagrees with you, doesn't mean that you are wrong.

Personally (and I did not vote on your post either way), I don't think you are quite engaging with the problem posed, which is that each of these hypothetical rationalists would rather win without being in the army themselves than win with being in the army, but would much prefer either of those to losing the war. Straw-man rationality, which Eliezer has spent many words opposing, including these ones, would have each rationalist decline to join up, leaving the dirty work to others. The others do the same, and they all lose the war. The surviving rationalists under occupation by barbarians then get to moan that they were too smart to win. But rationality that consistently loses is not worth the name. It is up to rationalists to find a way to organise collective actions that require a large number of participants for any chance of success, but which everyone would rather leave to everyone else.

Some possible ways look like freely surrendering, for a while, some of one's freedom. A general principle that Freedom is Good has little to say about such situations.
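A toy numerical illustration of that free-rider structure (an editorial sketch; the payoff numbers and threshold are hypothetical): each citizen would rather win as a civilian than win as a soldier, but the war is won only if enough citizens enlist.

```python
NEEDED = 600   # soldiers required to win the war (hypothetical threshold)

def payoff(enlisted: bool, total_enlisted: int) -> float:
    win = total_enlisted >= NEEDED
    if win:
        return 10.0 if not enlisted else 8.0  # winning as a civilian beats winning as a soldier
    return 0.0                                # losing is far worse for everyone

# If each citizen reasons "my single enlistment almost never changes the outcome,
# so I'd rather stay a civilian", nobody enlists and everyone gets the worst payoff.
print(payoff(False, 0))       # 0.0 -- universal defection loses the war
print(payoff(True, NEEDED))   # 8.0 -- universal precommitment to serve (e.g. via lottery) wins it
```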

Comment author: Wei_Dai 07 August 2012 07:07:42PM 4 points [-]

A single downvote is not an expression of a community norm. It is an expression by a single person that there was something, and it could be pretty much anything, about your post that that one person did not like.

It's not just one person though. Having -1 points also means that nobody else thought it deserves more than that, or at least it's not worth their effort to vote it back up to 0. So if you have reason to think the comment has been read by more than a few people after it was downvoted, even -1 points does reflect the community judgement to some extent.

Comment author: Vaniver 07 August 2012 07:08:42PM 1 point [-]

Indeed, my quality threshold to upvote comments at -1 is much lower than my quality threshold to upvote comments at 0.

Comment author: metaphysicist 07 August 2012 08:00:07PM 0 points [-]

What function describes your threshold as the negative values go below -1?

Comment author: Vaniver 07 August 2012 09:06:33PM 0 points [-]

Generally, the only types of comments that are below -3 that I upvote are ones which I think add a perspective to the conversation which should be there but should have a different proponent. It's rare that I find a comment at less than -3 which I would fully endorse (but I have my settings set to display all comments).

Comment author: [deleted] 07 August 2012 12:34:07PM 1 point [-]

Thank you for your response! It does help to be able to discuss these things, even if it seems a little meta.

A single downvote is not an expression of a community norm.

Point taken.

The "someone at LW doesn't like what I wrote" part is accurate. You don't need the "oh no" and ":(" parts.

Sure, I don't need them. I included them as evidence of the type of flawed thinking I'm trying to get away from (If you're familiar with Myers-Briggs, I'm an F-type trying to strengthen her T-function. It doesn't come naturally).

Personally (and I did not vote on your post either way), I don't think you are quite engaging with the problem posed...

You're right. I noted that problem, but evaluated it as being less significant than the specifics of the extended example, which struck me as both morally suspect and, in a sense, odd: it didn't seem to fit with the tone of most of the other posts I've read here. See my reply to dbc for more on that.

It is up to rationalists to find a way to organise collective actions that require a large number of participants for any chance of success, but which everyone would rather leave to everyone else.

I agree. I'd add that those actions need to be collectively decided, but I agree with the principle.

Comment author: robertskmiles 07 August 2012 09:48:18PM *  1 point [-]

One of my core values is liberty - that means the ability of each individual to make his or her own decisions and live his or her life accordingly

A very sensible value in a heterogenous society, I think. But in this hypothetical nation, everyone is a very good rationalist. So they all, when they shut up and multiply, agree that being a soldier and winning the war is preferable to any outcome involving losing the war, and they all agree that the best thing to do as a group is to have a lottery, and so they all precommit to accepting the results.

No point in giving people the liberty to make their own individual decisions when everyone comes to the same decision anyway. Or more accurately, the society is fully respecting everyone's individual autonomy, but due to the very unlikely nature of the nation, the effect ends up being one of 100% compliance anyway.

Comment author: dbc 07 August 2012 09:53:15AM 1 point [-]

One of my core values is liberty - that means the ability of each individual to make his or her own decisions and live his or her life accordingly (so long as it's not impeding anyone else's right to do the same)

How do you feel about desertion?

Comment author: [deleted] 07 August 2012 10:12:26AM 0 points [-]

It's psychologically understandable, but morally wrong, provided the deserter entered into an uncoerced agreement with the organization he or she is deserting. If you know the terms before you sign up, you shouldn't renege on them.

In cases of coercion or force (e.g. the draft) desertion is quite justified.

Comment author: dbc 07 August 2012 10:30:08AM *  0 points [-]

The topic of this article is how rational agents should solve a particular tragedy of the commons. Certainly, a common moral code is one solution to this problem: an army will have no deserters if each soldier morally refuses to desert. I don't want to put words in your mouth, but you seem to think that common morality is the best, or perhaps only solution.

I think Eliezer is more interested in situations where this solution is impractical. Perhaps the rationalists are a society composed of people with vastly differing moral codes, but even in this case, they should still be capable of agreeing to coordinate, even if that means giving up things that they individually value.

Comment author: [deleted] 07 August 2012 12:22:06PM *  0 points [-]

Yes, I see a common moral framework as a better solution, and I would also assert that a group needs at least a rudimentary version of such a framework in order to maintain cohesion. I assumed that was the case here.

The rational solution to the tragedy of the commons is indeed worth discussing. However, in this case the principle behind the parable was obscured due to its rather objectionable content. I focused on the specifics as they remained more fixed in my mind after reading than the underlying principle. A less controversial example such as advertising or over-grazing would have prevented that outcome.

I know that's a personal preference, though, and it seems to be a habit of Eliezer's to choose extreme examples on occasion - I ran into the same problem with Three Worlds Collide. It's an aspect of his otherwise very valuable writing that I find detracts from, rather than illuminates, the points he's making. I recognize that others may disagree.

With that in mind, I'm happy to close this line of discussion on the grounds that it's veering off-topic for this thread.

Comment author: DanielLC 03 May 2013 06:17:55AM 0 points [-]

In the least convenient possible world, in which winning like this and losing are the only options, does losing the war to barbarian invaders really bring more liberty than being drafted into a war?

Comment author: AndySimpson 15 April 2009 11:39:37AM *  0 points [-]

This is a thoughtful, thorough analysis of some of the inherent problems with organizing rational, self-directing individuals into a communal fighting force. What I don't understand is why you view it as a special problem that needs a special consideration.

Society is an agreement among a group of people to cooperate in areas of common concern. The society as one body defends the personal safety and livelihood of its component individuals and it furnishes them with certain guarantees of livability and fair play. In exchange, the component individuals pledge to defend the integrity of the society and contribute to it with their labor and ingenuity. This happens and it works because Pareto improvements are best achieved through long-term schemes of cooperation rather than one-off interactions. The obligation to collective defense, then, happens at the moment of social contract and it needs no elaboration. Even glancingly rational people in pseudo-rational societies recognize this on some level, and when society is threatened, they will go to its defense. So, there is no real incentive to defect against society when there is a draft to fight an existential threat because the gains of draft-dodging are greatly outweighed by the risk of the fall of civilization.

I think you go too far in saying that modern drafts are "a tool of kings playing games in need of toy soldiers." The model of the draft can be abused, as it was in the US during the Vietnam War, where there was no existential threat and draft-dodging was the smart move, but it worked remarkably well during World War II when a truly threatening horde of barbarians did emerge.

Along these lines, why is it that a lottery and chemical courage "is the general policy that gives us the highest expectation of survival?" Why couldn't we do the job with traditional selective-service optimization for fitness, intelligence, and psychological stability, coupled with the perfectly rational understanding that risking life in combat is better than guaranteeing societal collapse by running from battle?

Reading through your post, especially your suggestions for a coordinated response, I found myself thinking about the absurd spectacle of the Army of Mars in Kurt Vonnegut's Sirens of Titan. New soldiers could get any kind of ice cream they wanted, right after their memories were wiped and implants were installed to beam the persistent "rent, rent, rented-a-tent" of a snare drum to their mind whenever they were made to march in formation. Somehow I don't think Vonnegut was suggesting an improvement.

Comment author: matt 15 April 2009 10:47:58PM 3 points [-]

"social contract" [shudders], I don't remember signing that one.

A "social contract" binding individuals to make self-sacrificing decisions doesn't seem necessary for a healthy civilization. See David D. Friedman's Machinery of Freedom for details; for a very (very) brief sketch consider that truck drivers rationally risk death on the roads for pay and that mercenaries face a higher risk of death for more pay - and that merchants will pay both truck drivers and soldiers for their services.

Soldiery doesn't have to be a special case requiring different rational rules.

Comment author: AndySimpson 16 April 2009 07:51:13AM 0 points [-]

What army of free-market mercenaries could seriously hope to drive the modern US Armed Forces, augmented by a draft, to capitulation? Perhaps more relevantly, what army of free-market mercenaries could overcome the fanatical, disciplined mass of barbarians?

What I'm inferring from your comment is that a rational society could defend itself using market mechanisms, not central organization, if the need ever arose. Those mechanisms of the market might do well in supplying soldiers to meet a demand for defense, but I'm skeptical of the ability of the blind market to plan a grand strategy or defeat the enemy in battle. It's also very difficult to take one's business elsewhere when you're hiring men with guns to stop an existential threat and they don't do a good job of it. In order to defend a society, first there must be understanding that there is a society and that it's worth defending.

Comment author: mattnewport 16 April 2009 08:01:05AM 1 point [-]

Those mechanisms of the market might do well in supplying soldiers to meet a demand for defense, but I'm skeptical of the ability of the blind market to plan a grand strategy or defeat the enemy in battle.

Plenty of private corporations seem to do quite well at grand strategy and defeating enemies in market competition. It doesn't seem a huge stretch to imagine them achieving similar success in battle. Much of military success comes down to logistics and I think a reasonable case can be made that private corporations already demonstrate greater competence in that area than most government enterprises.

Comment author: PhilGoetz 15 April 2009 07:47:57PM *  -1 points [-]

The example of Athens vs. Sparta is our best datapoint: It pitted the ancient world's most rational society against the ancient world's greatest warrior society, well-controlled in terms of wealth, technology, geography, and genetics. Their war was evenly matched, but Sparta won in the end.

Sparta was 1/3 the size of Athens+Attica (100,000 vs. 300,000), with only 1/5 as many citizens (8,000 vs 40,000).

Comment author: matt 15 April 2009 10:08:20PM *  5 points [-]

Not a very good data point. Athens at that time was not a community of rationalists. Xenophon's March of the Ten Thousand or Thucydides' History of the Peloponnesian War are both fairly readable classical sources for the extreme stupidity (even by modern democratic standards) of the Athenian democratic process. And their army voted for autocratic undercommanders who then had life and death power over the troops. The distant Athenian democracy voted for autocratic overcommanders.

Comment author: PhilGoetz 15 April 2009 11:45:03PM *  3 points [-]

I didn't say it was very good. I said it was our best. The world has never had a country of rationalists.

If the point of the original post was that a society of inhumanly-rational people can win wars, then it's of limited applicability at present. I'm assuming that we're talking about IQ 100 Bayesians. (Which may be an empty set.)

Comment author: Squark 10 May 2013 09:27:11PM 1 point [-]

I don't understand the assumption that each rationalist prefers to be a civilian while someone else risks her life. They can be rational and use a completely altruistic utility function that values all people equally a priori. The strongest rationalist society is the rationalist society where everyone has the same terminal values (in an absolute rather than relative sense).

Comment author: Qiaochu_Yuan 10 May 2013 09:45:44PM *  0 points [-]

That isn't an assumption Eliezer is making, it's an assumption he's attacking.

Comment author: Squark 11 May 2013 07:04:19PM 0 points [-]

It doesn't look like it:

Imagine a community of self-modifying AIs who collectively prefer fighting to surrender, but individually prefer being a civilian to fighting. One solution is to run a lottery, unpredictable to any agent, to select warriors. Before the lottery is run, all the AIs change their code, in advance, so that if selected they will fight as a warrior in the most communally efficient possible way—even if it means calmly marching into their own death...

You can have lotteries for who gets elected as a warrior. Sort of like the example above with AIs changing their own code. Except that if "be reflectively consistent; do that which you would precommit to do" is not sufficient motivation for humans to obey the lottery, then...

...well, in advance of the lottery actually running, we can perhaps all agree that it is a good idea to give the selectees drugs that will induce extra courage, and shoot them if they run away. Even considering that we ourselves might be selected in the lottery. Because in advance of the lottery, this is the general policy that gives us the highest expectation of survival.

Eliezer is analyzing the situation as a Prisoner's Dilemma: different players have different utility functions. This analysis would be completely redundant in a society where everyone has the same utility function (or at least sufficiently similar / non-egocentric utility functions). In such a society there wouldn't be a need for a lottery: the soldiers would be those most skilled for the job. There would be no need for drugs / shooting deserters: the soldiers would want to fight because the choice to fight would be associated with positive expected utility (even if it means a high likelihood of death).

Comment author: mattnewport 15 April 2009 06:15:04AM 1 point [-]

Perhaps slightly off topic, but I'm skeptical of the idea that two AIs having access to each other's source code is in general likely to be a particularly strong commitment mechanism. I find it much easier to imagine how this could be gamed than how it could be trustworthy.

Is it just intended as a rhetorical device to symbolize the idea of a very reliable pre-commitment signal (in which case perhaps there are better choices because it doesn't succeed at that for me, and I imagine would raise doubts for most people with much programming experience) or is it supposed to be accepted as highly likely to be a very reliable commitment signal (in which case I'd like to see the reasoning expanded upon)?

Comment author: PhilGoetz 16 April 2009 03:24:10AM 0 points [-]

But reversed stupidity is not intelligence. And reversed evil is not intelligence either. It remains true that real wars cannot be won by refined politeness. If "rationalists" can't prepare themselves for that mental shock, the Barbarians really will win; and the "rationalists"... I don't want to say, "deserve to lose". But they will have failed that test of their society's existence.

Are you assuming that niceness (not torturing people, not killing civilians) is correlated with rationality?

Comment author: Larks 04 August 2009 12:35:53PM 1 point [-]

To the extent that we all have common values, rationality should correlate to achieving those values: so if niceness is a general value, a rationalist community should be nice (or gain enough of another value to make up for the loss).

If niceness is not a reasonably-universal value, empirically our understanding of niceness seems to correlate with rationality.

Comment author: loqi 15 April 2009 01:13:01AM 0 points [-]

there's nothing in the rules of probability theory or decision theory that says you can't sacrifice yourself for a cause

Indeed. I wonder what the "expected utility of future selves" crew makes of this.

Comment author: Will_Sawin 20 June 2010 01:35:18AM -1 points [-]

I know this post is long, long dead but:

if they have common knowledge of each other's source code.

Isn't this a logical impossibility? To have knowledge is to contain it in your source code, so A is contained in B, and B is contained in A...

Alternatively, I'm considering all the strategies I could use, based on looking at my opponent's strategy, and one of them is "Cooperate only if the opponent, when playing against himself, would defect."

"Common knowledge of each other's rationality" doesn't seem to help. Knowing I use TDT doesn't give someone the ability to make the same computation I do, and so engage TDT. They have to actually look into my brain, which means they need a bigger brain, which means I can't look into their brain. If I meet one of your perfectly rational agents who cooperates on true prisoners dilemma, I'm going to defect. And win. Rationalists should win.

Comment author: wedrifid 20 June 2010 02:55:31AM 5 points [-]

Knowing I use TDT doesn't give someone the ability to make the same computation I do, and so engage TDT.

It is possible to predict the output of a system without emulating the system. We can use the idea of 'emulating their behavior' if it helps as an intuition pump, but to assume that it is required is a mistake.

If I meet one of your perfectly rational agents who cooperates on true prisoners dilemma, I'm going to defect. And win.

Why on earth would I cooperate with you? You just told me you were going to defect!

(But I do respect your grappling with the problem. It is NOT trivial. Well, I should say it is trivial but it is hard to get your head around it, particularly with our existing intuitions.)

Comment author: DanielLC 03 May 2013 06:21:15AM *  3 points [-]

A = "Preceded by it's own quotation with A's and B's swapped is B's source code" preceded by it's own quotation with A's and B's swapped is B's source code. B = "Preceded by it's own quotation with B's and A's swapped is A's source code" preceded by it's own quotation with B's and A's swapped is A's source code.

A and B each now contain the other's source code.

Edit: I used "followed" when it should have been "preceded".

Comment author: wedrifid 20 June 2010 02:49:28AM 3 points [-]

Isn't this a logical impossibility? To have knowledge is to contain it in your source code, so A is contained in B, and B is contained in A...

No. If you know all relevant data yourself you don't have to know it again just because B knows it. That is just a naive, inefficient way to implement the 'source code'. Call the code 'DRY' for example. Or consider it an instruction to do a 'shallow copy' and a 'memory free' after getting a positive result for a 'deep compare'.

Comment author: Qiaochu_Yuan 03 May 2013 06:45:51AM *  2 points [-]

Isn't this a logical impossibility? To have knowledge is to contain it in your source code, so A is contained in B, and B is contained in A...

The idea is that A and B are passed each other's source code as input (and know their own source code thanks to Kleene's recursion theorem, which guarantees that Turing machines have access to their own source code WLOG, and which I think DanielLC's comment proves). There's no reason you can't do this, although you won't be able to deduce whether your opponent halts and so forth.

Alternatively, I'm considering all the strategies I could use, based on looking at my opponent's strategy, and one of them is "Cooperate only if the opponent, when playing against himself, would defect."

Your opponent might not halt when given himself as input.
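A minimal sketch of agents exchanging source code (an editorial illustration; the agent names and the "cooperate only with an exact copy of myself" rule are hypothetical simplifications, not anyone's actual decision theory):

```python
import inspect

def clique_bot(my_source: str, their_source: str) -> str:
    """Cooperate only if the opponent's source is literally identical to mine."""
    return "C" if their_source == my_source else "D"

def defect_bot(my_source: str, their_source: str) -> str:
    """Always defect, regardless of the opponent's source."""
    return "D"

def play(agent_a, agent_b):
    """Hand each agent its own source and its opponent's, then collect both moves."""
    src_a, src_b = inspect.getsource(agent_a), inspect.getsource(agent_b)
    return agent_a(src_a, src_b), agent_b(src_b, src_a)

print(play(clique_bot, clique_bot))  # ('C', 'C') -- mutual cooperation
print(play(clique_bot, defect_bot))  # ('D', 'D') -- no exploitation by a defector
```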

Comment author: MinibearRex 27 March 2011 02:16:48PM 1 point [-]

If I meet one of your perfectly rational agents who cooperates on true prisoners dilemma, I'm going to defect. And win. Rationalists should win.

The problem with your plan is that TDT agents don't always cooperate. I will only cooperate if I have reason to believe that you and I are similar enough that we will decide to do the same thing for the same reasons. I hate to burst your bubble, but you are not the first person in all of recorded history to think of this. Other people are allowed to be smart too. If you come up with a clever reason to defect when playing against me, it is very possible (perhaps even likely, although I don't know you all that well) that I will think of it too.

Comment author: Indon 16 May 2013 10:22:29PM *  0 points [-]

I think that's an understatement of the potential danger of rationality in war. Not for the rationalist, mind, but for the enemy of the rationalist.

Most rationality, as elaborated on this site, isn't about impassively choosing to be a civilian or a soldier. It's about becoming less vulnerable to flaws in thinking.

And war isn't just about being shot or not shot with bullets. It's about being destroyed or not destroyed, through the exploitation of weaknesses. And a great deal of rationality, on this very site, is about how to not be destroyed by our inherent weaknesses.

A rationalist, aware of these vulnerabilities and wishing to destroy a non-rationalist, can directly apply their rationality to produce weapons that exploit the weaknesses of a non-rationalist. Their propaganda, to a non-rationalist, can be dangerous, and the techniques used to craft it nigh-undetectable to the untrained eye. Weapons the enemy doesn't even know are weapons, until long after they begin murdering themselves because of those weapons.

An easy example would be to start an underground, pacifistic religion in the Barbarian nation. Since the barbarians shoot everyone discovered to profess it, every effort to propagate the faith is directly equivalent to killing the enemy (not just that, but even efforts to promote paranoia about the faith also weaken enemy capability!). And what defense do they have, save for other non-rationalist techniques that dark side rationality is empowered to destroy through clever arguments, created through superior understanding?

And we don't have to wait for a Perfect Future Rationalist to get those things either. We have those weapons right now.

Comment author: JulianMorrison 15 April 2009 11:39:36AM -1 points [-]

I'm reminded of the Iain M. Banks Culture in its peaceful and militant modes.

It would be really interesting to brainstorm how to improve a military. The conventional structure is more-or-less an evolved artifact, and it has the usual features of inefficiency (the brainpower of the low ranks is almost entirely wasted) and emergent cleverness (resilience to org-chart damage and exploitation of the quirks of human nature to create more effective soldiers). Intelligent design ought to be able to do better.

Here's one to get it started: how about copying the nerve structure in humans and having separate, parallel afferent and efferent ranks? That is, a chain of command going down, and a chain of analysis going up.

Comment author: ChrisHibbert 15 April 2009 05:39:02PM 3 points [-]

I think there's more contribution from the bottom up in a modern, well-functioning military than you realize. One of the obstacles the US military's trainers face when teaching in other countries is getting officers to listen to their subordinates. In small units, successful leaders listen to their troops; in larger units, officers listen to their subordinates.

But in all those cases, there comes a time when the leader is giving orders, and at that point, the subordinates are trained to follow. The system doesn't work if it doesn't insist that the leader gets to decide when it is time to give orders.

But effectiveness comes from leaders who listen since, as you said, there are many more sensors at the edges of the org chart. The Culture is good at many things, but Banks doesn't show small-unit operations in which the leader gains by listening.

Comment author: JulianMorrison 15 April 2009 07:35:52PM 0 points [-]

Ah, you miss what I was aiming at. The "sensory" ranks don't give orders. They're an upward ideas pump. Rank in the two modes is orthogonal. The "motor" ranks command as normal. High ranks in both listen, but to different things. The "motor" leader wants to know where the enemy are and if the men have a bright tactical idea. The "sensory" collator might be more interested in a clever strategic analysis, a way to shorten the supply chain, or a design for better field camouflage.

Comment author: ChrisHibbert 16 April 2009 06:37:36PM 2 points [-]

If I understand you, I think that's part of what is supposed to happen, though the communication is more lateral than I said at first. In addition to ideas going from the troops to their sergeants and from squad leaders to their commanders, new innovations spread from squad-to-squad.

After D-Day, the tactics required to get through narrow lanes surrounded by hedgerows were developed by individual tank teams, and tank groups picked up successful ideas from each other. In Iraq, methods for detecting ambushes and IEDs weren't developed at headquarters and promulgated from the top down; they arose as the result of experiment and spread virally.

There may be an advantage to having specialists who are looking for that kind of idea and for ways of spreading it, but I'd go with the modern management practice of empowering everyone and encouraging innovation by everyone who is in contact with the enemy. In business, it's good for morale, and in most arenas it multiplies the number of brains trying to solve problems and trying to steal good ideas.