Lumifer comments on Thoughts on minimizing designer baby drama - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Comments (194)
No. You can force people to do something you want. That's not cooperation at all, that's just plain-vanilla coercion.
I'm using the word "cooperate" in the technical sense of "cooperate in a prisoner's dilemma". In this sense it's possible for an outside force to coerce cooperation, in the same way that e.g. the government forces your neighbor to cooperate rather than defect and steal your stuff, or anti-doping agencies force athletes to cooperate in the prisoner's dilemma of whether to use performance-enhancing drugs.
For the technical sense of "cooperate in a prisoner's dilemma" you need to have a prisoner's dilemma situation to start with. Once you coerce cooperation you have effectively changed the payoffs in the matrix -- the "defect" cell now has a huge negative number in it, that's what coercion means. It's not a prisoner's dilemma any more.
Huh? Why do you think I'm in a prisoner's dilemma situation with my neighbour?
If you make your child taller, your child is better off (+competitive advantages, -other disadvantages of being taller) and your neighbor's child is worse off (-competitive advantages).
If your neighbor makes his child taller, his child is better off and yours is worse off.
If you both make your children taller, the competitive advantages cancel out and you each have only the disadvantages.
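The payoff structure described above can be sketched as a small game matrix. The numbers below are illustrative assumptions, not anything from the thread; only their ordering matters for the situation to qualify as a prisoner's dilemma (temptation > reward > punishment > sucker's payoff for each player):

```python
# Illustrative payoff matrix for the height "arms race" described above.
# Strategies: "E" = engineer your child taller, "N" = don't.
# Payoff = competitive advantage minus the intrinsic costs of extra height.
# All numbers are made-up assumptions chosen to match the described ordering.
payoffs = {
    # (you, neighbor): (your payoff, neighbor's payoff)
    ("N", "N"): (0, 0),    # status quo
    ("E", "N"): (2, -3),   # you gain the edge (+3) at some height cost (-1)
    ("N", "E"): (-3, 2),
    ("E", "E"): (-1, -1),  # edges cancel, both pay the height cost
}

def best_response(opponent_move):
    """Return the move that maximizes your payoff given the opponent's move."""
    return max("EN", key=lambda m: payoffs[(m, opponent_move)][0])

# Whatever the neighbor does, engineering is the dominant move...
assert best_response("N") == "E" and best_response("E") == "E"
# ...yet mutual engineering leaves both worse off than mutual restraint.
assert payoffs[("E", "E")][0] < payoffs[("N", "N")][0]
```

With these (assumed) numbers the dominant strategy for each parent is to engineer, and the dominated-but-better outcome is mutual restraint, which is exactly the prisoner's dilemma shape being claimed.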
Being tall is not a disadvantage even if you take away the "competitive advantages" (normally tall, not freakishly tall). An arms race is a different situation than a prisoner's dilemma.
The original claim was that the neighbor might "steal your stuff" which isn't a prisoner's dilemma either.
And most importantly, I do have neighbors. I don't feel I am in a prisoner's dilemma situation with them and I suspect they don't feel it either.
Because the government altered the payoff matrix making cooperation individually preferable to defection.
Imagine you were a hunter-gatherer: within your tribe, a system of reputation and customs, with associated punishments for defectors, tended to enforce cooperation. But different tribes occupying neighboring areas typically recognized no social obligations towards each other, and as a result encounters were tense and very often violent; warfare and marauding were endemic.
With a modern government you can interact with most strangers from your country or most other countries with a reasonable expectation that the interaction will be peaceful and productive.
It wasn't a prisoner's dilemma to start with. Hunter-gatherers do not live in a constant prisoner's dilemma situation.
I don't get LW's obsession with the prisoner's dilemma. It's a very specific kind of situation, rare in normal life. Having a choice between cooperation and non-cooperation does not automatically mean you're in a prisoner's dilemma.
Hunter A steals Hunter B's kills/wives/whatever. Defection pays off. Cooperation always pays more overall; defection pays the defector better. "Government" in this case is tribal: we'll kill or exile defectors. (Exile is probably the genetically preferable option, since it may result in some of your genes being spread to other tribes, assuming you share more genes with in-tribe than with out-tribe individuals; a prisoner's dilemma in itself.)
Pretty much every situation in real life involves some variant on the prisoner's dilemma, almost always with etiquette, ethical, or legal prohibitions against defection.
Nonsense. First, cooperation does not always pay more, and second, the whole point of the prisoner's dilemma is that mutual cooperation pays each agent better than mutual defection. "Overall" is a very nebulous concept, anyway, unless you take the hard utilitarian position and start adding up utils.
If cooperation were that beneficial, unconditional cooperation would have been hardwired in our genes.
Nope, I strongly disagree. To take a trivial example, Alice doesn't steal Bob's car because she thinks she'll be caught and sent to prison. Alice is NOT "cooperating" with Bob, she is reacting to incentives (in this case, threat of imprisonment) which have nothing to do with the prisoner's dilemma.
"Overall" means "Combining the utility-analog of both parties", not "More utility-analog for a given party". With only one hunter, there are fewer kills/less meat overall, at the least.
The incentives are the product of breaking the prisoner's dilemma -- the "government altered the payoff matrix" and all that. Etiquette, ethics, and law are increasingly strict levels of rules, with punishments attached, whose core purpose is to alter the payoffs for defection: from something as subtle as the placement of utensils at a dinner table to preclude subtle threats to other guests, with less desirable seating as punishment for falling short of standards of etiquette, all the way to shooting somebody who escalates a police situation one time too many in an attempt to escape punishment.
Chicken comes up fairly often and there mutual defection is by far the worst outcome for either party (i.e. if you knew the other guy wanted to defect, you'd cooperate).
In an even simpler case, if you are a business, trying to cooperate instead of "defecting" will get you charged with anti-trust violations.
True. But challenging somebody to a Chicken-like game in the first place can be modeled as a Defection in a prisoner's dilemma: you win if they Cooperate and refuse the challenge, and both of you are worse off if they also Defect and agree to the game.
Prisoner's dilemma is the simplest idealized form of all scenarios where a group of agents prefer that everyone cooperates rather than everyone defecting against everyone else, but where each individual agent, whatever the other agents do, has an incentive to defect.
There are other common types of scenarios, of course: in zero-sum scenarios cooperation is not possible: a hunter and their prey can't cooperate to split calories between each other in a way that benefits both.
In other scenarios, cooperation is trivially the best choice: if Alice and Bob want to move a heavy object from point A to point B and neither is strong enough to move it alone, but they can move it with their combined strength, then they have an incentive to cooperate, and neither has an incentive to defect, since if either of them defects the heavy object doesn't reach point B.
These scenarios are trivial from a game-theoretical perspective. The simplest and arguably the most practically relevant scenario where coordination is beneficial but can't be trivially achieved is the prisoner's dilemma.
Stag hunts (which are not the same as the hunter/prey scenarios discussed elsewhere in this thread) are another theoretically nontrivial category of coordination games with interesting social/behavioral implications -- arguably more than the prisoner's dilemma, though that probably depends on what kind of life you happen to find yourself in. I don't know why they don't get much exposure on LW, but it might have something to do with the fact that they don't have the PD's historical links to AI.
I agree that Stag hunt is theoretically and practically interesting, but I would say that it is not as interesting as the Prisoner's dilemma.
In order to "solve" a Stag hunt (in the sense of realizing the Pareto-optimal outcome), all you need is a communication channel between the players; even a one-shot, one-way channel suffices.
In a Prisoner's dilemma, communication is not enough, you need either to iterate the game or to modify the payoff matrix.
There are other games that have significant practical applicability, such as Chicken/Volunteer's dilemma and Ultimatum.
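The Stag hunt vs. Prisoner's dilemma point above can be checked mechanically: an agreement to cooperate is self-enforcing exactly when mutual cooperation is a Nash equilibrium, which holds in the Stag hunt but not in the PD. The payoff numbers below are illustrative assumptions; only their ordering matters:

```python
# Check whether mutual cooperation is a Nash equilibrium, i.e. whether an
# agreement to cooperate, once communicated, is self-enforcing.
# (row move, col move) -> (row payoff, col payoff); moves: "C", "D".
# Numbers are illustrative; only the ordering matters.

stag_hunt = {  # Stag = C, Hare = D
    ("C", "C"): (4, 4), ("C", "D"): (0, 2),
    ("D", "C"): (2, 0), ("D", "D"): (2, 2),
}
prisoners_dilemma = {
    ("C", "C"): (3, 3), ("C", "D"): (0, 5),
    ("D", "C"): (5, 0), ("D", "D"): (1, 1),
}

def cooperation_is_self_enforcing(game):
    """True iff neither player gains by unilaterally defecting from (C, C)."""
    row_ok = game[("C", "C")][0] >= game[("D", "C")][0]
    col_ok = game[("C", "C")][1] >= game[("C", "D")][1]
    return row_ok and col_ok

assert cooperation_is_self_enforcing(stag_hunt)              # talk is enough
assert not cooperation_is_self_enforcing(prisoners_dilemma)  # it isn't
```

This is why communication alone solves the Stag hunt but not the PD: in the PD, defecting from an agreed (C, C) still pays, so the agreement needs iteration or external payoff changes to hold.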
I'm not aware of these links, do you have a reference?
I understand that the prisoner's dilemma is interesting and non-trivial from the game-theoretic perspective. That does not contradict my point that it's rare in normal life and that most choices people actually make are not in this framework.
Unless the object weighs exactly enough to require both of their full strength to move, they both have an incentive to defect (not to put their full effort in, and let the other work harder). Mutual defection then results in the object not reaching point B.
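The slacking variant described above can be sketched as a small effort game. All the numbers here (effort costs, the prize, the effort threshold) are illustrative assumptions:

```python
# Sketch of the "slack off" variant: the object moves only if total effort
# meets a threshold; each player chooses Full or Slack effort.
# All numbers are illustrative assumptions.

NEED = 3                          # total effort required to move the object
EFFORT = {"Full": 2, "Slack": 1}  # cost of each effort level
PRIZE = 5                         # value to each player of the object arriving

def payoff(mine, theirs):
    """My net payoff given my effort level and the other player's."""
    moved = EFFORT[mine] + EFFORT[theirs] >= NEED
    return (PRIZE if moved else 0) - EFFORT[mine]

# Slacking is the better reply when the other player works hard...
assert payoff("Slack", "Full") > payoff("Full", "Full")
# ...but mutual slacking fails to move the object and leaves both worse off.
assert payoff("Slack", "Slack") < payoff("Full", "Full")
```

Note that with these numbers the structure is Chicken-like rather than a strict PD: defecting pays only if the other player works hard, and mutual defection is the worst outcome for both, which matches the claim above.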
Most scenarios involve some variation. Even the hunter-prey scenario: the herd or the hunters could deliberately choose a sacrifice, saving both hunters and prey from running and expending additional calories, and reducing the overall number of prey animals the hunters would need to eat. (Consider a real-life example of this -- human herders and their herds. Human-herd relationships are more complex than that, but they could be modeled that way.)
Actually some of the disadvantages of being tall would disappear (in the longish run) if everybody was tall. For example, if the average person was 1.90 m, cars would be designed accordingly and wouldn't be as uncomfortable for people 1.90 m tall.