Lumifer comments on Thoughts on minimizing designer baby drama - Less Wrong

17 [deleted] 12 May 2015 11:22AM


Comment author: Lumifer 13 May 2015 03:00:33PM 2 points [-]

I'm using the word "cooperate" in the technical sense of "cooperate in a prisoner's dilemma". In this sense it's possible for an outside force to coerce cooperation

For the technical sense of "cooperate in a prisoner's dilemma" you need to have a prisoner's dilemma situation to start with. Once you coerce cooperation you have effectively changed the payoffs in the matrix -- the "defect" cell now has a huge negative number in it; that's what coercion means. It's not a prisoner's dilemma any more.
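
With made-up numbers, that payoff change can be sketched directly (the fine of 10 on defectors is purely illustrative):

```python
# Sketch of the point above: coercing cooperation rewrites the payoff
# matrix so that the game is no longer a prisoner's dilemma.
# Payoffs are (row player, column player); numbers are illustrative.

def is_dilemma(matrix):
    """True if 'defect' strictly dominates 'cooperate' for the row
    player -- the hallmark of a prisoner's dilemma."""
    return all(
        matrix[("D", opp)][0] > matrix[("C", opp)][0]
        for opp in ("C", "D")
    )

# Classic PD payoffs.
pd = {
    ("C", "C"): (3, 3), ("C", "D"): (0, 5),
    ("D", "C"): (5, 0), ("D", "D"): (1, 1),
}

# Coercion: an outside force imposes a fine of 10 on any defector.
coerced = {
    (a, b): (pa - (10 if a == "D" else 0), pb - (10 if b == "D" else 0))
    for (a, b), (pa, pb) in pd.items()
}

print(is_dilemma(pd))       # True: defect dominates
print(is_dilemma(coerced))  # False: not a prisoner's dilemma any more
```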

in the same way that e.g. the government forces your neighbor to cooperate rather than defect and steal your stuff

Huh? Why do you think I'm in a prisoner's dilemma situation with my neighbour?

Comment author: Jiro 15 May 2015 02:32:26PM 2 points [-]

Huh? Why do you think I'm in a prisoner's dilemma situation with my neighbour?

If you make your child taller, your child is better off (+competitive advantages, -other disadvantages of being taller) and your neighbor's child is worse off (-competitive advantages).

If your neighbor makes his child taller, his child is better off and yours is worse off.

If you both make your children taller, the competitive advantages cancel out and you each have only the disadvantages.
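
With made-up numbers (+2 for the competitive advantage, -1 for the other drawbacks of being tall), that structure is exactly a prisoner's dilemma:

```python
# Jiro's height example as an explicit payoff matrix. Numbers are
# made up for illustration. "T" = make your child taller, "N" = don't.

heights = {
    ("N", "N"): (0, 0),
    ("T", "N"): (2 - 1, -2),   # your child gains, the neighbor's loses
    ("N", "T"): (-2, 2 - 1),
    ("T", "T"): (-1, -1),      # advantages cancel; drawbacks remain
}

def defect_dominates(matrix):
    """True if choosing 'T' beats 'N' for the row player no matter
    what the neighbor does -- the PD dominance condition."""
    return all(matrix[("T", o)][0] > matrix[("N", o)][0] for o in "TN")

print(defect_dominates(heights))  # True: both parents end up at (-1, -1)
```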

Comment author: Lumifer 15 May 2015 03:00:27PM 2 points [-]

Being tall is not a disadvantage even if you take away "competitive advantages" (normally tall, not freakishly tall). An arms race is a different situation than a prisoner's dilemma.

The original claim was that the neighbor might "steal your stuff" which isn't a prisoner's dilemma either.

And most importantly, I do have neighbors. I don't feel I am in a prisoner's dilemma situation with them and I suspect they don't feel it either.

Comment author: V_V 15 May 2015 05:12:04PM 0 points [-]

And most importantly, I do have neighbors. I don't feel I am in a prisoner's dilemma situation with them and I suspect they don't feel it either.

Because the government altered the payoff matrix making cooperation individually preferable to defection.

Imagine you were a hunter-gatherer: within your tribe, a system of reputation and customs, with associated punishments for defectors, tended to enforce cooperation. But different tribes occupying neighboring areas typically recognized no social obligations towards each other, and as a result encounters were tense and very often violent; warfare and marauding were endemic.

With a modern government you can interact with most strangers from your country or most other countries with a reasonable expectation that the interaction will be peaceful and productive.

Comment author: Lumifer 15 May 2015 05:49:41PM 2 points [-]

Because the government altered the payoff matrix making cooperation individually preferable to defection.

It wasn't a prisoner's dilemma to start with. Hunter-gatherers do not live in constant prisoner's dilemma situations.

I don't get the LW's obsession with the prisoner's dilemma. It's a very specific kind of situation, rare in normal life. If you have a choice between cooperation and non-cooperation that does not automatically mean you're in a prisoner's dilemma.

Comment author: OrphanWilde 15 May 2015 06:00:00PM 0 points [-]

Hunter A steals Hunter B's kills/wives/whatever. Defection pays off. Cooperation always pays more overall, defection pays the defector better. "Government" in this case is tribal; we'll kill or exile defectors. (Exile is probably the genetically preferable option, since it may result in some of your genes being spread to other tribes, assuming you share more genetics with in-tribe than with out-tribe individuals; a prisoner's dilemma in itself.)

Pretty much every situation in real life involves some variant on the prisoner's dilemma, almost always with etiquette, ethical, or legal prohibitions against defection.

Comment author: Lumifer 15 May 2015 07:51:39PM *  2 points [-]

Cooperation always pays more overall, defection pays the defector better.

Nonsense. First, cooperation does not always pay more, and second, the whole point of the prisoner's dilemma is that cooperation pays each agent better, conditional on them cooperating. "Overall" is a very nebulous concept, anyway, unless you take the hard utilitarian position and start adding up utils.

If cooperation were that beneficial, unconditional cooperation would have been hardwired in our genes.

Pretty much every situation in real life involves some variant on the prisoner's dilemma

Nope, I strongly disagree. To take a trivial example, Alice doesn't steal Bob's car because she thinks she'll be caught and sent to prison. Alice is NOT "cooperating" with Bob, she is reacting to incentives (in this case, threat of imprisonment) which have nothing to do with the prisoner's dilemma.

Comment author: OrphanWilde 15 May 2015 08:01:33PM -1 points [-]

Nonsense. Hunter A kills hunter B, takes his wives, his meat, and his cave and lives in it happily thereafter.

"Overall" means "Combining the utility-analog of both parties", not "More utility-analog for a given party". With only one hunter, there are fewer kills/less meat overall, at the least.

Nope, I strongly disagree. To take a trivial example, Alice doesn't steal Bob's car because she thinks she'll be caught and sent to prison. Alice is NOT "cooperating" with Bob, she is reacting to incentives (in this case, threat of imprisonment) which have nothing to do with the prisoner's dilemma.

The incentives are the product of breaking the prisoner's dilemma - the "government altered the payoff matrix" and all that. Etiquette, ethics, and law are increasing levels of rules, with punishments for breaking them, whose core purpose is to alter the payoffs for defection. They range from something as subtle as the placement of utensils at a dinner table to prohibit subtle threats to other guests, with less desirable seat placements as punishment for not living up to standards of etiquette, to shooting somebody for escalating a police situation one time too many in an attempt to escape punishment.

Comment author: Lumifer 15 May 2015 08:12:27PM *  1 point [-]

Combining the utility-analog of both parties

I am not a utilitarian. I don't understand how you are going to combine the utils of both parties.

With one hunter less, there are fewer kills but fewer mouths to feed as well.

The incentives are the product of breaking the prisoner's dilemma

If it's broken, it's not a prisoner's dilemma situation any more. If you want to argue that it exists as a counterfactual I'll agree and point out that a great variety of things (including ravenous pink unicorns with piranha teeth) exist as a counterfactual.

Comment author: OrphanWilde 15 May 2015 08:23:03PM 0 points [-]

I am not a utilitarian. I don't understand how are you going to combine the utils of both parties.

I'm also not a utilitarian, and at this point you're just quibbling over semantics rather than making any kind of coherent point. Of course you can't combine the utils, that's the -point- of the problem. Arguing that cooperation-defection results in the most gain for the defector is just repeating part of the problem statement of the prisoner's dilemma.

If it's broken, it's not a prisoner's dilemma situation any more. If you want to argue that it exists as a counterfactual I'll agree and point out that a great variety of things (including ravenous pink unicorns with piranha teeth) exist as a counterfactual.

Please, if you would, maintain the context of the conversation taking place. This gets very tedious when I have to repeat everything that was said in every previous comment. http://lesswrong.com/lw/m6b/thoughts_on_minimizing_designer_baby_drama/cdaa <- This is where this chain of conversation began. If this is your response, you're doing nothing but conceding the point in a hostile and argumentative way.

Comment author: Epictetus 15 May 2015 08:11:01PM 1 point [-]

Pretty much every situation in real life involves some variant on the prisoner's dilemma, almost always with etiquette, ethical, or legal prohibitions against defection.

Chicken comes up fairly often and there mutual defection is by far the worst outcome for either party (i.e. if you knew the other guy wanted to defect, you'd cooperate).
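
With illustrative payoffs, the contrast with the PD is easy to check: in Chicken, your best response depends on what the other player does.

```python
# Sketch of the Chicken point above: unlike the PD, if you know the
# other player will defect, your best response is to cooperate
# (swerve). Payoffs are (row, column) and purely illustrative.

chicken = {
    ("C", "C"): (3, 3), ("C", "D"): (1, 4),
    ("D", "C"): (4, 1), ("D", "D"): (0, 0),  # mutual defection is worst
}

def best_response_to(opp):
    """Row player's best action given the opponent's known move."""
    return max("CD", key=lambda a: chicken[(a, opp)][0])

print(best_response_to("D"))  # 'C': against a defector, swerve
print(best_response_to("C"))  # 'D': against a swerver, go straight
```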

Comment author: Lumifer 15 May 2015 08:17:05PM 3 points [-]

In an even simpler case, if you are a business, trying to cooperate instead of "defecting" will get you charged with anti-trust violations.

Comment author: OrphanWilde 15 May 2015 08:31:02PM -1 points [-]

True. But challenging somebody to a Chicken-like game in the first place can be modeled as a Defection in a prisoner's dilemma: you win if they Cooperate and refuse, and both of you are worse off if they Defect and agree to the game.

Comment author: Lumifer 15 May 2015 08:47:06PM 2 points [-]

can be modeled as a Defection in a prisoner's dilemma

No, it can not -- in a PD you make your decision not knowing the other party's decision. Here if you challenge, the other party already knows your choice before having to make its own.

Comment author: Dorikka 15 May 2015 09:08:44PM 0 points [-]

So get a reputation for being revengeBot?

Comment author: OrphanWilde 15 May 2015 09:32:03PM -2 points [-]

You've Defected, and they've Cooperated, the moment you issued your challenge, and they didn't. They're now in a disadvantageous position, and you're in an advantageous position; their subsequent Defection is in a different game with altered payoffs, but it also qualifies as a PD. (You could, after all, Cooperate in the subsequent game, and retract your challenge.)

Prisoner's Dilemma is generally iterative in real life.

Comment author: V_V 15 May 2015 06:58:28PM -1 points [-]

Prisoner's dilemma is the simplest idealized form of all scenarios where a group of agents prefers that everyone cooperate with everyone else rather than everyone defect against everyone else, but where each individual agent has an incentive to defect, whatever the other agents do.

There are other common types of scenarios, of course. In zero-sum scenarios cooperation is not possible: a hunter and their prey can't cooperate to split calories between each other in a way that benefits both.
In other scenarios, cooperation is trivially the best choice: if Alice and Bob want to move a heavy object from point A to point B, and neither is strong enough to move it alone but they can move it with their combined strength, then they both have an incentive to cooperate and neither has an incentive to defect, since if either of them defects the heavy object doesn't reach point B.

These scenarios are trivial from a game-theoretical perspective. The simplest and arguably the most practically relevant scenario where coordination is beneficial but can't be trivially achieved is the prisoner's dilemma.

Comment author: Nornagest 15 May 2015 08:29:18PM *  3 points [-]

Stag hunts (which are not the same as the hunter/prey scenarios discussed elsewhere in this thread) are another theoretically nontrivial category of coordination games with interesting social/behavioral implications -- arguably more than the prisoner's dilemma, though that probably depends on what kind of life you happen to find yourself in. I don't know why they don't get much exposure on LW, but it might have something to do with the fact that they don't have the PD's historical links to AI.

Comment author: V_V 16 May 2015 02:51:56PM *  0 points [-]

I agree that Stag hunt is theoretically and practically interesting, but I would say that it is not as interesting as the Prisoner's dilemma.

In order to "solve" a Stag hunt (in the sense of realizing the Pareto-optimal outcome), all you need is a communication channel between the players; even a one-shot, one-way channel suffices.
In a Prisoner's dilemma, communication is not enough: you need either to iterate the game or to modify the payoff matrix.
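
That difference can be checked mechanically with illustrative payoffs: in a Stag hunt, mutual cooperation is itself a Nash equilibrium, so a credible "I'll hunt stag" message is enough; in a PD it is not.

```python
# Contrast between Stag hunt and PD, per the point above.
# Payoffs are (row, column) and purely illustrative.

stag_hunt = {
    ("C", "C"): (4, 4), ("C", "D"): (0, 2),
    ("D", "C"): (2, 0), ("D", "D"): (2, 2),
}
pd = {
    ("C", "C"): (3, 3), ("C", "D"): (0, 5),
    ("D", "C"): (5, 0), ("D", "D"): (1, 1),
}

def is_nash(matrix, profile):
    """True if neither player gains by unilaterally deviating."""
    a, b = profile
    row_ok = all(matrix[(a, b)][0] >= matrix[(x, b)][0] for x in "CD")
    col_ok = all(matrix[(a, b)][1] >= matrix[(a, y)][1] for y in "CD")
    return row_ok and col_ok

print(is_nash(stag_hunt, ("C", "C")))  # True: announced cooperation sticks
print(is_nash(pd, ("C", "C")))         # False: each player would deviate
```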

There are other games that have significant practical applicability, such as Chicken/Volunteer's dilemma and Ultimatum.

but it might have something to do with the fact that they don't have the PD's historical links to AI.

I'm not aware of these links, do you have a reference?

Comment author: Nornagest 18 May 2015 12:33:06AM 0 points [-]

I'm not aware of these links, do you have a reference?

Not offhand, but the PD (specifically, the iterated version) is a classic exercise to motivate prediction and interaction between software agents. I wrote a few in school, though I was better at market simulations. Believe LW ran a PD tournament at some point, too, though I didn't participate in that one.
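
That classic exercise is tiny in its simplest form. Here's a minimal sketch with the two standard textbook strategies (nothing from any actual LW tournament):

```python
# Minimal iterated PD between two simple software agents: tit-for-tat
# versus always-defect. Payoffs are the standard textbook values.

PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(opponent_history):
    """Cooperate first, then copy the opponent's last move."""
    return opponent_history[-1] if opponent_history else "C"

def always_defect(opponent_history):
    return "D"

def play(strat_a, strat_b, rounds=10):
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        a, b = strat_a(hist_b), strat_b(hist_a)
        pa, pb = PAYOFF[(a, b)]
        score_a += pa
        score_b += pb
        hist_a.append(a)
        hist_b.append(b)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))    # (30, 30): mutual cooperation
print(play(tit_for_tat, always_defect))  # (9, 14): exploited once, then punished
```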

Comment author: V_V 18 May 2015 12:59:05AM 0 points [-]

Not offhand, but the PD (specifically, the iterated version) is a classic exercise to motivate prediction and interaction between software agents.

I believe it's because it is at the same time very simple to explain and very interesting.

Believe LW ran a PD tournament at some point, too, though I didn't participate in that one.

I think they ran two variations of program-equilibrium PD. I participated in the last one.

Comment author: Lumifer 15 May 2015 07:57:19PM 2 points [-]

I understand that the prisoner's dilemma is interesting and non-trivial from the game-theoretic perspective. That does not contradict my point that it's rare in normal life and that most choices people actually make are not in this framework.

Comment author: OrphanWilde 15 May 2015 07:22:55PM 0 points [-]

In other scenarios, cooperation is trivially the best choice: if Alice and Bob want to move a heavy object from point A to point B and neither is strong enough to move it alone, but they can move with their combined strength, then they have an incentive to cooperate, neither has an incentive to defect since if one of them defects then the heavy object doesn't reach point B.

Unless the object weighs exactly enough that it requires both of their full strength to move, they both have an incentive to defect (not to put their full effort in, and let the other work harder). Mutual defection then results in the object not reaching point B.

Most scenarios involve some variation. Even the hunter-prey scenario: the herd or the hunters could deliberately choose a sacrifice, saving both hunters and prey from running and expending additional calories on all sides, and reducing the number of prey animals, overall, that the hunters would need to eat. (Consider a real-life example of this - human herders and their herds. Human-herd relationships are more complex than that, but they could be modeled that way.)

Comment author: Good_Burning_Plastic 17 May 2015 09:46:59AM 0 points [-]

Actually, some of the disadvantages of being tall would disappear (in the longish run) if everybody were tall. For example, if the average person were 1.90 m, cars would be designed accordingly and wouldn't be as uncomfortable for people 1.90 m tall.