Comment author: Sable 29 May 2016 02:16:38AM 2 points [-]

Thank you for sharing; I agree with your conclusions about education in general.

With regards to having something to protect, I still haven't figured out what mine is, so I can't answer your final question.

I can, however, observe that many important discoveries and business ventures seem to result from two factors:

1) Having a prepared mind (be looking for opportunity, have the wealth/intelligence/influence to leverage the new information).

2) Complete chance.

Observe that Fleming's discovery of penicillin started with him noticing some mold that had contaminated a culture plate; Percy Spencer discovered microwave cooking when he was working with microwave emitters and noticed a candy bar melting in his pocket; Viagra was originally investigated for high blood pressure, until doctors started getting awkward reports from their patients...

The list goes on.

My point is that it seems like an established pattern that "smart people in the right places at the right times noticing things" is a way people find out what they want to do, and it sounds like you experienced a similar situation.

I think this quote applies beyond just science:

The most exciting phrase to hear in science, the one that heralds new discoveries, is not “Eureka!” (I found it!) but “That’s funny …” — Isaac Asimov

https://www.theguardian.com/science/blog/2012/may/04/oops-invented-rocket-happy-accidents http://quoteinvestigator.com/2015/03/02/eureka-funny/ https://en.wikipedia.org/wiki/Microwave_oven#Discovery https://en.wikipedia.org/wiki/Sildenafil#History

Iterated Gambles and Expected Utility Theory

1 Sable 25 May 2016 09:29PM

The Setup

I'm about a third of the way through Stanovich's Decision Making and Rationality in the Modern World.  Basically, I've gotten through some of the more basic axioms of decision theory (Dominance, Transitivity, etc).

 

As I went through the material, I noted that there were a lot of these:

Decision 5. Which of the following options do you prefer (choose one)?

A. A sure gain of $240

B. 25% chance to gain $1,000 and 75% chance to gain nothing

 

The text goes on to show how most people tend to make irrational choices when confronted with decisions like this; most striking was how often irrelevant context and framing affected people's decisions.

 

But I understand the decision theory bit; my question is a little more complicated.

 

When I was choosing these options myself, I did what I've been taught by the rationalist community to do in situations where I am given nice, concrete numbers: I shut up and multiplied, and at each decision chose the option with the highest expected utility.
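(For concreteness, here's that multiplication as a quick Python sketch - nothing here goes beyond the post, it's just the expected-value arithmetic spelled out:)

```python
# Decision 5, treating dollars as utility.
option_a = [(1.0, 240)]               # sure gain of $240
option_b = [(0.25, 1000), (0.75, 0)]  # 25% chance of $1,000, else nothing

def expected_value(lottery):
    """Sum of probability-weighted payoffs over (probability, payoff) pairs."""
    return sum(p * x for p, x in lottery)

print(expected_value(option_a))  # 240.0
print(expected_value(option_b))  # 250.0 -> B has the higher expectation
```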

 

Granted, I equated dollars with utility, which Stanovich notes humans don't actually do well (see Prospect Theory).

 

 

The Problem

In the above decision, option B clearly has the higher expected utility, so I chose it.  But there was still a nagging doubt in my mind, some part of me that thought, if I was really given this option, in real life, I'd choose A.

 

So I asked myself: why would I choose A?  Is this an emotion that isn't well-calibrated?  Am I being risk-averse for gains but risk-taking for losses?

 

What exactly is going on?

 

And then I remembered the Prisoner's Dilemma.

 

 

A Tangent That Led Me to an Idea

Now, I'll assume that anyone reading this has a basic understanding of the concept, so I'll get straight to the point.

 

In classical decision theory, the choice to defect (rat the other guy out) is strictly superior to the choice to cooperate (keep your mouth shut).  No matter what your partner in crime does, you get a better deal if you defect.

 

Now, I haven't studied the higher branches of decision theory yet (I have a feeling that Eliezer, for example, would find a way to cooperate and make his partner in crime cooperate as well; after all, rationalists should win.)

 

Where I've seen the Prisoner's Dilemma resolved is, oddly enough, in Dawkins's The Selfish Gene, which is where I was first introduced to the idea of an Iterated Prisoner's Dilemma.

 

The interesting idea here is that, if you know you'll be in the Prisoner's Dilemma with the same person multiple times, certain kinds of strategies become available that weren't possible in a single instance of the Dilemma.  Partners in crime can be punished for defecting with future defections of your own.
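(As a toy illustration - the strategies and payoff numbers below are the standard textbook ones, not anything from Dawkins specifically - here's what punishment-by-future-defection looks like in a quick Python sketch:)

```python
# Tit-for-tat: cooperate first, then mirror the partner's previous move,
# punishing each defection with a defection of your own.
def tit_for_tat(my_history, their_history):
    return their_history[-1] if their_history else "C"

def always_defect(my_history, their_history):
    return "D"

# Standard Prisoner's Dilemma payoffs (higher is better):
PAYOFFS = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
           ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def play(strategy1, strategy2, rounds=10):
    h1, h2, score1, score2 = [], [], 0, 0
    for _ in range(rounds):
        m1, m2 = strategy1(h1, h2), strategy2(h2, h1)
        p1, p2 = PAYOFFS[(m1, m2)]
        h1.append(m1); h2.append(m2)
        score1 += p1; score2 += p2
    return score1, score2

print(play(tit_for_tat, tit_for_tat))    # (30, 30): mutual cooperation
print(play(tit_for_tat, always_defect))  # (9, 14): exploited once, then punishment every round
```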

 

The key idea here is that I might have a different response to the gamble if I knew I could take it again.

 

The Math

Let's put on our probability hats and actually crunch the numbers:

Format -  Probability: $Amount of Money | Probability: $Amount of Money

Assuming one picks A over and over again, or B over and over again.

Iteration    Option A    Option B
1            $240        1/4: $1,000 | 3/4: $0
2            $480        1/16: $2,000 | 6/16: $1,000 | 9/16: $0
3            $720        1/64: $3,000 | 9/64: $2,000 | 27/64: $1,000 | 27/64: $0
4            $960        1/256: $4,000 | 12/256: $3,000 | 54/256: $2,000 | 108/256: $1,000 | 81/256: $0
5            $1,200      1/1024: $5,000 | 15/1024: $4,000 | 90/1024: $3,000 | 270/1024: $2,000 | 405/1024: $1,000 | 243/1024: $0

And so on. (If I've made a mistake, please let me know.)
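(The table is just the binomial distribution with p = 1/4; here's a quick Python sketch to check it mechanically:)

```python
from math import comb
from fractions import Fraction

def option_b_distribution(n):
    """Distribution of total winnings after n independent takes of option B,
    each a 1/4 chance of $1,000."""
    p = Fraction(1, 4)
    return {1000 * k: comb(n, k) * p**k * (1 - p)**(n - k)
            for k in range(n + 1)}

for n in range(1, 6):
    dist = option_b_distribution(n)
    print(n, {f"${amt:,}": str(prob) for amt, prob in sorted(dist.items())})
```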

 

The Analysis

It is certainly true that, in terms of expected money, option B outperforms option A no matter how many times one takes the gamble.  Instead, though, let's think in terms of anticipated experience - what we actually expect to happen should we take each bet.

 

The first time we take option B, we note that there is a 75% chance that we walk away disappointed.  That is, if one person chooses option A, and four people choose option B, on average three out of those four people will underperform the person who chose option A.  And it probably won't come as much consolation to the three losers that the winner won significantly bigger than the person who chose A.

 

And since nothing unusual ever happens, we should think that, on average, having taken option B, we'd wind up underperforming option A.

 

Now let's look at further iterations.  In the second iteration, having taken option B twice, we're more likely to have nothing (9/16) than to have anything.

 

In the third iteration, there's about a 57.8% chance that we'll have outperformed the person who chose option A the whole time, and a 42.2% chance that we'll have nothing.

 

In the fourth iteration, there's a 73.8% chance that we'll have matched or done worse than the person who chose option A four times (I'm rounding a bit; $1,000 isn't that much better than $960).

 

In the fifth iteration, the above percentage drops to 63.3%.

 

Now, without doing a longer analysis, I can tell that option B will eventually win.  That was obvious from the beginning.

 

But for most of the first five iterations, there's still a better than even chance you'll wind up with less, picking option B, than by picking option A.  (The third iteration is the exception: there, a single win at $1,000 already beats A's $720, so B is more likely than not to be ahead; and the fourth only counts if we let $1,000 roughly match $960.)
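(Here's a quick Python sketch of that comparison - the `slack` parameter is my own device for reproducing the post's loose matching, where $1,000 counts as roughly equal to $960:)

```python
from math import comb

def p_b_below(n, slack=0):
    """P(total from n takes of option B <= 240*n + slack),
    i.e. the chance option B fails to outperform option A."""
    return sum(comb(n, k) * 0.25**k * 0.75**(n - k)
               for k in range(n + 1) if 1000 * k <= 240 * n + slack)

# Strict comparison, and the looser one with $100 of slack:
for n in range(1, 6):
    print(n, round(p_b_below(n), 3), round(p_b_below(n, slack=100), 3))
# The n=4 slack column matches the 73.8% figure, and n=5 matches 63.3%;
# n=3 is the dip where B is more likely than not to be ahead.
```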

 

 

Conclusions

If we act to maximize expected utility, we should choose option B, at least so long as I hold that dollars=utility.  And yet it seems that one would have to take option B a fair number of times before it becomes likely that any given person, taking the iterated gamble, will outperform a different person repeatedly taking option A.

 

In other words, of the 1025 people taking the iterated gamble:

we expect 1 to walk away with $1,200 (from taking option A five times),

we expect 376 to walk away with more than $1,200, casting smug glances at the scaredy-cat who took option A the whole time,

and we expect 648 to walk away muttering to themselves about how the whole thing was rigged, casting dirty glances at the other 377 people.

 

After all the calculations, I still think that, if this gamble was really offered to me, I'd take option A, unless I knew for a fact that I could retake the gamble quite a few times.  How do I interpret this in terms of expected utility?

 

Am I not really treating dollars as equal to utility, and discounting the marginal utility of the additional thousands of dollars that the 376 win?
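(One way to check: rerun the expected-utility calculation with a concave utility function.  Square root is used below purely as an arbitrary example of diminishing marginal utility, not as a claim about anyone's actual utility function:)

```python
from math import sqrt

option_a = [(1.0, 240)]
option_b = [(0.25, 1000), (0.75, 0)]

def expected_utility(lottery, u):
    """Probability-weighted utility of a list of (probability, payoff) pairs."""
    return sum(p * u(x) for p, x in lottery)

# Risk-neutral (dollars = utility): B wins, 250 > 240.
print(expected_utility(option_a, lambda x: x))  # 240.0
print(expected_utility(option_b, lambda x: x))  # 250.0

# Diminishing marginal utility (square root): A wins comfortably.
print(expected_utility(option_a, sqrt))  # ~15.49
print(expected_utility(option_b, sqrt))  # ~7.91
```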

 

What mistakes am I making?

 

Also, a quick trip to google confirms my intuition that there is plenty of work on iterated decisions; does anyone know a good primer on them?

 

I'd like to leave you with this:

 

If you were actually offered this gamble in real life, which option would you take?

Comment author: Lumifer 09 March 2016 06:00:03PM 4 points [-]

I'm not wondering about the effect culture has on conformity (the territory), I'm wondering about the effect culture has on my prediction of conformity (the map). ... Is their map any different from mine?

Notice that their territory is different from yours. Just that would make you expect their map to be different.

One question that you may ask is whether the bias (the difference between the territory and the map) is a function of the territory: do people in collectivist cultures mis-estimate the prevalent conformity in a different way from people in individualist cultures?

I don't think this is a useless question.

It is not. Consider, for example, one of the issues in political studies: why do repressive regimes that present a solid and impenetrable facade tend to collapse so rapidly once the first cracks in the facade appear? One of the answers is that it's a consequence of available information: a lot of people might be very unhappy with the regime, but as long as they believe they are a powerless minority, they will hide and do nothing. The first cracks basically tell these people "you're not alone, there are many of you", and the regime's collapse follows soon thereafter.

Note the parallels to estimating the conformity of other people.

Comment author: Sable 10 March 2016 02:05:12PM 1 point [-]

One question that you may ask is whether the bias (the difference between the territory and the map) is a function of the territory: do people in collectivist cultures mis-estimate the prevalent conformity in a different way from people in individualist cultures?

Thank you for putting that so clearly.

Comment author: Protagoras 09 March 2016 04:29:29AM 0 points [-]

The research indicates that most people's response to any social science result is "that's what I would have expected," although that doesn't actually seem to be true; you can get them to say they expected conflicting results. Have there really been no studies of when people say they think studies are surprising, comparing the results to what people actually predicted beforehand (I know Milgram informally surveyed what people expected before his study, but I don't think he did any rigorous analysis of expectations)? Perhaps people are as inaccurate in reporting what they find surprising as they are in reporting what they expected. It would certainly be interesting to know!

Comment author: Sable 10 March 2016 02:02:22PM *  1 point [-]

There are studies on hindsight bias, which is what I think you're talking about.

In 1983, researcher Daphna Baratz asked undergraduates to read 16 pairs of statements describing psychological findings and their opposites; they were told to evaluate how likely they would have been to predict each finding. So, for example, they read: “People who go to church regularly tend to have more children than people who go to church infrequently.” They also read, “People who go to church infrequently tend to have more children than people who go to church regularly.” Whether rating the truth or its opposite, most students said the supposed finding was what they would have predicted.

From her dissertation.

(I couldn't find a pdf of the dissertation, but that's its page on worldcat).

As for your specific question:

Have there really been no studies of when people say they think studies are surprising, comparing the results to what people actually predicted beforehand

I have no idea, but I want them.

Comment author: Torchlight_Crimson 10 March 2016 03:11:46AM 2 points [-]

Do people who are genuine dissenters predict that more people will dissent than people who genuinely conform?

Genuine dissenters generally predict that most people will conform, largely because it's a lot easier to notice people conforming when you disagree with the thing they're conforming to.

Comment author: Sable 10 March 2016 01:58:04PM 2 points [-]

Is there any evidence to support this in general?

Also, a dissenter in one area (religion, for example) might be a conformer in another. I think it's worth looking at whether someone who actively protests racial discrimination (in a non-conforming way, so maybe someone from the early civil rights movement) would dissent in Asch's experiment. Does willingness to dissent in one area of your life transfer over to a larger willingness to dissent in other areas of your life?

Comment author: bbleeker 10 March 2016 12:46:34PM 1 point [-]

I think it probably matters a lot what people are conforming about. If it's about perception (which line is the same, which color is different) and several people all say the same thing that's different from what I thought I saw, I can see myself starting to doubt my perception. If it's about memory (what is the capital of Rumania?) I'd start thinking I must have misremembered. But if 4 people all said that 2+2=5, I'd realise the experiment wasn't about what they said it was.

Comment author: Sable 10 March 2016 01:12:02PM *  1 point [-]

Barring a fault in our visual cortex or optical systems - an optical illusion, in other words - how is determining that black is black, or that two lines are the same length, any different from mathematical statements? There's a bit in the Sequences on why 2+2=4 isn't exactly an unconditional truth. The thought processes that go into both include checking your perceptions, checking your memory, and checking reality.

Maybe 2+2=4 is too simple an example, though; it would be downright Orwellian to stand in a room and listen to a group of people declare that 2+2=5. On the other hand, imagine standing in a room with a bunch of people claiming that there isn't an infinite number of primes - it might be easier to doubt your own perceptions.

Anyone else want to weigh in on this? Does Asch's methodology affect conformity?

In response to Purposeful Anti-Rush
Comment author: Sable 09 March 2016 12:46:58AM 2 points [-]

In the American Military, they have a saying when dealing with firearms:

Slow is smooth, and smooth is fast.

Cross-Cultural maps and Asch's Conformity Experiment

6 Sable 09 March 2016 12:40AM

So I'm going through the sequences (in AI to Zombies) and I get to the bit about Asch's Conformity Experiment.

 

It's a good bit of writing, but I mostly pass by without thinking about it too much.  I've been taught about the experiment before, and while Eliezer's point of whether or not the subjects were behaving rationally is interesting, it kind of got swallowed up by his discussion of lonely dissent, which I thought was more engaging.

 

Later, after I'd passed the section on cult attractors and got into the section on letting go, a thought occurred to me, something I'd never actually thought before.

 

Eliezer notes:

 

Three-quarters of the subjects in Asch's experiment gave a "conforming" answer at least once.  A third of the subjects conformed more than half the time.

 

That answer is surprising.  It was surprising to me the first time I learned about the experiment, and I think it's surprising to just about everyone the first time they hear it.  Same thing with a lot of the psychology surrounding heuristics and biases, actually.  Forget the Inquisition - no one saw the Stanford Prison Experiment coming.

 

Here's the thought I had:  Why was that result so surprising to me?

 

I'm not an expert in history, but I know plenty of religious people.  I've learned about the USSR and China, about Nazi Germany and Jonestown.  I have plenty of available evidence of times where people went along with things they wouldn't have on their own.  And not all of them are negative.  I've gone to blood drives I probably wouldn't have if my friends weren't going as well.

 

When I thought about what my prediction would be, had I been asked what percentage of people I thought would dissent before being told, I think I would have guessed that more than 80% of subjects would consistently dissent.  If not higher.

 

And yet that isn't what the experiment shows, and it isn't even what history shows.  For every dissenter in history, there have to be at least a few thousand conformers.  At least.  So why did I think dissent was the norm?

 

I notice that I am confused.

 

So I decide to think about it, and my brain immediately spits out: you're an American in an individualistic culture.  Hypothesis: you expect people to conform less because of the culture you live in/were raised in.  This raises the question: have there been cross-cultural studies done on Asch's Conformity Experiment?  Because if people in China conform more than people in America, then how much people conform probably has something to do with culture.

 

A little googling brings up a 1996 paper that does a meta-analysis on studies that repeated Asch's experiments, either with a different culture, or at a later date in time.  Their findings:

 

The results of this review can be summarized in three parts.

First, we investigated the impact of a number of potential moderator variables, focusing just on those studies conducted in the United States where we were able to investigate their relationship with conformity, free of any potential interactions with cultural variables. Consistent with previous research, conformity was significantly higher, (a) the larger the size of the majority, (b) the greater the proportion of female respondents, (c) when the majority did not consist of out-group members, and (d) the more ambiguous the stimulus. There was a nonsignificant tendency for conformity to be higher, the more consistent the majority. There was also an unexpected interaction effect: Conformity was higher in the Asch (1952b, 1956) paradigm (as was expected), but only for studies using Asch's (1956) stimulus materials; where other stimulus materials were used (but where the task was also judging which of the three comparison lines was equal to a standard), conformity was higher in the Crutchfield (1955) paradigm. Finally, although we had expected conformity to be lower when the participant's response was not made available to the majority, this variable did not have a significant effect.

The second area of interest was on changes in the level of conformity over time. Again the main focus was on the analysis just using studies conducted in the United States because it is the changing cultural climate of Western societies which has been thought by some to relate to changes in conformity. We found a negative relationship. Levels of conformity in general had steadily declined since Asch's studies in the early 1950s. We did not find any evidence for a curvilinear trend (as, e.g., Larsen, 1982, had hypothesized), and the direction was opposite to that predicted by Lamb and Alsifaki (1980).

The third and major area of interest was in the impact of cultural values on conformity, and specifically differences in individualism-collectivism. Analyses using measures of cultural values derived from Hofstede (1980, 1983), Schwartz (1994), and Trompenaars (1993) revealed significant relationships confirming the general hypothesis that conformity would be higher in collectivist cultures than in individualist cultures. That all three sets of measures gave similar results, despite the differences in the samples and instruments used, provides strong support for the hypothesis. Moreover, the impact of the cultural variables was greater than any other, including those moderator variables such as majority size typically identified as being important factors.

Cultural values, it would seem, are significant mediators of response in group pressure experiments.

 

So, while the paper isn't definitive, it (and the papers it draws from) show reasonable evidence that there is a cultural impact on how much people conform.

 

I thought about that for a little while, and then I realized that I hadn't actually answered my own question.

 

My confusion stems from the disparity between my prediction and reality.  I'm not wondering about the effect culture has on conformity (the territory), I'm wondering about the effect culture has on my prediction of conformity (the map).

 

In other words, do people born and raised in a culture with collectivist values (China, for example) or who actually do conform beyond the norm (people who are in a flying-saucer cult, or the people actually living in a compound) expect people to conform more than I did?  Is their map any different from mine?

 

Think about it - with all the different cult attractors, it probably never feels as though you are vastly conforming, even if you are in a cult.  The same can probably be said for any collectivist society.  Imagine growing up in the USSR - would you predict that people would conform with any higher percentage than someone born in 21st century America?  If you were raised in an extremely religious household, would you predict that people would conform as much as they do?  Less?  More?

 

How many times have I agreed with a majority even when I knew they probably weren't right, and never thought of it as "conformity"?  It took a long time for my belief in god to finally die, even when I could admit that I just believed that I believed.  And why did I keep believing (or keep trying to/saying that I believed)?

 

Because it's really hard to actually dissent.  And I wasn't even lonely.

 

So why was my map that wrong?

 

What background process or motivated reasoning or...whatever caused that disparity?

 

One thing that, I think, contributes, is that I was generalizing from fictional evidence.  Batman comes far more readily to my mind than Jonestown.  For that matter, Batman comes more readily to my mind than the millions of not-Batmans in Gotham City.  I was also probably not being moved by history enough.  For every Spartacus, there are at minimum hundreds of not-Spartacuses, no matter what the not-Spartacuses say when asked.

 

But to predict that three-quarters of subjects would conform at least once seems to require a level of pessimism beyond even that.  After all, there were no secret police in Asch's experiment; no one had emptied their bank accounts because they thought the world was ending.

 

Perhaps I'm making a mistake by putting myself into the place of the subject of the experiment.  I think I'd dissent, but I would predict that most people think that, and most people conformed at least once.  I'm also a reasonably well-educated person, but that didn't seem to help the college students in the experiment.

 

Has any research been done on people's prediction of their own and other's conformity, particularly across cultures or in groups that are "known" for their conformity (communism, the very religious, etc.)?  Do people who are genuine dissenters predict that more people will dissent than people who genuinely conform?

 

I don't think this is a useless question.  If you're starting a business that offers a new solution to a problem where solutions already exist, are you overestimating how many people will dissent and buy your product?

Comment author: AlexSchell 15 July 2015 08:47:39PM 5 points [-]

John Maynard Smith's Evolutionary Genetics is a classic textbook. The second edition has simulation/programming exercises after every chapter. Have fun :)

Comment author: Sable 15 July 2015 09:00:34PM 0 points [-]

I'm looking it up on Amazon now. Thanks.

Comment author: ChristianKl 15 July 2015 06:53:02PM 2 points [-]

What do you actually want to know about evolution? How much genetics do you know?

Comment author: Sable 15 July 2015 08:25:04PM 1 point [-]

I'll try to summarize:

1) I want to know enough about the low-level mechanics of gene transfer to be able to model it accurately enough (not necessarily for a scientific paper) with mathematics. This has to have been done before - links to how would be appreciated, or I could start from scratch.

2) I want to know enough about how it works on the macro level to simulate that too, perhaps with the lower level mechanics working behind the scenes.

3) I am very interested in how evolution started - Dawkins references a soup of chemicals, and then the creation of the first replicator mainly by chance over a very long period of time. Is that accurate?

How did evolution work in the beginning? Dawkins mentioned that there were other explanations than the one he gave - what are they? How do I find them?

My training is in engineering/programming, and my genetics knowledge doesn't much exceed anything taught at the high school level. I am, however, prepared to read college-level textbooks on the subject.
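(Not an expert recommendation, but for what it's worth: your point 1 is usually filed under "population genetics," and one classic starting model is Wright-Fisher drift plus selection. A minimal Python sketch, with all parameters invented purely for illustration:)

```python
import random

def wright_fisher(pop_size, p0, fitness_a=1.0, fitness_b=1.0,
                  generations=100, seed=0):
    """Track the frequency of allele A in a fixed-size population where each
    generation's 2N gene copies are drawn by fitness-weighted sampling from
    the previous generation."""
    rng = random.Random(seed)
    p = p0
    history = [p]
    for _ in range(generations):
        # Selection: reweight allele A's frequency by its relative fitness.
        w_bar = p * fitness_a + (1 - p) * fitness_b
        p_sel = p * fitness_a / w_bar
        # Drift: binomial sampling of the next generation's 2N gene copies.
        count = sum(rng.random() < p_sel for _ in range(2 * pop_size))
        p = count / (2 * pop_size)
        history.append(p)
    return history

# With a 5% fitness advantage, allele A tends (but is not guaranteed)
# to drift toward fixation:
hist = wright_fisher(pop_size=100, p0=0.5, fitness_a=1.05, generations=200)
print(hist[-1])
```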

Thanks.
