General-Purpose Questions Thread
Similar to the Crazy Ideas Thread and Diaspora Roundup Thread, I thought I'd try making a General-Purpose Questions Thread.
The purpose is to provide a forum for asking the community questions (appealing to the wisdom of this particular crowd) about things that don't really merit their own thread.
Iterated Gambles and Expected Utility Theory
The Setup
I'm about a third of the way through Stanovich's Decision Making and Rationality in the Modern World. Basically, I've gotten through some of the more basic axioms of decision theory (Dominance, Transitivity, etc).
As I went through the material, I noted that there were a lot of these:
Decision 5. Which of the following options do you prefer (choose one)?
A. A sure gain of $240
B. 25% chance to gain $1,000 and 75% chance to gain nothing
The text goes on to show how most people tend to make irrational choices when confronted with decisions like this; most striking was how often irrelevant context and framing affected people's decisions.
But I understand the decision theory bit; my question is a little more complicated.
When I was choosing these options myself, I did what I've been taught by the rationalist community to do in situations where I am given nice, concrete numbers: I shut up and I multiplied, and at each decision chose the option with the highest expected utility.
Granted, I equated dollars to utility, which Stanovich does mention that humans don't do well (see Prospect Theory).
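For concreteness, the shut-up-and-multiply arithmetic looks like this (a trivial sketch, treating dollars as utility per the caveat above):

```python
# Expected value of each option, treating dollars as utility.
ev_a = 240           # option A: a sure gain of $240
ev_b = 0.25 * 1000   # option B: 25% chance of $1,000, 75% chance of $0
print(ev_a, ev_b)    # 240 vs. 250: B wins by $10 in expectation
```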
The Problem
In the above decision, option B clearly has the higher expected utility, so I chose it. But there was still a nagging doubt in my mind, some part of me that thought, if I was really given this option, in real life, I'd choose A.
So I asked myself: why would I choose A? Is this an emotion that isn't well-calibrated? Am I being risk-averse for gains but risk-taking for losses?
What exactly is going on?
And then I remembered the Prisoner's Dilemma.
A Tangent That Led Me to an Idea
Now, I'll assume that anyone reading this has a basic understanding of the concept, so I'll get straight to the point.
In classical decision theory, the choice to defect (rat the other guy out) is strictly superior to the choice to cooperate (keep your mouth shut). No matter what your partner in crime does, you get a better deal if you defect.
Now, I haven't studied the higher branches of decision theory yet (I have a feeling that Eliezer, for example, would find a way to cooperate and make his partner in crime cooperate as well; after all, rationalists should win.)
Where I've seen the Prisoner's Dilemma resolved is, oddly enough, in Dawkins's The Selfish Gene, which is where I was first introduced to the idea of an Iterated Prisoner's Dilemma.
The interesting idea here is that, if you know you'll be in the Prisoner's Dilemma with the same person multiple times, certain kinds of strategies become available that weren't possible in a single instance of the Dilemma. Partners in crime can be punished for defecting by your own future defections.
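To make the iterated dynamic concrete, here's a minimal sketch: tit-for-tat (the reciprocating strategy Dawkins discusses) against an unconditional defector. The payoff numbers are the conventional illustrative ones, not anything from the text:

```python
# Payoff table (points to each player; higher is better):
# mutual cooperation 3 each, mutual defection 1 each,
# lone defector gets 5, the betrayed cooperator gets 0.
PAYOFF = {("C", "C"): (3, 3), ("D", "D"): (1, 1),
          ("D", "C"): (5, 0), ("C", "D"): (0, 5)}

def play(strat1, strat2, rounds=10):
    """Run an iterated game; each strategy sees the opponent's history."""
    history1, history2 = [], []
    score1 = score2 = 0
    for _ in range(rounds):
        m1 = strat1(history2)
        m2 = strat2(history1)
        history1.append(m1)
        history2.append(m2)
        s1, s2 = PAYOFF[(m1, m2)]
        score1 += s1
        score2 += s2
    return score1, score2

tit_for_tat = lambda opp: "C" if not opp else opp[-1]  # cooperate, then mirror
always_defect = lambda opp: "D"

print(play(tit_for_tat, tit_for_tat))    # (30, 30): sustained cooperation
print(play(always_defect, tit_for_tat))  # (14, 9): one exploit, then mutual defection
```

Defection still "wins" each individual round, but over ten rounds the mutual cooperators end up far ahead of the mutual defectors, which is exactly the structural change iteration buys you.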
The key idea here is that I might have a different response to the gamble if I knew I could take it again.
The Math
Let's put on our probability hats and actually crunch the numbers:
Format for option B: Probability: $Amount of Money; Probability: $Amount of Money
Assuming one picks A over and over again, or B over and over again.

| Iteration | Option A | Option B |
|-----------|----------|----------|
| 1 | $240 | 1/4: $1,000; 3/4: $0 |
| 2 | $480 | 1/16: $2,000; 6/16: $1,000; 9/16: $0 |
| 3 | $720 | 1/64: $3,000; 9/64: $2,000; 27/64: $1,000; 27/64: $0 |
| 4 | $960 | 1/256: $4,000; 12/256: $3,000; 54/256: $2,000; 108/256: $1,000; 81/256: $0 |
| 5 | $1,200 | 1/1024: $5,000; 15/1024: $4,000; 90/1024: $3,000; 270/1024: $2,000; 405/1024: $1,000; 243/1024: $0 |
And so on. (If I've made a mistake, please let me know.)
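The table can be checked mechanically. This sketch computes the exact binomial distribution of option B's winnings after n takes, along with the probability of ending up strictly behind the steady A-taker (strictly, so at n = 4 the $1,000 outcome counts as ahead of $960):

```python
from math import comb

def b_distribution(n, p=0.25, payoff=1000):
    """Exact payoff distribution after n independent takes of option B."""
    return {k * payoff: comb(n, k) * p**k * (1 - p)**(n - k)
            for k in range(n + 1)}

for n in range(1, 6):
    a_total = 240 * n
    dist = b_distribution(n)
    # Probability the B-taker is strictly behind the A-taker after n rounds.
    p_behind = sum(pr for amount, pr in dist.items() if amount < a_total)
    print(f"n={n}: A has ${a_total}, P(B behind A) = {p_behind:.3f}")
```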
The Analysis
It is certainly true that, in terms of expected money, option B outperforms option A no matter how many times one takes the gamble, but instead, let's think in terms of anticipated experience - what we actually expect to happen should we take each bet.
The first time we take option B, we note that there is a 75% chance that we walk away disappointed. That is, if one person chooses option A, and four people choose option B, on average three out of those four people will underperform the person who chose option A. And it probably won't come as much consolation to the three losers that the winner won significantly bigger than the person who chose A.
And since nothing unusual ever happens, we should think that, on average, having taken option B, we'd wind up underperforming option A.
Now let's look at further iterations. In the second iteration, having taken option B twice, we're more likely to have nothing (9/16) than to have anything.
In the third iteration, there's about a 57.8% chance that we'll have outperformed the person who chose option A the whole time, and a 42.2% chance that we'll have nothing.
In the fourth iteration, there's a 73.8% chance that we'll have matched or done worse than the person who has chosen option A four times (I'm rounding a bit; $1,000 isn't that much better than $960).
In the fifth iteration, the above percentage drops to 63.3%.
Now, without doing a longer analysis, I can tell that option B will eventually win. That was obvious from the beginning.
But there's still a better than even chance you'll wind up with less, picking option B, than by picking option A. At least for the first five times you take the gamble.
Conclusions
If we act to maximize expected utility, we should choose option B, at least so long as I hold that dollars=utility. And yet it seems that one would have to take option B a fair number of times before it becomes likely that any given person, taking the iterated gamble, will outperform a different person repeatedly taking option A.
In other words, of the 1025 people taking the iterated gamble:
we expect 1 to walk away with $1,200 (from taking option A five times),
we expect 376 to walk away with more than $1,200, casting smug glances at the scaredy-cat who took option A the whole time,
and we expect 648 to walk away muttering to themselves about how the whole thing was rigged, casting dirty glances at the other 377 people.
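Those head-counts can be verified by direct enumeration: there are 4^5 = 1,024 equally likely five-round histories for option B, and comb(5, k) · 3^(5−k) of them contain exactly k wins.

```python
from math import comb

# Number of equally likely five-round histories with exactly k wins
# (each round: 1 winning outcome, 3 losing outcomes).
counts = {k: comb(5, k) * 3 ** (5 - k) for k in range(6)}

ahead  = sum(c for k, c in counts.items() if k * 1000 > 1200)  # beat the A-taker
behind = sum(c for k, c in counts.items() if k * 1000 < 1200)  # $0 or $1,000
print(ahead, behind)  # 376 and 648, matching the breakdown above
```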
After all the calculations, I still think that, if this gamble was really offered to me, I'd take option A, unless I knew for a fact that I could retake the gamble quite a few times. How do I interpret this in terms of expected utility?
Am I not really treating dollars as equal to utility, and discounting the marginal utility of the additional thousands of dollars that the 376 win?
What mistakes am I making?
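One way to make the marginal-utility hypothesis concrete: plug in any concave utility function and redo the multiplication. Square root is just one illustrative choice (nothing in the text singles it out), but it's enough to flip the preference:

```python
from math import sqrt

# Risk-neutral (dollars = utility): B wins.
ev_a, ev_b = 240, 0.25 * 1000
print(ev_a, ev_b)           # 240 vs. 250

# Concave utility (sqrt, as one stand-in for diminishing marginal
# utility of money): the sure thing now wins comfortably.
eu_a = sqrt(240)            # about 15.5
eu_b = 0.25 * sqrt(1000)    # about 7.9
print(eu_a > eu_b)          # True: risk aversion favors option A
```

So an intuition that favors A isn't necessarily a miscalibration; it's consistent with maximizing expected utility under any sufficiently concave utility function.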
Also, a quick trip to Google confirms my intuition that there is plenty of work on iterated decisions; does anyone know a good primer on them?
I'd like to leave you with this:
If you were actually offered this gamble in real life, which option would you take?
Cross-Cultural maps and Asch's Conformity Experiment
So I'm going through the sequences (in AI to Zombies) and I get to the bit about Asch's Conformity Experiment.
It's a good bit of writing, but I mostly pass by without thinking about it too much. I've been taught about the experiment before, and while Eliezer's point of whether or not the subjects were behaving rationally is interesting, it kind of got swallowed up by his discussion of lonely dissent, which I thought was more engaging.
Later, after I'd passed the section on cult attractors and got into the section on letting go, a thought occurred to me, something I'd never actually thought before.
Eliezer notes:
Three-quarters of the subjects in Asch's experiment gave a "conforming" answer at least once. A third of the subjects conformed more than half the time.
That answer is surprising. It was surprising to me the first time I learned about the experiment, and I think it's surprising to just about everyone the first time they hear it. Same thing with a lot of the psychology surrounding heuristics and biases, actually. Forget the Inquisition - no one saw the Stanford Prison Experiment coming.
Here's the thought I had: Why was that result so surprising to me?
I'm not an expert in history, but I know plenty of religious people. I've learned about the USSR and China, about Nazi Germany and Jonestown. I have plenty of available evidence of times where people went along with things they wouldn't have on their own. And not all of them are negative. I've gone to blood drives I probably wouldn't have if my friends weren't going as well.
When I thought about what my prediction would have been, had I been asked before learning the result, I think I would have guessed that more than 80% of subjects would consistently dissent. If not higher.
And yet that isn't what the experiment shows, and it isn't even what history shows. For every dissenter in history, there have to be at least a few thousand conformers. At least. So why did I think dissent was the norm?
So I decide to think about it, and my brain immediately spits out: you're an American in an individualistic culture. Hypothesis: you expect people to conform less because of the culture you live in/were raised in. This raises the question: have there been cross-cultural studies done on Asch's Conformity Experiment? Because if people in China conform more than people in America, then how much people conform probably has something to do with culture.
A little googling brings up a 1996 paper that does a meta-analysis on studies that repeated Asch's experiments, either with a different culture, or at a later date in time. Their findings:
The results of this review can be summarized in three parts.
First, we investigated the impact of a number of potential moderator variables, focusing just on those studies conducted in the United States where we were able to investigate their relationship with conformity, free of any potential interactions with cultural variables. Consistent with previous research, conformity was significantly higher, (a) the larger the size of the majority, (b) the greater the proportion of female respondents, (c) when the majority did not consist of out-group members, and (d) the more ambiguous the stimulus. There was a nonsignificant tendency for conformity to be higher, the more consistent the majority. There was also an unexpected interaction effect: Conformity was higher in the Asch (1952b, 1956) paradigm (as was expected), but only for studies using Asch's (1956) stimulus materials; where other stimulus materials were used (but where the task was also judging which of the three comparison lines was equal to a standard), conformity was higher in the Crutchfield (1955) paradigm. Finally, although we had expected conformity to be lower when the participant's response was not made available to the majority, this variable did not have a significant effect.
The second area of interest was on changes in the level of conformity over time. Again the main focus was on the analysis just using studies conducted in the United States because it is the changing cultural climate of Western societies which has been thought by some to relate to changes in conformity. We found a negative relationship. Levels of conformity in general had steadily declined since Asch's studies in the early 1950s. We did not find any evidence for a curvilinear trend (as, e.g., Larsen, 1982, had hypothesized), and the direction was opposite to that predicted by Lamb and Alsifaki (1980).
The third and major area of interest was in the impact of cultural values on conformity, and specifically differences in individualism-collectivism. Analyses using measures of cultural values derived from Hofstede (1980, 1983), Schwartz (1994), and Trompenaars (1993) revealed significant relationships confirming the general hypothesis that conformity would be higher in collectivist cultures than in individualist cultures. That all three sets of measures gave similar results, despite the differences in the samples and instruments used, provides strong support for the hypothesis. Moreover, the impact of the cultural variables was greater than any other, including those moderator variables such as majority size typically identified as being important factors.
Cultural values, it would seem, are significant mediators of response in group pressure experiments.
So, while the paper isn't definitive, it (and the papers it draws from) show reasonable evidence that there is a cultural impact on how much people conform.
I thought about that for a little while, and then I realized that I hadn't actually answered my own question.
My confusion stems from the disparity between my prediction and reality. I'm not wondering about the effect culture has on conformity (the territory), I'm wondering about the effect culture has on my prediction of conformity (the map).
In other words, do people born and raised in a culture with collectivist values (China, for example) or who actually do conform beyond the norm (people who are in a flying-saucer cult, or the people actually living in a compound) expect people to conform more than I did? Is their map any different from mine?
Think about it - with all the different cult attractors, it probably never feels as though you are vastly conforming, even if you are in a cult. The same can probably be said for any collectivist society. Imagine growing up in the USSR - would you predict that people would conform with any higher percentage than someone born in 21st century America? If you were raised in an extremely religious household, would you predict that people would conform as much as they do? Less? More?
How many times have I agreed with a majority even when I knew they probably weren't right, and never thought of it as "conformity"? It took a long time for my belief in god to finally die, even when I could admit that I just believed that I believed. And why did I keep believing (or keep trying to/saying that I believed)?
Because it's really hard to actually dissent. And I wasn't even lonely.
So why was my map that wrong?
What background process or motivated reasoning or...whatever caused that disparity?
One thing that I think contributes is that I was generalizing from fictional evidence. Batman comes far more readily to my mind than Jonestown. For that matter, Batman comes more readily to my mind than the millions of not-Batmans in Gotham City. I was also probably not being moved by history enough. For every Spartacus, there are at minimum hundreds of not-Spartacuses, no matter what the not-Spartacuses say when asked.
But to predict that three-quarters of subjects would conform at least once seems to require a level of pessimism beyond even that. After all, there were no secret police in Asch's experiment; no one had emptied their bank accounts because they thought the world was ending.
Perhaps I'm making a mistake by putting myself into the place of the subject of the experiment. I think I'd dissent, but I would predict that most people think that, and most people conformed at least once. I'm also a reasonably well-educated person, but that didn't seem to help the college students in the experiment.
Has any research been done on people's predictions of their own and others' conformity, particularly across cultures or in groups that are "known" for their conformity (communists, the very religious, etc.)? Do people who are genuine dissenters predict that more people will dissent than people who genuinely conform?
I don't think this is a useless question. If you're starting a business that offers a new solution to a problem where solutions already exist, are you overestimating how many people will dissent and buy your product?
Recommended Reading for Evolution?
I'll make this short and sweet.
I've been reading Dawkins's The Selfish Gene, and it's been really helpful in filling in some of the gaps in my understanding of how evolution actually works.
The last biology class I took was in high school, and I don't think the mechanics of evolution are covered particularly well in American high schools.
I'm looking for recommendations - has anyone read any books that accurately describe the process of evolution for someone without specialized knowledge of biology? I've already checked LessWrong's recommended textbooks, and while it recommends some books on evolutionary psychology and on animal behavior from an evolutionary perspective, it doesn't appear to have anything that describes evolution itself in sufficient detail to model it.
I'm toying with the idea of trying to program an evolution simulator, and so I need a fairly detailed, accessible account.
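As a starting point for that simulator, here's a deliberately toy sketch of the core mutation-selection loop. Every choice here (bit-string genomes, a fixed fitness target, truncation selection, the mutation rate) is an arbitrary modeling assumption, not a claim about how real evolution works:

```python
import random

# Fitness = number of bits matching an arbitrary 20-bit target.
TARGET = [1] * 20

def fitness(genome):
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.02):
    """Flip each bit independently with a small probability."""
    return [1 - g if random.random() < rate else g for g in genome]

def evolve(pop_size=100, generations=200, seed=0):
    random.seed(seed)
    population = [[random.randint(0, 1) for _ in TARGET]
                  for _ in range(pop_size)]
    for _ in range(generations):
        # Truncation selection: keep the fitter half unchanged,
        # then refill the population with mutated copies of survivors.
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop_size // 2]
        population = survivors + [mutate(random.choice(survivors))
                                  for _ in survivors]
    return max(fitness(g) for g in population)

print(evolve())  # best fitness climbs toward the maximum of 20
```

A real simulator would want sexual recombination, a non-trivial fitness landscape, and genetic drift, which is exactly where a good book on the mechanics would earn its keep.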
Thanks for the help!
A Challenge: Maps We Take For Granted
Imagine that you were instantly transported into (roughly) the 13th century. I'm not great at history, but I'm picturing sometime around the crusades. You're sitting there, reading this post on your computer, and BAM! Some guy in chain mail is asking you if thou art the spawn of a demon.
Given this situation, I present to you a challenge:
You are stranded in the past. You have no modern technology except your everyday clothes. The only thing you do have is your knowledge from the future.
What do you do?
I'll make this a little more structured for the sake of clarity.
1) You appear in Great Britain (or the appropriate analogue for your native culture).
2) Assume the language barrier is surmountable - in other words, it may not be easy, but you can communicate effectively (by learning the language, or simply adapting to an older version of your native tongue).
3) Further assume that you manage to gain the ear of a ruling lord (how is not important, just say you're a wizard or something) and that he provides you with enough money, labor, and expertise (carpenters, smiths, etc.) to build something *so long as you can describe it in enough detail*.
4) You are only allowed to pull from general, scientifically literate knowledge - high school/bachelor's level only.
5) You can't use your knowledge of future events to your advantage, as it requires too expert a grasp of history. Only your knowledge of the way the world actually works is available.
The reason for 4) has to do with the point of the question. I'm trying to figure out the kind of maps that we have today that are considered "general knowledge" - the kinds of things that are so obvious to us we tend to not realize that people in the past didn't know them.
I'll go first.
The germ theory of disease didn't achieve widespread acceptance until the 19th century. In other words, I'm the only person in the past who is quite confident about how diseases are spread. This means that I can offer practical advice about sanitation when dealing with injuries and plagues. I can make sure that people wash their hands before cutting other people up, and after dealing with corpses. I can make sure that cutting instruments are sanitized (they did have alcohol) before use. And so on. This should reduce the number of deaths from disease in the kingdom, and prove my worth to the king.
I'm trying to build a list of things like this - maps of the way the world really is that we take for granted.
Have fun!
What Would You Do If You Only Had Six Months To Live?
Recently, I've been pondering situations in which a person realizes, with (let's say) around 99% confidence, that they are going to die within a set period of time.
The reason for this could be a kind of cancer without any effective treatment, an injury of some kind, or a communicable disease or virus (such as Ebola). More generally, the simple fact that until Harry Potter-Evans-Verres makes the Philosopher's Stone available to us muggles, we're all going to die eventually makes this kind of consideration valuable.
Let's say that you felt ill, and decided to visit the doctor. After the appropriate tests by the appropriate medical professionals, an old man with a kind face tells you that you have brain cancer. It is inoperable (or the operation has less than a 1% success rate) and you are given six months to live. This kindly old doctor adds that he is very sorry, and gives you a prescription for something to deal with the symptoms (at least for a while).
Furthermore, you understand something of probability, and so while you might hope for a miracle, you know better than to count on one. Which means that even if there exists a .0001% chance you'll live for another 50 years, you have to act as though you're only going to live another six months.
What should you do?
The first answer I thought of was, "go skydiving," which is a cheeky shorthand for trying to enjoy your own life as much as you can until you die. Upon reflection, however, that seems like an awfully hedonistic answer, doesn't it? Given this philosophy, you should gorge yourself on donuts, spend your life's savings on expensive cars and prostitutes, and die with a smile on your face.
Something doesn't seem quite right about this approach. For one, it completely ignores things like trying to take care of the people close to you that you're leaving behind, but even if you're a friendless orphan it doesn't make sense to live like that. Dopamine is not happiness, and feeling alive isn't necessarily what life is about. I took a university course centered around Aristotle's Nicomachean Ethics, and one of the examples we used to distinguish a "happy" life from a "well-spent" life was that of the math professor who spends her days counting blades of grass. While counting those blades of grass might make her happiest, she is still wasting her life and potential. Likewise, the person who spends their short remaining months in self-indulgent indolence is wasting a chance to do something - what, I'm not quite sure, but still something worthwhile.
The second answer I thought of seems to be the reasonable one - spend your six months preparing yourself and your loved ones for your inevitable demise. There are things to get in order, funeral arrangements to make, a will to update, and then there's making sure your dependents are taken care of financially. You never thought dying involved so much paperwork! Also, you might consider making peace with whatever beliefs you have about the world (religious or not), and trying to accept the end so you can enjoy what time you have left.
This seems to be the technically correct answer to me - the kind of answer that is consistent with a responsible, considerate individual faced with such a situation. However, much like the Ten Commandments, the kind of morality that this approach shows seems to be a bare-minimum morality. The kind of morality expressed by "Thou Shalt Not Kill," rather than the kind of over-and-above morality expressed by "Thou Shalt Ensure No One Shall Ever Die Again, Ever" which seems to be popular on LessWrong and in the Effective Altruism community. Or at the very least, seems to be expressed by Mr. Yudkowsky.
So I started wondering - what exactly would someone who judges morality by expected utility and who subscribes to an over-and-above approach do with the knowledge that they were going to die?
There's an old George Carlin joke about death:
But you can entertain, and the only reason I suggest you can do something with the way you die is a little known...and less understood portion of death called..."The Two Minute Warning." Obviously, many of you do not know about it, but just as in football, two minutes before you die, there is an audible warning: "Two minutes, get your **** together" and the only reason we don't know about it is 'cause the only people who hear it...die! And they don't have a chance to explain, you know. I don't think we'd listen anyway.
But there is a two minute warning and I say use those two minutes. Entertain. Uplift. Do something. Give a two minute speech. Everyone has a two minute speech in them. Something you know, something you love. Your vacation, man...two minutes. Really do it well. Lots of feeling, lots of spirit, and build, wax eloquent for the first time. Reach a peak. With about five seconds left, tell them, "If this is not the truth, may God strike me dead!" THOOM! From then on, you command much more attention.
As usual with Mr. Carlin's humor, there is a very interesting idea hidden in the humor. Here, the idea is this: There is power in knowing when you will die. Note that this isn't just having nothing left to lose - because people who have nothing left to lose often still have their lives.
My third idea, attempting to synthesize all of this, has to do with self-immolation. The idea of setting yourself on fire as an act of political protest. Please note that I am not recommending that anyone do this (cough, any lawyers listening, cough).
It's just that martyrdom is so much more palatable a concept when you know you're going to die anyway. Instead of waiting for the cancer to kill you, why shouldn't you sell your life for something more valuable? I'm not saying don't make arrangements for your death, because you should, but if you can use your death to galvanize people to action, shouldn't you? In Christopher Nolan's Batman Begins, the deaths of Thomas and Martha Wayne were the catalyst that caused Gotham to rejuvenate itself from the brink of economic collapse. If your death could serve a similar purpose, and you are committed to making the world a better place...
And maybe you don't have to actually commit suicide by criminal (or cop, or fire, etc...) but the risk-reward calculation for any extremely ethical but extremely dangerous activity has changed. You could volunteer to fight Ebola in Africa, knowing that if you catch it, you'll only be dying a few months ahead of schedule. You could try to videotape the atrocities committed by some extremist group and post it on the internet. And so on.
In summary, it seems to me that people don't tend to think about dying as an act, as something you do, instead of as something that happens to you. It's a lot like breathing: generally involuntary, but you still have a say in exactly when it happens. I'm not saying that everyone should martyr themselves for whichever cause they believe in. But if you happen to be told that you're already dying...from the standpoint of expected utility, becoming a martyr makes a lot more sense. Which isn't exactly intuitive, but it's what I've come up with.
Now pretend that the kindly old doctor has shuffled into the room, blinking as he shuffles a few papers. "I'm very sorry," he says, "But you've only got about 70 years to live..."
Guidelines for Upvoting and Downvoting?
I've only recently joined the LessWrong community, and I've been having a blast reading through posts and making the occasional comment. So far, I've received a few karma points, and I’m pretty sure I’m more proud of them than of all the work I did in high school put together.
My question is simple, and aimed a little more towards the veterans of LessWrong:
What are the guidelines for upvoting and downvoting? What makes a comment good, and what makes one bad? Is there somewhere I can go to find this out (I've looked, but there doesn't seem to be a guide on LessWrong already up. On the other hand, I lose my glasses while wearing them, so…)
Additionally, why do I sometimes see discussion posts with many comments but few upvotes, and others with many upvotes but few comments? If a post is worth commenting on, isn't it worth upvoting? I feel as though my map is missing a few pages here.
Not only would having a clear discussion of this help me review the comments of others better, it would also help me understand what I’m being reinforced for on each of my comments, so I can alter my behaviors accordingly.
I want to help keep this a well-kept garden, but I’m struggling to figure out how to trim the hedges.