Sleepwalk bias, self-defeating predictions and existential risk
Connected to: The Argument from Crisis and Pessimism Bias
When we predict the future, we often seem to underestimate the degree to which people will act to avoid adverse outcomes. Examples include Marx's prediction that the ruling classes would fail to act to avert a bloody revolution, predictions of environmental disasters and resource constraints, Y2K, etc. In most or all of these cases there could have been a catastrophe, had people not acted with determination and ingenuity to prevent it. But when pressed, people often do just that, and it seems that we often fail to take this into account when making predictions. In other words: too often we postulate that people will sleepwalk into a disaster. Call this sleepwalk bias.
What are the causes of sleepwalk bias? I think there are two primary causes:
Cognitive constraints. It is easier to just extrapolate existing trends than to engage in complicated reasoning about how people will act to prevent those trends from continuing.
Predictions as warnings. We often fail to distinguish between predictions in the pure sense (what I would bet will happen) and what we may term warnings (what we think will happen, unless appropriate action is taken). Some of these predictions could perhaps be interpreted as warnings - in which case, they were not as bad as they seemed.
However, you could also argue that they were actual predictions, and that they were more effective precisely because they were predictions rather than warnings. For, more often than not, there will of course be lots of work to reduce the risk of disaster, which will in fact reduce that risk. This means that a warning saying "if no action is taken, there will be a disaster" is not necessarily very effective as a way to change behaviour, since we know for a fact that action will be taken. A prediction that there is a high probability of a disaster, all things considered, is much more effective. Indeed, the fact that predictions are more effective than warnings might be the reason why people predict disasters rather than warn about them. Such predictions are self-defeating - which, you may argue, is why people make them.
In practice, I think people often fail to distinguish between pure predictions and warnings. They slide between these interpretations. In any case, the effect of all this is for these "prediction-warnings" to seem too pessimistic qua pure predictions.
The upshot for existential risk is that those suffering from sleepwalk bias may be too pessimistic. They fail to appreciate the enormous efforts people will make to avoid an existential disaster.
Is sleepwalk bias common among the existential risk community? If so, that would be a pro tanto reason to be somewhat less worried about existential risk. Since it seems to be a common bias, it would be unsurprising if the existential risk community also suffered from it. On the other hand, they have thought about these issues a lot, and may have been able to overcome it (or even overcorrect for it).
Also, even if sleepwalk bias does indeed affect existential risk predictions, it would be dangerous to let this notion make us decrease our efforts to reduce existential risk, given the enormous stakes, and the present neglect of existential risk. If pessimistic predictions may be self-defeating, so may optimistic predictions.
[Added 24/4 2016] Under which circumstances can we expect actors to sleepwalk? And under what circumstances can we expect that people will expect them to sleepwalk, even though they won't? Here are some considerations, inspired by the comments below. Sleepwalking is presumably more likely if:
1. The catastrophe arrives too fast for actors to react.
2. It is unclear whether the catastrophe will in fact occur, or it is at least not very observable to the relevant actors (the financial crisis, possibly AGI).
3. The possible disaster, though observable in some sense, is not sufficiently salient (especially to voters) to override more immediate concerns (climate change).
4. There are conflicts (World War I) and/or free-riding problems (climate change) which are hard to overcome.
5. The problem is technically harder than initially thought.
1, 2 and, in a way, 3, have to do with observing the disaster in time to act, whereas 4 and 5 have to do with ability to act once the problem is identified.
On the second question, my guess would be that people in general do not differentiate sufficiently between scenarios where sleepwalking is plausible and those where it is not (i.e. predicted sleepwalking has less variance than actual sleepwalking). This means that we sometimes probably underestimate the amount of sleepwalking, but more often, if my main argument is right, we overestimate it. An upshot of this is that it is important to try to carefully model the amount of sleepwalking that there will be regarding different existential risks.
[Video] The Essential Strategies To Debiasing From Academic Rationality
Thinking like a Scientist
Biases and Fallacies Game Cards
On the Stupid Questions Thread I asked
I need some list of biases for a game of Biased Pandemic for our Meet-Up. Do suitably prepared/formatted lists exist somewhere?
But none came forward.
Therefore I created a simple deck based on Wikipedia entries. I selected those biases that can presumably be used easily in a game, summarized each description and added an illustrative quote.
The deck can be found in Dropbox here (PDF and ODT).
I'd be happy for corrections and further suggestions.
ADDED: We used these cards during the LW Hamburg Meetup. They attracted significant interest, and even though we didn't use them during a board game, we drew them and tried to act them out during a discussion round (which didn't work out that well but stimulated discussion nonetheless).
Cognitive Bias Mnemonics
How many cognitive biases can you name, off the top of your head?
Try it, before moving on.
Give yourself sixty seconds.
Make a list.
Write them down.
I know that I've read about a number of biases by now, but they don't come to mind very easily. If I wish to become wary enough to spot cognitive biases in my own thought, then I might appreciate being able to quickly summon many examples of cognitive biases to mind. This would also make it easier to share examples of cognitive biases with others.
I plan to create a set of mnemonics for important biases, to make it easier for myself to remember them (and, as a consequence, to make it easier to spot them and eliminate them). I'll imagine each bias as an item; by visualizing the collection of items, I can remember the biases. If I really want to make sure that I don't forget any, they could be placed along a path in a mind palace.
Example mnemonic: Hindsight bias is an old leather boot. It's an old leather boot because that reminds me of the past, which clues the name of the bias. And anyways, psshh, why is everyone so excited about the idea of footwear? Anyone could have come up with that! It's just like clothes, but for feet! I could have invented it myself, it's so obvious! Hindsight bias: it could happen to you.
Using various lists of cognitive biases, I'm going to be performing this exercise myself and making mnemonics to remember them by. I might post these at some point, but if you're interested in the outcome, I recommend trying to make mnemonics for yourself first -- the associations will be more meaningful to you, personally, that way.
But beware that conceptualizing a bias as a mnemonic might not be perfect, just like conceptualizing biases as named ideas might not be perfect -- more on that here.
For the comments: What witty mnemonics can you come up with?
Are Cognitive Biases Design Flaws?
I am a newbie, so today I read the article "Your Strength as a Rationalist" by Eliezer Yudkowsky, which helped me understand the focus of LessWrong, but I respectfully disagreed with a line in the last paragraph:
It is a design flaw in human cognition...
So this was my comment in the article's comment section which I bring here for discussion:
Since I think evolution makes us quite fit to our current environment, I don't think cognitive biases are design flaws. In the example above, you imply that even though you had the information available to guess the truth, your guess was a different, false one, and that you therefore experienced a flaw in your cognition.
My hypothesis is that reaching the truth, or communicating it on IRC, may not have been the end objective of your cognitive process. In this case, simply dismissing the issue as something that was not important anyway - "so move on and stop wasting resources on this discussion" - was perhaps the "biological" objective, and as such the outcome should be considered correct, not a flaw.
If the above is true, then all cognitive biases, simplistic heuristics, fallacies, and dark arts are good, since we have conducted our lives for 200,000 years according to them, and we are alive and kicking.
Rationality and our search to be LessWrong, which I support, may be tools we are developing to evolve in our competitive ability within our species, but not a "correction" of something that is wrong in our design.
Edit 1: I realize there is change in the environment and that may make some of our cognitive biases, which were useful in the past, to be obsolete. If the word "flaw" is also applicable to describe something that is obsolete then I was wrong above. If not, I prefer the word obsolete to characterize cognitive biases that are no longer functional for our preservation.
Three methods of attaining change
Say that you want to change some social or political institution: the educational system, the monetary system, research on AGI safety, or what not. When trying to reach this goal, you may use one of the following broad strategies (or some combination of them):
1) You may directly try to lobby (i.e. influence) politicians to implement this change, or try to influence voters to vote for parties that promise to implement these changes.
2) You may try to build an alternative system and hope that it eventually becomes so popular so that it replaces the existing system.
3) You may try to develop tools that a) appeal to users of existing systems and b) whose widespread use is bound to change those existing systems.
Let me give some examples of what I mean. Trying to persuade politicians to replace conventional currencies with a private currency or, for that matter, starting a pro-Bitcoin party, falls under 1), whereas starting a private currency and hoping that it spreads falls under 2). (This post was inspired by a great comment by Gunnar Zarncke on precisely this topic. I take it that he was there talking of strategy 2.) Similarly, trying to lobby politicians to reform academia falls under 1), whereas starting new research institutions which use new and hopefully more effective methods falls under 2). I take it that this is what, e.g., Leverage Research is trying to do, in part. Similarly, libertarians who vote for Ron Paul are taking the first course, while at least one possible motivation for the Seasteading Institute is to construct an alternative system that proves to be more efficient than existing governments.
Efficient Voting Advice Applications (VAAs), which advise you how to vote on the basis of your views on different policy matters, can be an example of 3) (they are discussed here). Suppose that voters started to use them on a grand scale. This could potentially force politicians to adhere very closely to the views of the voters on each particular issue, since a politician who failed to do so would stand little chance of winning. This may or may not be a good thing, but the point is that it would be a change caused not by lobbying politicians or by building an alternative system, but simply by constructing a tool whose widespread use could change the existing system.
Another similar tool is reputation or user-review systems. Suppose that you're dissatisfied with the general standards of some institution: say, university education or medical care. You may try to raise those standards by lobbying politicians to implement new regulations intended to ensure quality (1), or by starting your own, superior universities or hospitals (2), hoping that others will follow. Another method, however, is to create a reliable reputation/review system which, if it became widely used, would guide students and patients to the best universities and hospitals, thereby incentivizing institutions to improve.
Now of course, when you're trying to get people to use such review systems, you are, in effect, building an evaluation system that competes with existing systems (e.g. the Guardian university ranking), so on one level you are using the second strategy. Your ultimate goal is, however, to create better universities, to which a better evaluation system is just a means (a tool). Hence, in my terms, you're following the third strategy here.
Strategy 1) is of course a "statist" one, since what you're doing here is that you're trying to get the government to change the institution in question for you. Strategies 2) and 3) are, in contrast, both "non-statist", since when you use them you're not directly trying to implement the change through the political system. Hence libertarians and other anti-statists should prefer them.
My hunch is that when people are trying to change things, many of them unthinkingly go for 1), even regarding issues where it is unlikely that they will succeed that way. (For instance, it seems to me that advocates of direct democracy who try to persuade voters to vote for direct-democratic parties are unlikely to succeed, but that widespread use of VAAs might get us considerably closer to their ideal, and that they therefore should opt for the third strategy.) A plausible explanation of this is availability bias: our tendency to focus on what we most often see around us. Attempts to change social institutions through politics get a lot of attention, which makes people think of this strategy first. Even though this strategy is often efficient, I'd guess it is, for this reason, generally overused, and that people sometimes instead should go for 2) or 3). (Possibly, Europeans have an even stronger bias in favour of this strategy than Americans.)
I also suspect, though, that people go for 2) a bit too often relative to 3). I think that people find it appealing, for its own sake, to create an entirely alternative structure. If you're a perfectionist, it might be satisfying to see what you consider "the perfect institution", even if it is very small and has little impact on society. Also, sometimes small groups of devotees flock to these alternatives, and a strong group identity is thereby created. Moreover, I think that availability bias may play a role here as well. Even though this sort of strategy gets less attention than lobbying, most people know what it is. It is quite clear what it means to do something like this, and being part of such a project therefore gives you a clear identity. For these reasons, I think that we might sometimes fool ourselves into believing that these alternative structures are more likely to be successful than they actually are.
Conversely, people might be biased against the third strategy because it's less obvious. Also, it has perhaps something vaguely manipulative about it, which might bias idealistic people against it. What you're typically trying to do is to get people to use a tool (say VAAs), a side-effect of which is the change you wish to attain (in this case, correspondence between voters' views and actual policies). I don't think that this kind of manipulation is necessarily vicious (though it would need to be discussed on a case-by-case basis), but the point is that people tend to think that it is. Also, even those who don't think that it is manipulative in an unethical sense might still think that it is somehow "unheroic". Starting your own environmental party or creating your own artificial libertarian island clearly has something heroic about it, but developing efficient VAAs which, as a side-effect, change the political landscape does not.
I'd thus argue that people should start looking more closely at the third strategy. One group that does use a similar strategy is, of course, for-profit companies. They try to analyze which products would appeal to people, and in so doing carefully consider how existing institutions shape people's preferences. For instance, companies like Uber, AirBnB and LinkedIn have been successful because they realized that, given the structure of the taxi, hotel and recruitment businesses, their products would be appealing.
Of course, these companies' primary goal, profit, is very different from the political goals I'm talking about here. At the same time, I think it is useful to compare the two cases. Generally, when we're trying to attain political change, we're not "actually trying" (in CFAR's terminology) as hard as we do when we're trying to maximize profit. It is very easy to fall into a mode where you're focusing on making symbolic gestures (which express your identity) rather than on actually changing things in politics. (This is, in effect, what many traditional charities are doing, if the EA movement is right.)
Instead, we should think as hard as profit-maximizing companies do about which new tools are likely to catch on. Any kind of tool could in principle be used, but the most obvious candidates are various kinds of social media and other internet-based tools (such as those mentioned in this post). Technical progress gives us enormous opportunities to construct new tools that could re-shape people's behaviour in ways that would impact existing social and political institutions on a large scale.
Developing such tools is not easy. Even very successful companies fail again and again to predict which new products will appeal to people. Not least, you need a profound understanding of human psychology in order to succeed. That said, political organizations have certain advantages vis-à-vis for-profit companies. More often than not, they can develop ideas publicly, whereas for-profit companies often have to keep theirs secret until the product is launched. This facilitates wisdom-of-the-crowd reasoning, where many different kinds of people come up with solutions together. Such methods can, in my opinion, be very powerful.
Any input regarding, e.g. the taxonomy of methods, my speculations about biases, and, in particular, examples of institution changing tools are welcome. I'm also interested in comments on efficient methods for coming up with useful tools (e.g. tests of them). Finally, if anything's unclear I'd be happy to provide clarifications (it's a very complex topic).
LW Australia's online hangout results, (short stories about cognitive biases)
Cognitive Biases due to a Narcissistic Parent, Illustrated by HPMOR Quotations
A pattern of cognitive biases not yet discussed here are the biases due to having a narcissistic parent who seeks validation through the child’s academic achievements.
HPMOR clearly shows these biases: Harry's mother is narcissistic, impressed by education, and not particularly smart, and Harry does not realize how this affects his thinking.
Here is my evidence:
The Sorting Hat says Harry is driven by "the fear of losing your fantasy of greatness, of disappointing the people who believe in you" (Ch. 77). Psychology texts say that this fear is what children of a narcissistic parent usually feel. The child feels perpetually ignored because the narcissistic parent seeks validation from the child's accomplishments but refuses to actually listen to the child, spurring the child to ever greater heights of intellectual achievement.
The text supports this view: “Always Harry had been encouraged to study whatever caught his attention, bought all the books that caught his fancy...given anything reasonable that he wanted, except, maybe, the slightest shred of respect” and “Petunia wrung her hands. She seemed to be on the verge of tears. "My love, I know I can't win arguments with you, but please, you have to trust me on this … I want my husband to, to listen to his wife who loves him, and trust her just this once - " (Ch. 1) describes a narcissistic, anxiously needy mother, an avoidant father, and a son whose parents provide for his physical needs but neglect his need for respect (ego). “If you conceived of yourself as a Good Parent, you would do it. But take a ten-year-old seriously? Hardly.” (Ch. 1)
Harry goes Dark when the connection to his family is threatened. For example: "The black rage began to drain away, as it dawned on him that...his family wasn't in danger [of legal separation]" (ch. 5) indicates that Harry went Dark even though no one’s life was threatened. The cost of Harry’s Dark Side is becoming an adult at a young age: Harry says, “Every time I call on it... it uses up my childhood.” (Ch. 91). This is consistent with spending nearly all free time studying (instead of wasting time with friends) to impress Harry’s mother.
Typically, children of narcissistic parents inherit either narcissistic or people-pleasing traits. I predicted that if my theory is correct then Harry would have a narcissistic personality. To test this, I found a list of personality traits that describe a narcissist (by Googling “children of narcissistic parents” and clicking the first link), and compared with Harry’s personality as described in HPMOR. I got a 100% match. Questions and answers are as follows:
1. Grandiose sense of self-importance? Check. Harry plans to “optimize” the entire Universe, expects to “do something really revolutionary and important” (Ch. 7), and is trying to “hurry up and become God” (Ch. 27).
2. Obsessed with himself? Check. He appears to only care about people who are smarter or more powerful than him -- people who can help him. He also has contempt for most students and their interests (Quidditch, etc.)
3. Goals are selfish? Check. Harry claims to want to save everyone, but he believes the best way to help others is to increase his own power most quickly. I address two possible objections below:
Harry’s involvement in the Azkaban breakout was selfish, because Harry could not risk losing Quirrell’s friendship: “ It was a bond that went beyond anything of debts owed, or even anything of personal liking, that the two of them were alone in the wizarding world” (Ch. 51). This, again, mirrors a child’s relationship with a narcissistic mother: the child cannot risk losing the mother’s protection. Harry also had selfish reasons for hearing Quirrell’s plan: “There was no advantage to be gained from not hearing it. And if it did reveal something wrong with Professor Quirrell, then it was very much to Harry's advantage to know it, even if he had promised not to tell anyone.” (Ch. 49)
Harry’s efforts to save Hermione are also selfish because Harry sees Hermione in the same way he sees his mother -- weak in many ways and bound by emotions and convention, but someone Harry must impress and protect. Harry’s statement that “it’s disrespectful to her, to think someone could only like her in that way” (ch. 91) makes sense because Harry is disgusted by the Oedipal implications. If Harry’s mother was not narcissistic, then Harry would not have worked so hard to impress Hermione and would have been less disgusted by the thought of being sexually attracted to her.
4. Troubles with normal relationships? Check. Harry is playing high-stakes mind games with the people he is closest to (Quirrell, Draco, Hermione, Dumbledore), which is not normal friend behavior. Harry has contempt for nearly everyone else.
5. Becomes furious if criticized? Check. When Snape mocked Harry in Potions class, Harry tried to destroy Snape’s career. Quirrell explained, “When it looked like you might lose, you unsheathed your claws, heedless of the danger. You escalated, and then you escalated again” (Ch. 19).
6. Has fantasies of unbound success, power, intelligence, etc.? Check. Harry wants to conquer the entire Universe with the power of his intelligence, and has plans for how to fill an eternity, including to “...meet up with everyone else who was born on Old Earth to watch the Sun finally go out…” (Ch. 39).
7. Believes that he is special and should only be around other high-status people? Check. Harry avoids average students when possible, and certainly does not hang out with them for fun. “Note to self: The 75th percentile of Hogwarts students a.k.a. Ravenclaw House is not the world's most exclusive program for gifted children” (Ch. 12).
Harry’s association with the (presumably non-special) students in his army is not an exception because minimal text is devoted to Harry instructing them, while much text explains how powerful and high-status the students in the army have become. For Harry, it appears that the army is a tool to use and an opportunity to show off, not an opportunity to give back and help friends improve their skills for their own sake.
8. Requires extreme admiration for everything? Check. Harry takes anything less than admiration for his brilliance as an insult, and responds by striving for new levels of intellectual achievement and arrogance, until the others recognize his dominance. “And I bit a math teacher when she wouldn't accept my dominance” (Ch. 20). Quirrell’s lesson on how to lose described how to avoid making powerful enemies, not how to empathize and care for others -- the insatiable need for admiration is merely delayed and repressed, not corrected.
9. Feels entitled - has unreasonable expectations of special treatment? Check. Harry requires subservience from the school administration, and special magic items such as the time-turner. “McGonagall said, "but I do have a very special something else to give you. I see that I have greatly wronged you in my thoughts, Mr. Potter...this is an item which is ordinarily lent only to children who have already shown themselves to be highly responsible” (Ch. 14).
10. Takes advantage of others to further his own need? Check. Harry justifies his actions toward Draco by saying "I only used you in ways that made you stronger. That's what it means to be used by a friend." (Ch. 97)
11. Does not recognize the feelings of others? Check. One example is Harry not realizing how Neville felt about the prank on the train to Hogwarts. Another is Harry’s remarkably clueless question to Hermione, “Er, can I take it from this that you have been through puberty?" (Ch. 87) Harry has not learned empathy yet: “Harry flinched a little himself. Somewhere along the line he needed to pick up the knack of not phrasing things to hit as hard as he possibly could” (Ch. 86).
12. Envious or believes they are envied? Check. Quirrell said to Harry, “You have everything now that I wanted then. All that I know of human nature says that I should hate you. And yet I do not. It is a very strange thing.” (Ch. 74)
13. Behaves arrogantly? Check. “Minerva's body swayed with the force of that blow, with the sheer raw lese majeste. Even Severus looked shocked.” (Ch. 19) I can’t think offhand of a single instance when Harry is not arrogant.
Therefore, I conclude that Harry and Harry’s mother are both narcissistic. If you want further reading on this topic, look up "The Drama of the Gifted Child" by Dr. Alice Miller (Google for the .pdf) for a more detailed description of a child’s typical relationship with a narcissistic parent.
I am sharing this because it reveals a pattern of cognitive biases that many people (like me) who enjoyed HPMOR, and their parents, probably have. Specifically, there is a strong bias toward either narcissistic or people-pleasing habits, and a difficulty with recognizing and following one’s own desires (because the Universe, unlike a parent, never tells people what to do). One possible reason for studying science is to defend against a parent’s emotional neediness and refusal to provide ego-validation by building an impenetrable edifice of logical truth. Unfortunately, identifying the parent’s cognitive biases does not stop their criticism. A more pleasant strategy is to recognize the dynamic, mourn the warping of childhood by the controlling parenting, set appropriate boundaries in the future, and draw validation from following one’s own goals instead of an internalized parent’s goals.
The Rationality Wars
Ever since Tversky and Kahneman started to gather evidence purporting to show that humans suffer from a large number of cognitive biases, other psychologists and philosophers have criticized these findings. For instance, philosopher L. J. Cohen argued in the 80's that there was something conceptually incoherent with the notion that most adults are irrational (with respect to a certain problem). By some sort of Wittgensteinian logic, he thought that the majority's way of reasoning is by definition right. (Not a high point in the history of analytic philosophy, in my view.) See chapter 8 of this book (where Gigerenzer, below, is also discussed).
Another attempt to resurrect human rationality is due to Gerd Gigerenzer and other psychologists. They have a) shown that if you tweak some of the heuristics-and-biases (i.e. the research program led by Tversky and Kahneman) experiments but a little - for instance by expressing probabilities in terms of frequencies - people make far fewer mistakes, and b) argued, on the back of this, that the heuristics we use are in many situations good (and fast and frugal) rules of thumb (which explains why they are evolutionarily adaptive). Regarding this, I don't think that Tversky and Kahneman ever doubted that the heuristics we use are quite useful in many situations. Their point was rather that there are lots of naturally occurring set-ups which fool our fast and frugal heuristics. Gigerenzer's findings are not completely uninteresting - it seems to me he does nuance the thesis of massive irrationality a bit - but his claims to the effect that these heuristics are rational in a strong sense are wildly overblown, in my opinion. The Gigerenzer vs. Tversky/Kahneman debates are well discussed in this article (although I think the authors are too kind to Gigerenzer).
A strong argument against attempts to save human rationality is the argument from individual differences, championed by Keith Stanovich. He argues that the fact that some intelligent subjects consistently avoid falling prey to the Wason selection task, the conjunction fallacy, and other fallacies indicates that the answers psychologists have traditionally seen as normatively correct really are correct - and hence that there is something misguided about the attempts to explain them away.
Hence I side with Tversky and Kahneman in this debate. Let me just mention one interesting and possibly successful method for disputing some supposed biases: to argue that people have other kinds of evidence than the standard interpretation assumes, and that given this new interpretation of the evidence, the supposed bias in question is in fact not a bias at all. For instance, it has been suggested that the "false consensus effect" can be re-interpreted in this way:
The False Consensus Effect
Bias description: People tend to imagine that everyone responds the way they do. They tend to see their own behavior as typical. The tendency to exaggerate how common one’s opinions and behavior are is called the false consensus effect. For example, in one study, subjects were asked to walk around on campus for 30 minutes, wearing a sign board that said "Repent!". Those who agreed to wear the sign estimated that on average 63.5% of their fellow students would also agree, while those who disagreed estimated 23.3% on average.
Counterclaim (Dawes & Mulford, 1996): The correctness of reasoning is not estimated on the basis of whether or not one arrives at the correct result. Instead, we look at whether people reach reasonable conclusions given the data they have. Suppose we ask people to estimate whether an urn contains more blue balls or red balls, after allowing them to draw one ball. If one person first draws a red ball, and another person draws a blue ball, then we should expect them to give different estimates. In the absence of other data, you should treat your own preferences as evidence for the preferences of others. Although the actual mean for people willing to carry a sign saying "Repent!" probably lies somewhere in between the estimates given, these estimates are quite close to the one-third and two-thirds estimates that would arise from a Bayesian analysis with a uniform prior distribution of belief. A study by the authors suggested that people do actually give their own opinion roughly the right amount of weight.
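The one-third and two-thirds figures in the quote follow from Laplace's rule of succession: with a uniform Beta(1,1) prior over the agreement rate, a single observation (your own choice) shifts the posterior mean to (successes + 1) / (trials + 2). A minimal sketch of this calculation (the function name and setup are my own illustration, not from Dawes & Mulford):

```python
from fractions import Fraction

def posterior_mean(agree_count, total, prior_a=1, prior_b=1):
    """Posterior mean of the population agreement rate under a Beta prior.

    With a uniform Beta(1, 1) prior this reduces to Laplace's rule of
    succession: (agree_count + 1) / (total + 2).
    """
    return Fraction(agree_count + prior_a, total + prior_a + prior_b)

# A subject who agreed to wear the "Repent!" sign has observed one
# "agree" (their own choice); a refuser has observed one "disagree".
agreer_estimate = posterior_mean(1, 1)   # 2/3, close to the observed 63.5%
refuser_estimate = posterior_mean(0, 1)  # 1/3, not far from the observed 23.3%

print(agreer_estimate, refuser_estimate)  # prints: 2/3 1/3
```

On this reading, the gap between the two groups' estimates is roughly what a rational Bayesian updater with no other data would produce, not evidence of a bias.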
(The quote is from an excellent Less Wrong article on this topic due to Kaj Sotala. See also this post by him, this by Andy McKenzie, this by Stuart Armstrong and this by lukeprog on this topic. I'm sure there are more that I've missed.)
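The one-third and two-thirds figures in the counterclaim fall straight out of Laplace's rule of succession: with a uniform Beta(1, 1) prior over the agreement rate, observing your own choice is one data point, and the posterior mean is (successes + 1) / (n + 2). A minimal sketch of that calculation (my own illustration, not from the original study):

```python
from fractions import Fraction

def posterior_mean_agree_rate(own_choice_agrees, prior_a=1, prior_b=1):
    """Posterior mean of P(a random student agrees) after observing one
    data point (your own choice), starting from a Beta(prior_a, prior_b)
    prior. Beta(1, 1) is the uniform prior; the update is Laplace's rule
    of succession: (successes + a) / (n + a + b)."""
    successes = 1 if own_choice_agrees else 0
    return Fraction(successes + prior_a, 1 + prior_a + prior_b)

# A subject who agreed to wear the sign should estimate 2/3 of others
# would agree; a subject who refused should estimate 1/3.
print(posterior_mean_agree_rate(True))   # 2/3
print(posterior_mean_agree_rate(False))  # 1/3
```

The subjects' actual estimates (63.5% and 23.3%) sit close to these Bayesian values, which is the basis of the Dawes & Mulford reinterpretation.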
It strikes me that the notion that people are "massively flawed" is something of an intellectual cornerstone of the Less Wrong community (e.g. note the names "Less Wrong" and "Overcoming Bias"). In the light of this it would be interesting to hear what people have to say about the rationality wars. Do you all agree that people are massively flawed?
Let me make two final notes to keep in mind when discussing these issues. Firstly, even though the heuristics and biases program is sometimes seen as pessimistic, one could turn the tables around: if they're right, we should be able to improve massively (even though Kahneman himself seems to think that that's hard to do in practice). I take it that CFAR and lots of LessWrongers who attempt to "refine their rationality" assume that this is the case. On the other hand, if Gigerenzer or Cohen are right, and we already are very rational, then it would seem that it is hard to do much better. So in a sense the latter are more pessimistic (and conservative) than the former.
Secondly, note that parts of the rationality wars seem to be merely verbal and revolve around how "rationality" is to be defined (tabooing this word is very often a good idea). The real question is not if the fast and frugal heuristics are in some sense rational, but whether there are other mental algorithms which are more reliable and effective, and whether it is plausible to assume that we could learn to use them on a large scale instead.
Some thoughts on relations between major ethical systems
On the recent LessWrong/CFAR Census Survey, I hit the following question:
Which of the following major ethical systems do you subscribe to:
1) Consequentialism
2) Deontology
3) Virtue Ethics
4) Other
To my own surprise, I couldn't come up with a clear answer. I certainly don't consistently apply one of these things across every decision I make in my life, and yet I consider myself at least mediocre on the scale of moral living, if not actually Neutral Good. So what is it I'm actually doing, and how can I behave more ethical-rationally?
Well, to analyze my own cognitive algorithms, I do think I can actually place these various codes of ethics in relation to each other. Basically, looked at behavioristically/algorithmically, they vary across how much predictive power I have, my knowledge of my own values, and what it is I'm actually trying to affect.
Consequentialism is the ethical algorithm I consider useful in situations of greatest predictive power and greatest knowledge of my own values. It is, so to speak, the ethical-algorithmic ideal. In such situations, the only drawback is that naive consequentialism fails to consider consequences on the person acting (i.e., me). Once I make that more virtue-ethical adjustment, consequentialism offers a complete ideal for ethical action over a complete spectrum of moral values for affecting both the universe and myself (but I repeat: I'm part of the universe).
However, in almost all real situations, I don't have perfect predictive knowledge -- not of the "external" universe and not of my own values. In these situations, I can, however, use my incomplete and uncertain knowledge to find acceptable heuristics that I can expect to yield roughly monotonic behavior: follow those rules, and my actions will generally have positive effects. This kind of thinking quickly yields up recognizable, regular moral commandments like, "You will not murder" or "You will not charge interest above this-or-that amount on loans". Yes, of course we can come up with corner-case exceptions to those rules, and we can also elaborate logically on the rules to arrive at more detailed rules covering more circumstances. However, by the time we've fully elaborated out the basic commandments into a complete, obsessively-compulsively detailed legal code (oh hello Talmud), we've already covered most of the major general cases of moral action. We can now invent a criterion for how and when to transition from one level of ethical code to the one below it: our deontological heuristics should be detailed enough to handle any case where we lack the information (about consequences and values) to resort to consequentialism.
At first thought, virtue ethics seems like an even higher-level heuristic than deontological ethics. The problem is that, unlike deontological and consequentialist ethics, it doesn't output courses of action to take, but instead short- and long-term states of mind or character that can be considered virtuous. So we don't have the same thing here; it's not a higher-level heuristic but a seemingly completely different form of ethics. I do think we can integrate it, however: virtue ethics just consists of a set of moral values over one's own character. "What kind of person do I think is a good person?" might, by default, be a tautological question under strict consequentialism or deontology. However, when we take account of the imperfect nature of real people (we are part of the universe, after all), we can observe that virtue ethics serves as a convenient guide to heuristics for becoming the sort of person who can be relied upon to take right actions when moral issues present themselves. Rather than simply saying, "Do the right thing no matter what" (an instruction that simply won't drive real human beings to actually do the right thing), virtue ethics encourages us to cultivate virtues, moral cognitive biases towards at least a deontological notion of right action.
It's also possible we might be able to separate virtue ethics into both heuristics over our own character, and actual values over our own character. These two approaches to virtue ethics should then converge in the presence of perfect information: if I knew myself utterly, my heuristics for my own character would exactly match my values over my own character.
This is my first effort at actually blogging on rationality subjects, so I'm hoping it's not covering something hashed and rehashed, over and over again, in places like the Sequences, to which I certainly can't claim full knowledge.
[Link] Cognitive biases about violence as a negotiating tactic
Max Abrahms, "The Credibility Paradox: Violence as a Double-Edged Sword in International Politics," International Studies Quarterly 2013.
Abstract: Implicit in the rationalist literature on bargaining over the last half-century is the political utility of violence. Given our anarchical international system populated with egoistic actors, violence is thought to promote concessions by lending credibility to their threats. From the vantage of bargaining theory, then, empirical research on terrorism poses a puzzle. For non-state actors, terrorism signals a credible threat in comparison to less extreme tactical alternatives. In recent years, however, a spate of studies across disciplines and methodologies has nonetheless found that neither escalating to terrorism nor with terrorism encourages government concessions. In fact, perpetrating terrorist acts reportedly lowers the likelihood of government compliance, particularly as the civilian casualties rise. The apparent tendency for this extreme form of violence to impede concessions challenges the external validity of bargaining theory, as traditionally understood. In this study, I propose and test an important psychological refinement to the standard rationalist narrative. Via an experiment on a national sample of adults, I find evidence of a newfound cognitive heuristic undermining the coercive logic of escalation enshrined in bargaining theory. Due to this oversight, mainstream bargaining theory overestimates the political utility of violence, particularly as an instrument of coercion.
I found this via Bruce Schneier's blog, which frequently features very valuable analysis clustered around societal and computer security.
Another way our brains betray us
This appeared in the news yesterday.
http://www.alternet.org/media/most-depressing-discovery-about-brain-ever?paging=off
It turns out that in the public realm, a lack of information isn’t the real problem. The hurdle is how our minds work, no matter how smart we think we are. We want to believe we’re rational, but reason turns out to be the ex post facto way we rationalize what our emotions already want to believe.
...
The bleakest finding was that the more advanced that people’s math skills were, the more likely it was that their political views, whether liberal or conservative, made them less able to solve the math problem. [...] what these studies of how our minds work suggest is that the political judgments we’ve already made are impervious to facts that contradict us.
...
Denial is business-as-usual for our brains. More and better facts don’t turn low-information voters into well-equipped citizens. It just makes them more committed to their misperceptions.
...
When there’s a conflict between partisan beliefs and plain evidence, it’s the beliefs that win. The power of emotion over reason isn’t a bug in our human operating systems, it’s a feature.
Critiques of the heuristics and biases tradition
The chapter on judgment under uncertainty in the (excellent) new Oxford Handbook of Cognitive Psychology has a handy little section on recent critiques of the "heuristics and biases" tradition. It also discusses problems with the somewhat-competing "fast and frugal heuristics" school of thought, but for now let me just quote the section on heuristics and biases (pp. 608-609):
The heuristics and biases program has been highly influential; however, some have argued that in recent years the influence, at least in psychology, has waned (McKenzie, 2005). This waning has been due in part to pointed critiques of the approach (e.g., Gigerenzer, 1996). This critique comprises two main arguments: (1) that by focusing mainly on coherence standards [e.g. their rationality given the subject's other beliefs, as contrasted with correspondence standards having to do with the real-world accuracy of a subject's beliefs] the approach ignores the role played by the environment or the context in which a judgment is made; and (2) that the explanations of phenomena via one-word labels such as availability, anchoring, and representativeness are vague, insufficient, and say nothing about the processes underlying judgment (see Kahneman, 2003; Kahneman & Tversky, 1996 for responses to this critique).
The accuracy of some of the heuristics proposed by Tversky and Kahneman can be compared to correspondence criteria (availability and anchoring). Thus, arguing that the tradition only uses the “narrow norms” (Gigerenzer, 1996) of coherence criteria is not strictly accurate (cf. Dunwoody, 2009). Nonetheless, responses in famous examples like the Linda problem can be reinterpreted as sensible rather than erroneous if one uses conversational or pragmatic norms rather than those derived from probability theory (Hilton, 1995). For example, Hertwig, Benz and Krauss (2008) asked participants which of the following two statements is more probable:
[X] The percentage of adolescent smokers in Germany decreases at least 15% from current levels by September 1, 2003.
[X&Y] The tobacco tax in Germany is increased by 5 cents per cigarette and the percentage of adolescent smokers in Germany decreases at least 15% from current levels by September 1, 2003.
According to the conjunction rule, [X&Y cannot be more probable than X] and yet the majority of participants ranked the statements in that order. However, when subsequently asked to rank order four statements in order of how well each one described their understanding of X&Y, there was an overwhelming tendency to rank statements like “X and therefore Y” or “X and X is the cause for Y” higher than the simple conjunction “X and Y.” Moreover, the minority of participants who did not commit the conjunction fallacy in the first judgment showed internal coherence by ranking “X and Y” as best describing their understanding in the second judgment. These results suggest that people adopt a causal understanding of the statements, in essence ranking the probability of X given Y as more probable than X occurring alone. If so, then arguably the conjunction “error” is no longer incorrect. (See Moro, 2009 for extensive discussion of the reasons underlying the conjunction fallacy, including why “misunderstanding” cannot explain all instances of the fallacy.)
The “vagueness” argument can be illustrated by considering two related phenomena: the gambler’s fallacy and the hot-hand (Gigerenzer & Brighton, 2009). The gambler’s fallacy is the tendency for people to predict the opposite outcome after a run of the same outcome (e.g., predicting heads after a run of tails when flipping a fair coin); the hot-hand, in contrast, is the tendency to predict a run will continue (e.g., a player making a shot in basketball after a succession of baskets; Gilovich, Vallone, & Tversky, 1985). Ayton and Fischer (2004) pointed out that although these two behaviors are opposite - ending or continuing runs - they have both been explained via the label “representativeness.” In both cases a faulty concept of randomness leads people to expect short sections of a sequence to be “representative” of their generating process. In the case of the coin, people believe (erroneously) that long runs should not occur, so the opposite outcome is predicted; for the player, the presence of long runs rules out a random process so a continuation is predicted (Gilovich et al., 1985). The “representativeness” explanation is therefore incomplete without specifying a priori which of the opposing prior expectations will result. More important, representativeness alone does not explain why people have the misconception that random sequences should exhibit local representativeness when in reality they do not (Ayton & Fischer, 2004).
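The reinterpretation in the quoted passage turns on a simple piece of probability theory: the conjunction rule P(X∧Y) ≤ P(X) always holds, but the conditional probability P(X|Y) can legitimately exceed P(X) when Y raises the chance of X. A toy check with made-up numbers (my own illustration; the probabilities are arbitrary, chosen only to make Y causally relevant to X):

```python
# Y = "tobacco tax is increased"; X = "adolescent smoking falls 15%".
# All numbers below are invented for illustration.
p_y = 0.5             # P(tax increase)
p_x_given_y = 0.6     # P(smoking falls | tax increase)
p_x_given_not_y = 0.1 # P(smoking falls | no tax increase)

# Law of total probability and the definition of joint probability:
p_x = p_x_given_y * p_y + p_x_given_not_y * (1 - p_y)  # = 0.35
p_x_and_y = p_x_given_y * p_y                          # = 0.30

assert p_x_and_y <= p_x   # the conjunction rule always holds...
assert p_x_given_y > p_x  # ...but P(X|Y) may exceed P(X)
```

If participants read the conjunctive statement causally, as "X given Y", then ranking it above the bare X is consistent with the second inequality rather than a violation of the first.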
My thanks to MIRI intern Stephen Barnes for transcribing this text.
[Link] Is the Endowment Effect Real?
Under fairly weak assumptions, the most a standard rational economic agent is willing to pay for an item they don't own (WTP) and the least they're willing to accept in exchange for that item if they already own it (WTA) should be identical. In experiments with humans, psychologists and economists have repeatedly found WTP-WTA gaps, suggesting that humans aren't rational in at least this specific way. This has been interpreted as the endowment effect* and as evidence for prospect theory. According to prospect theory, people are loss averse. Roughly, this means that, given their current ownership set, people value not losing stuff more highly than gaining stuff. Thus once someone gains ownership of something, they suddenly value it much more highly. This "endowment effect"* on one's valuation of an item has been put forth as an explanation for the observed WTP-WTA gaps.
*Wikipedia confusingly defines the endowment effect as the gap itself, i.e. as the phenomenon to be explained instead of the explanation. I suspect this is a difference in terminology between economists and psychologists, where psychologists use the wiki definition and economists use the definition I give here. However, calling the WTP-WTA gap an "endowment effect" is a bit misleading because a priori the gap may not have anything to do with endowments at all.
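To see how loss aversion would produce a gap, here is a deliberately crude linearized model (my own sketch, not the full prospect-theory value function): each agent weights losses by a coefficient λ > 1, the buyer codes the payment as a loss, and the seller codes giving up the item as a loss. The λ = 2.25 figure is Tversky and Kahneman's commonly cited estimate; the item value is arbitrary.

```python
def wtp(item_value, lam):
    """Max buying price: the buyer gains the item but codes the payment
    as a loss, weighted by lam. Accept iff item_value - lam * price >= 0."""
    return item_value / lam

def wta(item_value, lam):
    """Min selling price: the owner codes giving up the item as a loss.
    Accept iff price - lam * item_value >= 0."""
    return lam * item_value

lam = 2.25  # Tversky & Kahneman's (1992) loss-aversion estimate
mug = 4.0   # 'consumption value' of a mug, arbitrary units

print(wtp(mug, lam))                  # ≈ 1.78
print(wta(mug, lam))                  # 9.0
print(wta(mug, lam) / wtp(mug, lam))  # λ² = 5.0625: a large WTP-WTA gap
```

Under this toy model a modest λ produces a several-fold gap. The Plott and Zeiler result discussed below is precisely that observed gaps need not reflect any such preference structure, since they can also be produced by subjects' misconceptions about the elicitation procedure.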
A paper (pdf) by Charlie Plott and Kathryn Zeiler investigates WTP-WTA gaps and it turns out that they may just be due to subjects not quite understanding the experimental protocols, particularly in the value elicitation process. Here's an important quote from their conclusion, but do read the paper for details:
The issue explored here is not whether a WTP-WTA gap can be observed. Clearly, the experiments of KKT and others show not only that gaps can be observed, but also that they are replicable. Instead, our interest lies in the interpretation of observed gaps. The primary conclusion derived from the data reported here is that observed WTP-WTA gaps do not reflect a fundamental feature of human preferences. That is, endowment effect theory does not seem to explain observed gaps. In addition, our results suggest that observed gaps should not be interpreted as support for prospect theory.
A review of the literature reveals that WTP-WTA gaps are not reliably observed across experimental designs. Given the nature of reported experimental designs, we posited that differences in experimental procedures might account for the differences across reported results. This conjecture prompted us to develop procedures to test for the robustness of the phenomenon. We conducted comparative experiments using procedures commonly used in studies that report observed gaps (i.e., KKT). We also employed a "revealed theory" methodology to identify procedures reported in the literature that provide clues about experimenter notions regarding subject misconceptions. We then conducted experiments that implemented the union of procedures used by experimentalists to control for subject misconceptions. The comparative experiments demonstrate that WTP-WTA gaps are indeed sensitive to experimental procedures. By implementing different procedures, the phenomenon can be turned on and off. When procedures used in studies that report the gap are employed, the gap is readily observed. When a full set of controls is implemented, the gap is not observed.
The fact that the gap can be turned on and off demonstrates that interpreting gaps as support for endowment effect theory is problematic. The mere observation of the phenomenon does not support loss aversion - a very special form of preferences in which gains are valued less than losses. That the phenomenon can be turned on and off while holding the good constant supports a strong rejection of the claim that WTP-WTA gaps support a particular theory of preferences posited by prospect theory. Loss aversion might in some sense characterize preferences, but such a theory most likely does not explain observed WTP-WTA gaps. Exactly what accounts for observed WTP-WTA gaps? The thesis of this paper is that observed gaps are symptomatic of subjects' misconceptions about the nature of the experimental task. The differences reported in the literature reflect differences in experimental controls for misconceptions as opposed to differences in the nature of the commodity (e.g., candy, money, mugs, lotteries, etc.) under study.
Think Like a Supervillain
See also: Everything I Needed To Know About Life, I Learned From Supervillains
Mr. Malfoy would hardly shrink from talk of ordinary murder, but even he was shocked - yes you were Mr. Malfoy, I was watching your face - when Mr. Potter described how to use his classmates' bodies as raw material. There are censors inside your mind which make you flinch away from thoughts like that. Mr. Potter thinks purely of killing the enemy, he will grasp at any means to do so, he does not flinch, his censors are off.
A while back, I claimed the Less Wrong username Quirinus Quirrell, and started hosting a long-running, approximate simulation of him in my brain. I have mostly used the account trivially - to play around with crypto-novelties, say mildly offensive things I wouldn't otherwise, and poke fun at Clippy. Several times I have doubted the wisdom of hosting such a simulation. Quirrell's values are not my own, and the plans that he generates (which I have never followed) are mostly bad when viewed in terms of my values. However, I have chosen to keep this occasional alter-identity, because he sees things that would otherwise be invisible to me.
I was once asked whether I would rather be a superhero or a supervillain, and I probably shouldn't tell you how little time it took for me to answer "supervillain."
Being a superhero sounds awful, at least if you intend to keep being recognized as a superhero. Superheroes are bound by the chains of public opinion. A superhero can only do what people generally agree is good for superheroes to do. If you stray too far off the beaten path in search of how best to use your superpowers to actually save the world, you could easily end up doing things that look, at first glance, somewhat to incredibly evil. And if people are going to turn against you once you start actually optimizing, you might as well just be a supervillain to begin with. They look like they're having more fun anyway.
You probably won't get the chance to decide between being a superhero or a supervillain, but you do get the chance to decide what kind of person you think of yourself as, and I think you should think of yourself more as a supervillain than as a superhero. Why?
In the same way that being a superhero limits what you can do, thinking of yourself as a superhero limits what you can think. And if you want to save the world, you can't afford to limit what you can think. Humanity faces many difficult problems, and the space of possible solutions to any one of these problems is large. If you have censors in your mind that are preventing you from looking at parts of this space because some of your moral intuitions don't like them ("that's not the kind of thing a superhero would do!"), you're crippling your ability to search for solutions to problems. For example, your moral intuitions are likely to flinch away from solutions to problems that involve you causing bad things to happen but be okay with solutions to problems that involve you failing to prevent bad things from happening (think of the trolley problem, or Batman's policy of not killing his enemies).
Edit (2/19): But thinking of yourself as a supervillain has the opposite effect. It's easier not to flinch at certain kinds of ideas, which now come more easily to mind and may not have otherwise occurred to you. For example, on Facebook, Eliezer recently mentioned a thread where people were posting examples of things that they valued at a billion dollars or more, such as their cats. With a supervillain module running in the background, I noticed and pointed out that this constituted a thread where people publicly described how they could be ransomed. I can't exactly test this, but I don't think this kind of idea would have occurred to me before I installed the supervillain module. (This is a tame example. I won't give less tame examples for obvious reasons.)
There are many things you can't say, but you don't have to say everything you think. Until someone discovers a technique for reliably reading human minds, think whatever thoughts best help you accomplish your goals without worrying about any moral labels they may or may not, upon reflection, ultimately warrant. Moral labels are for a later step in the decision process than the part where you generate ideas.
LW anchoring experiment: maybe
I ran an informal experiment testing whether LessWrong karma scores are susceptible to a form of anchoring based on the first comment posted. A medium-large effect size is found, although the data does not fit the assumed normal distribution and the more sophisticated analysis is equivocal, so there may or may not be an anchoring effect.
Full writeup on gwern.net at http://www.gwern.net/Anchoring
Noisy Reasoners
One of the more interesting papers at this year's AGI-12 conference was Fintan Costello's Noisy Reasoners. I think it will be of interest to Less Wrong:
This paper examines reasoning under uncertainty in the case where the AI reasoning mechanism is itself subject to random error or noise in its own processes. The main result is a demonstration that systematic, directed biases naturally arise if there is random noise in a reasoning process that follows the normative rules of probability theory. A number of reliable errors in human reasoning under uncertainty can be explained as the consequence of these systematic biases due to noise. Since AI systems are subject to noise, we should expect to see the same biases and errors in AI reasoning systems based on probability theory.
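The core mechanism is easy to demonstrate: even zero-mean noise on a probability estimate becomes a directed bias once estimates are constrained to stay in [0, 1], because clipping truncates the noise asymmetrically near the endpoints. A quick simulation sketch (my own illustration of the mechanism, not Costello's actual model):

```python
import random

def noisy_estimate(p, noise=0.1, trials=100_000, seed=0):
    """Mean of a probability estimate perturbed by symmetric, zero-mean
    uniform noise and clipped to [0, 1] so it remains a valid probability."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        total += min(1.0, max(0.0, p + rng.uniform(-noise, noise)))
    return total / trials

# The noise averages to zero, yet the clipping makes the resulting bias
# systematic and directed: extreme probabilities get pulled toward 0.5.
print(noisy_estimate(0.95))  # ≈ 0.944, biased low
print(noisy_estimate(0.05))  # ≈ 0.056, biased high
print(noisy_estimate(0.50))  # ≈ 0.500, unbiased in the interior
```

This reproduces, in miniature, the paper's point that conservatism-like errors can arise from random noise in an otherwise normatively correct probabilistic reasoner.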
Framing a problem in a foreign language seems to reduce decision biases
The researchers aren't entirely sure why speaking in a less familiar tongue makes people more "rational", in the sense of not being affected by framing effects or loss aversion. But they think it may have to do with creating psychological distance, encouraging systematic rather than automatic thinking, and with reducing the emotional impact of decisions. This would certainly fit with past research that's shown the emotional impact of swear words, expressions of love and adverts is diminished when they're presented in a less familiar language.
Paywalled article (can someone with access throw a PDF up on dropbox or something?): http://pss.sagepub.com/content/early/2012/04/18/0956797611432178
Blog summary: http://bps-research-digest.blogspot.co.uk/2012/06/we-think-more-rationally-in-foreign.html
[Video] Presentation on metacognition contains good intro to basic LW ideas
I attended a talk yesterday given under the auspices of the Ottawa Skeptics on the subject of "metacognition" or thinking about thinking -- basically, it was about core rationality concepts. It was designed to appeal to a broad group of lay people interested in science and consisted of a number of examples drawn from pop-sci books such as Thinking, Fast and Slow and Predictably Irrational. (Also mentioned: straw vulcans as described by CFAR's own Julia Galef.) If people who aren't familiar with LW ask you what LW is about, I'd strongly recommend pointing them to this video.
Here's the link.
'Thinking, Fast and Slow' Chapter Summaries / Notes [link]
I recently read Kahneman's 'Thinking Fast and Slow' (actually listened to the audiobook) and I wanted to find a summary of the experiments he describes and I stumbled upon this: http://sivers.org/book/ThinkingFastAndSlow. It has a summary of the interesting/important points of each chapter. Most of the statements seem to be direct quotes from the book, so if you have it in an electronic format (it can easily be obtained from uh, various sources) you can search for those quotes and find the context.
Bonus: Notes from Dan Ariely's Predictably Irrational and also many other books.
Utopian hope versus reality
I've seen an interesting variety of utopian hopes expressed recently. Raemon's "Ritual" sequence of posts is working to affirm the viability of LW's rationalist-immortalist utopianism, not just in the midst of an indifferent universe, but in the midst of an indifferent society. Leverage Research turn out to be social-psychology utopians, who plan to achieve their world of optimality by unleashing the best in human nature. And Russian life-extension activist Maria Konovalenko just blogged about the difficulty of getting people to adopt anti-aging research as the top priority in life, even though it's so obvious to her that it should be.
This phenomenon of utopian hope - its nature, its causes, its consequences, whether it's ever realistic, whether it ever does any good - certainly deserves attention and analysis, because it affects, and even afflicts, a lot of people, on this site and far beyond. It's a vast topic, with many dimensions. All my examples above have a futurist tinge to them - an AI singularity, and a biotech society where rejuvenation is possible, are clearly futurist concepts; and even the idea of human culture being transformed for the better by new ideas about the mind, belongs within the same broad scientific-technological current of Utopia Achieved Through Progress. But if we look at all the manifestations of utopian hope in history, and not just at those which resemble our favorites, other major categories of utopia can be observed - utopia achieved by reaching back to the conditions of a Golden Age; utopia achieved in some other reality, like an afterlife.
The most familiar form of utopia these days is the ideological social utopia, to be achieved once the world is run properly, according to the principles of some political "-ism". This type of utopia can cut across the categories I have mentioned so far; utopian communism, for example, has both futurist and golden-age elements to its thinking. The new society is to be created via new political forms and new philosophies, but the result is a restoration of human solidarity and community that existed before hierarchy and property... The student of utopian thought must also take note of religion, which until technology has been the main avenue through which humans have pursued their most transcendental hopes, like not having to die.
But I'm not setting out to study utopian thought and utopian psychology out of a neutral scholarly interest. I have been a utopian myself and I still am, if utopianism includes belief in the possibility (though not the inevitability) of something much better. And of course, the utopias that I have taken seriously are futurist utopias, like the utopia where we do away with death, and thereby also do away with a lot of other social and psychological pathologies, which are presumed to arise from the crippling futility of the universal death sentence.
However, by now, I have also lived long enough to know that my own hopes were mistaken many times over; long enough to know that sometimes the mistake was in the ideas themselves, and not just the expectation that everyone else would adopt them; and long enough to understand something of the ordinary non-utopian psychology, whose main features I would nominate as reconciliation with work and with death. Everyone experiences the frustration of having to work for a living and the quiet horror of physiological decline, but hardly anyone imagines that there might be an alternative, or rejects such a lifecycle as overall more bad than it is good.
What is the relationship between ordinary psychology and utopian psychology? First, the serious utopians should recognize that they are an extreme minority. Not only has the whole of human history gone by without utopia ever managing to happen, but the majority of people who ever lived were not utopians in the existentially revolutionary sense of thinking that the intolerable yet perennial features of the human condition might be overthrown. The confrontation with the evil aspects of life must usually have proceeded more at an emotional level - for example, terror that something might be true, and horror at the realization that it is true; a growing sense that it is impossible to escape; resignation and defeat; and thereafter a permanently diminished vitality, often compensated by achievement in the spheres of work and family.
The utopian response is typically made possible only because one imagines that there is a specific alternative to this process; and so, as ideas about alternatives are invented and circulated, it becomes easier for people to end up on the track of utopian struggle with life, rather than the track of resignation, which is why we can have enough people to form social movements and fundamentalist religions, and not just isolated weirdos. There is a continuum between full radical utopianism and very watered-down psychological phenomena which hardly deserve that name, but still have something in common - for example, a person who lives an ordinary life but draws some sustenance from the possibility of an afterlife of unspecified nature, where things might be different, and where old wrongs might be righted - but nonetheless, I would claim that the historically dominant temperament in adult human experience has been resignation to hopelessness and helplessness in ultimate matters, and an absorption in affairs where some limited achievement is possible, but which in themselves can never satisfy the utopian impulse.
The new factor in our current situation is science and technology. Our modern history offers evidence that the world really can change fundamentally, and such further explosive possibilities as artificial intelligence and rejuvenation biotechnology are considered possible for good, tough-minded, empirical reasons, not just because they offer a convenient vehicle for our hopes.
Technological utopians often exhibit frustration that their pet technologies and their favorite dreams of existential emancipation aren't being massively prioritized by society, and they don't understand why other people don't just immediately embrace the dream when they first hear about it. (Or they develop painful psychological theories of why the human race is ignoring the great hope.) So let's ask, what are the attitudes towards alleged technological emancipation that a person might adopt?
One is the utopian attitude: the belief that here, finally, one of the perennial dreams of the human race can come true. Another is denial, sometimes founded on bitter experience of disappointment, which teaches that the wise thing to do is not to fool yourself when another new hope comes along and cheerfully asserts that this time really is different. Another is to accept the possibility but deny the utopian hope. I think this last attitude is the most important one to understand.
It is the one that precedent supports. History is full of new things coming to pass, but they have never yet led to utopia. So we might want to scrutinize our technological projections more closely, and see whether the utopian expectation is based on overlooking the downside. For example, let us contrast the idea of rejuvenation with the idea of immortality - not dying, ever. Taking someone who is 80 and making them biologically 20 is not the same thing as making them immortal. It just means they won't die of aging, and that when they do die, it will be in a way befitting someone 20 years old: in an accident, a suicide, or a crime. Incidentally, we should also note an element of psychological unrealism in the idea of never wanting to die. Forever is a long time; the whole history of the human race is about 10,000 years long. Just 10,000 years is enough to encompass all the difficulties and disappointments and permutations of outlook that have ever happened. Imagine taking the whole history of the human race into yourself, living through it personally. It's a lot to have endured.
It would be unfair to say that transhumanists as a rule are dominated by utopian thinking. Perhaps just as common is a sort of futurological bipolar disorder, in which the future looks like it will bring "utopia or oblivion", something really good or something really bad. The conservative wisdom of historical experience says that both these expectations are wrong; bad things can happen, even catastrophes, but life keeps going for someone - that is the precedent - and the expectation of total devastating extinction is just a plunge into depression as unrealistic as the utopian hope for a personal eternity; both extremes exhibiting an inflated sense of historical or cosmic self-importance. The end of you is not the end of the world, says this historical wisdom; imagining the end of the whole world is your overdramatic response to imagining the end of you - or the end of your particular civilization.
However, I think we do have some reason to suppose that this time around, the extremes are really possible. I won't go so far as to endorse the idea that (for example) intelligent life in the universe typically turns its home galaxy into one giant mass of computers; that really does look like a case of taking the concept and technology with which our current society is obsessed, and projecting it onto the cosmic unknown. But consider just the humbler ideas of transhumanity, posthumanity, and a genuine end to the human-dominated era on Earth, whether in extinction or in transformation. The real and verifiable developments of science and technology, and the further scientific and technological developments which they portend, are enough to justify such a radical, if somewhat nebulous, concept of the possible future. And again, while I won't simply endorse the view that of course we shall get to be as gods, and shall get to feel as good as gods might feel, it seems reasonable to suppose that there are possible futures which are genuinely and comprehensively better than anything that history has to offer - as well as futures that are just bizarrely altered, and futures which are empty and dead.
So that is my limited endorsement of utopianism: In principle, there might be a utopianism which is justified. But in practice, what we have are people getting high on hope, emerging fanaticisms, personal dysfunctionality in the present, all the things that come as no surprise to a cynical student of history. The one outcome that would be most surprising to a cynic is for a genuine utopia to arrive. I'm willing to say that this is possible, but I'll also say that almost any existing reference to a better world to come, and any psychological state or social movement which draws sublime happiness from the contemplation of an expected future, has something unrealistic about it.
In this regard, utopian hope is almost always an indicator of something wrong. It can just be naivete, especially in a young person. As I have mentioned, even non-utopian psychology inevitably has those terrible moments when it learns for the first time about the limits of life as we know it. If in your own life you start to enter that territory for the first time, without having been told from an early age that real life is fundamentally limited and frustrating, and perhaps with a few vague promises of hope, absorbed from diverse sources, to sustain you, then it's easy to see your hopes as, not utopian hopes, but simply a hope that life can be worth living. I think this is the experience of many young idealists in "environmental" and "social justice" movements; their culture has always implied to them that life should be a certain way, without also conveying to them that it has never once been that way in reality. The suffering of transhumanist idealists and other radical-futurist idealists, when they begin to run aground on the disjunction between their private subcultural expectations and those of the culture at large, has a lot in common with the suffering of young people whose ideals are more conventionally recognizable; and it is entirely conceivable that for some generation now coming up, rebellion against biological human limitations will be what rebellion against social limitations has been for preceding generations.
I should also mention, in passing, the option of a non-utopian transhumanism, something that is far more common than my discussion so far would suggest. This is the choice of people who expect, not utopia, but simply an open future. Many cryonicists would be like this. Sure, they expect the world of tomorrow to be a great place, good enough that they want to get there; but they don't think of it as an eternal paradise of wish-fulfilment that may or may not be achieved, depending on heroic actions in the present. This is simply the familiar non-utopian view that life is overall worth living, combined with the belief that life can now be lived for much longer periods; the future not as utopia, but as more history, history that hasn't happened yet, and which one might get to personally experience. If I wanted to start a movement in favor of rejuvenation and longevity, this is the outlook I would be promoting, not the idea that abolishing death will cure all evils (and not even the idea that death as such can be abolished; rejuvenation is not immortality, it's just more good life). In the spectrum of future possibilities, it's only the issue of artificial intelligence which lends some plausibility to extreme bipolar futurism, the idea that the future can be very good (by human standards) or very bad (by human standards), depending on what sort of utility functions govern the decision-making of transhuman intelligence.
That's all I have to say for now. It would be unrealistic to think we can completely avoid the pathologies associated with utopian hope, but perhaps we can moderate them, if we pay attention to the psychology involved.
[link] Anger as antidote to Confirmation Bias
The current research explores the effect of anger on hypothesis confirmation — the propensity to seek information that confirms rather than disconfirms one’s opinion. We argue that the moving against action tendency associated with anger leads angry individuals to seek out disconfirming evidence, attenuating the confirmation bias. We test this hypothesis in two studies of experimentally-primed anger and sadness on the selective exposure to hypothesis confirming and disconfirming information. In Study 1, participants in the angry condition were more likely to choose disconfirming information than those in the sad or neutral condition when given the opportunity to read about a controversial social issue. Study 2 measured participants’ opinions and information selection about the 2008 Presidential Election and the desire to ‘move against’ a person or object. Participants in the angry condition reported a greater tendency to oppose a person or object, and this tendency led them to select more disconfirming information.
Simple theory of IMDB bias
The IMDB top 250 list is dominated by old movies, which conflicts with my perception (shared by the majority of people, as far as I can tell) that new movies are far better than old movies (comparing either top with top or average with average).
I have a simple theory why IMDB is wrong:
- For new movies, a very wide population has seen it, many of them not fans of the genre. They vote on IMDB soon after watching.
- For old movies, only a narrow population of fans has seen it recently. The only people who vote on IMDB are those who've seen it recently (atypical fans) or who have particularly good memories of it (atypical fans again). People who watched an old movie ages ago but don't remember much about it are very unlikely to vote on IMDB.
- Therefore it's much more difficult for a new movie to get a good IMDB score than it is for an old movie.
- Therefore a new movie is likely much better than an old movie with the same IMDB score.
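The selection effect described above can be sketched as a toy simulation (all numbers here are invented assumptions for illustration, not actual IMDB data): every viewer forms a rating, but for the old movie only enthusiasts bother to vote, so its observed average is inflated above its true quality.

```python
import random

random.seed(0)

def observed_score(true_quality, vote_threshold, n_viewers=100_000):
    """Mean score among viewers who bother to vote.

    Each viewer's rating is the movie's true quality plus personal
    taste noise; only viewers whose rating exceeds vote_threshold
    (e.g. devoted fans of an old movie) cast a vote.
    """
    votes = []
    for _ in range(n_viewers):
        rating = random.gauss(true_quality, 1.5)
        if rating >= vote_threshold:
            votes.append(rating)
    return sum(votes) / len(votes)

# New movie: broad audience, essentially everyone votes (low threshold).
new_movie = observed_score(true_quality=7.0, vote_threshold=0.0)

# Old movie: lower true quality, but only enthusiasts vote.
old_movie = observed_score(true_quality=6.0, vote_threshold=7.0)

# Despite being worse, the old movie's observed score comes out higher,
# because its voters are a self-selected, atypically enthusiastic sample.
print(round(new_movie, 2))
print(round(old_movie, 2))
```

Under these made-up parameters, the old movie's observed average lands well above its true quality, while the new movie's average stays close to its true quality, which is exactly the asymmetry the argument predicts.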
You Are Not So Smart (Pop-Rationality Book)
Journalist David McRaney has very recently published a popular book on human rationality. The book, You Are Not So Smart, is currently the 3rd best selling book in Nonfiction/Philosophy on Amazon.com after less than a week on the market. (Eighth best selling book in Nonfiction/Education)
The tag-line of the project is: "A celebration of self-delusion." As such the book seems less an attempt at giving advice on how to act and decide, than an attempt to reveal, chapter by chapter, the folly of common sense.
Topics include: Hindsight Bias, Confirmation bias, The Sunk Cost Fallacy, Anchoring Effect, The Illusion of Transparency, The Just World Fallacy, Representativeness Heuristic, The Perils of Introspection, The Dunning-Kruger Effect, The Monty Hall Problem, The Bystander Effect, Placebo Buttons, Groupthink, Conformity, Social Loafing, Helplessness, Cults, Change Blindness, Self-Fulfilling Prophecies, Self Handicapping, Availability Heuristic, Self-Serving Bias, The Ultimatum Game, Inattentional Blindness.
These are topics we enjoy learning about, pride ourselves on knowing a lot about, and, we profess, would want more people to know about. A popular book on this subject is now out. This sounds like a good thing.
I will note that the blog features at least one direct quote from LessWrong.
We always know what we mean by our words, and so we expect others to know it too. Reading our own writing, the intended interpretation falls easily into place, guided by our knowledge of what we really meant. It’s hard to empathise with someone who must interpret blindly, guided only by the words.
- Eliezer Yudkowsky, from LessWrong.com
On one hand, You Are Not So Smart could be a boon to Eliezer's popular rationality book by priming the market. His writings on a given topic have rarely been described as redundant. On the other hand, it seems to me that this book closely covers a number of topics, in a style similar to the treatments published on this site and Overcoming Bias, which are themselves intended to be published in book form at a later date. I will try to refrain from speculation here.
Sample book chapters from YouAreNotSoSmart:
I'll save the rest of my review until I have actually read the book.
In the meantime I would like to know your thoughts on this project.
Weak supporting evidence can undermine belief
Article: Weak supporting evidence can undermine belief in an outcome
Defying logic, people given weak evidence can regard predictions supported by that evidence as less likely than if they aren’t given the evidence at all.
...
Consider the following statement: “Widespread use of hybrid and electric cars could reduce worldwide carbon emissions. One bill that has passed the Senate provides a $250 tax credit for purchasing a hybrid or electric car. How likely is it that at least one-fifth of the U.S. car fleet will be hybrid or electric in 2025?”
That middle sentence is the weak evidence. People presented with the entire statement — or similar statements with the same three-sentence structure but on different topics — rated the outcome in the final question as less likely than people who read the statement without the middle sentence. They did so even though other people who saw the middle sentence in isolation rated it as positive evidence for, in this case, higher adoption of hybrid and electric cars.
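The normative principle being violated here is just Bayes' rule: any evidence that is even slightly more likely in worlds where the outcome occurs should raise its probability, however modestly. A toy update illustrates this (the prior and likelihoods are invented for illustration, not taken from the study):

```python
def posterior(prior, likelihood_if_true, likelihood_if_false):
    """P(outcome | evidence) via Bayes' rule for a binary outcome."""
    joint_true = prior * likelihood_if_true
    joint_false = (1 - prior) * likelihood_if_false
    return joint_true / (joint_true + joint_false)

# Suppose we start at 30% that a fifth of the fleet is hybrid/electric by 2025.
prior = 0.30

# A $250 tax credit is weak evidence: only slightly more expected in
# worlds where adoption ends up high than in worlds where it stays low.
updated = posterior(prior, likelihood_if_true=0.55, likelihood_if_false=0.50)

# The posterior nudges slightly above the prior — it should never drop.
print(round(updated, 3))
```

Weak evidence should move a rational estimate up by a sliver; the experimental finding is that it instead moves people's estimates down.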
Paper: When good evidence goes bad: The weak evidence effect in judgment and decision-making
Abstract:
An indispensable principle of rational thought is that positive evidence should increase belief. In this paper, we demonstrate that people routinely violate this principle when predicting an outcome from a weak cause. In Experiment 1 participants given weak positive evidence judged outcomes of public policy initiatives to be less likely than participants given no evidence, even though the evidence was separately judged to be supportive. Experiment 2 ruled out a pragmatic explanation of the result, that the weak evidence implies the absence of stronger evidence. In Experiment 3, weak positive evidence made people less likely to gamble on the outcome of the 2010 United States mid-term Congressional election. Experiments 4 and 5 replicated these findings with everyday causal scenarios. We argue that this "weak evidence effect" arises because people focus disproportionately on the mentioned weak cause and fail to think about alternative causes.
Priming with Hypothetical questions
I came across this article this morning via a blog post from http://solutionfocusedchange.blogspot.com/.
http://sd1.myipcn.org/science/article/pii/S0749597811001099
"Wolves in sheep’s clothing: How and when hypothetical questions influence behavior" by Sarah G. Moore and others. Full article unfortunately unavailable for free.
"We examine how and when hypothetical questions influence judgment and behavior.
Hypotheticals increase the accessibility of the positive or negative information in the question.
Thus, hypotheticals influence behavior according to the valence of the question.
Hypotheticals exert a stronger influence when they are consistent with existing knowledge.
Hypotheticals exert a weaker influence when individuals are aware of their impact."
I think this is a deliberate and obvious application of psychological priming, in which exposure to positively or negatively toned words biases how we interpret events.
Hypotheticals frame the context of the discussion, and require you to use the hard path of cognition to think in a different way. They are a source of error in social science surveys, and are often used by marketers and political pollsters to lead our response.
I'd like to read the full paper to find out what sort of experimental method they used.
Table of cognitive tasks that do and do not show correlations with cognitive ability
Here. From this 2010 book chapter by Stanovich, Toplak, and West. (Here is the book.)
See also Baron's table of cognitive biases, the normative models they violate, and their explanations.
How to be Deader than Dead
For your consideration, a psychology study as summarized by The Economist in "How dead is dead? Sometimes, those who have died seem more alive than those who have not":
"They first asked 201 people stopped in public in New York and New England to answer questions after reading one of three short stories. In all three, a man called David was involved in a car accident and suffered serious injuries. In one, he recovered fully. In another, he died. In the third, his entire brain was destroyed except for one part that kept him breathing. Although he was technically alive, he would never again wake up.
...each participant was asked to rate David’s mental capacities, including whether he could influence the outcome of events, know right from wrong, remember incidents from his life, be aware of his environment, possess a personality and have emotions. Participants used a seven-point scale to make these ratings, where 3 indicated that they strongly agreed that he could do such things...and -3 indicated that they strongly disagreed.
...the fully recovered David rated an average of +1.77 and the dead David -0.29. That score for the dead David was surprising enough, suggesting as it did a considerable amount of mental acuity in the dead. What was extraordinary, though, was the result for the vegetative David: -1.73. In the view of the average New Yorker or New Englander, the vegetative David was more dead [-1.73] than the version who was dead [-0.29].
...they ran a follow-up experiment which had two different descriptions of the dead David. One said he had simply passed away. The other directed the participant’s attention to the corpse. It read, “After being embalmed at the morgue, he was buried in the local cemetery. David now lies in a coffin underground.”...In this follow-up study participants were also asked to rate how religious they were.
Once again, the vegetative David was seen to have less mind than the David who had “passed away”. This was equally true, regardless of how religious a participant said he was. However, ratings of the dead David’s mind in the story in which his corpse was embalmed and buried varied with the participant’s religiosity. Irreligious participants gave the buried corpse about the same mental ratings as the vegetative patient (-1.51 and -1.64 respectively). Religious participants, however, continued to ascribe less mind to the irretrievably unconscious David than they did to his buried corpse (-1.57 and 0.59).
That those who believe in an afterlife ascribe mental acuity to the dead is hardly surprising. That those who do not are inclined to do so unless heavily prompted not to is curious indeed."
The study is "More dead than dead: Perceptions of persons in the persistent vegetative state":
Patients in persistent vegetative state (PVS) may be biologically alive, but these experiments indicate that people see PVS as a state curiously more dead than dead. Experiment 1 found that PVS patients were perceived to have less mental capacity than the dead. Experiment 2 explained this effect as an outgrowth of afterlife beliefs, and the tendency to focus on the bodies of PVS patients at the expense of their minds. Experiment 3 found that PVS is also perceived as “worse” than death: people deem early death better than being in PVS. These studies suggest that people perceive the minds of PVS patients as less valuable than those of the dead – ironically, this effect is especially robust for those high in religiosity.
Ed Yong points to another interesting study, the 2004 "The natural emergence of reasoning about the afterlife as a developmental regularity":
Participants were interviewed about the biological and psychological functioning of a dead agent. In Experiment 1, even 4- to 6-year-olds stated that biological processes ceased at death, although this trend was more apparent among 6- to 8-year-olds. In Experiment 2, 4- to 12-year-olds were asked about psychological functioning. The youngest children were equally likely to state that both cognitive and psychobiological states continued at death, whereas the oldest children were more likely to state that cognitive states continued. In Experiment 3, children and adults were asked about an array of psychological states. With the exception of preschoolers, who did not differentiate most of the psychological states, older children and adults were likely to attribute epistemic, emotional, and desire states to dead agents. These findings suggest that developmental mechanisms underlie intuitive accounts of dead agents' minds.
Jach on Hacker News makes the obvious connection with cryonics; see also lukeprog's "Remind Physicalists They're Physicalists".
Table of biases, the normative models they violate, and their explanations
The title says it all: PDF. From Baron's Thinking and Deciding, 4th edition.
A study in Science on memory conformity
I believe this may be a good addition to the cognitive bias literature:
Following the Crowd: Brain Substrates of Long-Term Memory Conformity
- Micah Edelson, Tali Sharot, Raymond J. Dolan, Yadin Dudai
Department of Neurobiology, Weizmann Institute of Science, Israel; Wellcome Trust Centre for Neuroimaging, Institute of Neurology, University College London, London, UK.
ABSTRACT
Human memory is strikingly susceptible to social influences, yet we know little about the underlying mechanisms. We examined how socially induced memory errors are generated in the brain by studying the memory of individuals exposed to recollections of others. Participants exhibited a strong tendency to conform to erroneous recollections of the group, producing both long-lasting and temporary errors, even when their initial memory was strong and accurate. Functional brain imaging revealed that social influence modified the neuronal representation of memory. Specifically, a particular brain signature of enhanced amygdala activity and enhanced amygdala-hippocampus connectivity predicted long-lasting but not temporary memory alterations. Our findings reveal how social manipulation can alter memory and extend the known functions of the amygdala to encompass socially mediated memory distortions.
Biases to watch out for while job hunting?
I'm in the process of searching for a new job. I'm currently employed, but I'm dissatisfied with my salary and career growth options. I've done a couple of phone interviews and one face-to-face interview already, with several others lined up next week. The face-to-face interview went well, and I'm anticipating an offer from them next week. However, while considering how I would evaluate that offer, I caught myself awarding them points in reciprocation for their implicit praise in singling me out as a worthy candidate. Now I'm wondering what other biases I might be falling prey to in this process. Thoughts?
Examine success as much as failure
Harvard Business Review has posted something right up our alley: "Why Leaders Don't Learn From Success"
Also, the HBR essay links to a similar discussion of how Pixar avoids being brainwashed by its own success (something I had always wondered about - they seem too consistently successful): "How Pixar Fosters Collective Creativity".
For more material, here's a list of all posts at youarenotsosmart.com