[Link] Eleven dogmas of analytic philosophy
Closely related to some of Luke's recent discussions about philosophy, the philosopher Paul Thagard has called for changes to the way we do philosophy:
I prefer an alternative approach to philosophy that is much more closely tied to scientific investigations. This approach is sometimes called “naturalistic philosophy” or “philosophy naturalized”, but I like the more concise term natural philosophy. Before the words “science” and “scientist” became common in the nineteenth century, researchers such as Newton described what they did as natural philosophy. I propose to revive this term to cover a method that ties epistemology and ethics closely to the cognitive sciences, and ties metaphysics closely to physics and other sciences.
In the same article, Thagard also lists eleven areas where modern philosophy goes awry. For example:
3. People’s intuitions are evidence for philosophical conclusions. Natural alternative: evaluate intuitions critically to determine their psychological causes, which are often more tied to prejudices and errors than truth. Don't trust your intuitions.
Source: Paul Thagard
Help me make SI's arguments clear
One of the biggest problems with evaluating the plausibility of SI's arguments is that the arguments involve a large number of premises (as any complex argument will) and these premises are often either not written down or are scattered across disparate locations, making it very hard to piece the overall case together. SI is aware of this and one of their major aims is to state their argument very clearly. I'm hoping to help with this aim.
My specific plan is as follows: I want to map out the broad structure of SI's arguments in "standard form" - that is, as a list of premises that support a conclusion. I then want to write this up into a more readable summary and discussion of SI's views.
The first step to achieving this is making sure that I understand what SI is arguing. Obviously, SI is arguing for a number of different things, but I take their principal argument to be the following:
P1. Superintelligent AI (SAI) is highly likely to be developed in the near future (say, within the next 100 years and probably sooner).
P2. Without explicit Friendly AI (FAI) research, SAI is likely to pose a global catastrophic risk to humanity.
P3. FAI research has a reasonable chance of ensuring that SAI does not pose a global catastrophic risk to humanity.
Therefore
C1. FAI research has a high expected value for humanity.
P4. We currently fund FAI research at a level below that supported by its expected value.
Therefore
C2. Humanity should expend more effort on FAI research.
Note that P1 in this argument can be weakened to say only that SAI is a non-trivial possibility but, in response, stronger versions of P2 and P3 are required if the conclusion is still to be viable (that is, if SAI is less likely, it needs to be more dangerous or FAI research needs to be more effective in order for FAI research to have the same expected value). However, if P2 and P3 already seem strong to you, then the argument can be made more forceful by weakening P1. One further note: doing so might also make the move from C1 and P4 to C2 more open to criticism - that is, some people think that we shouldn't make decisions based on expected value calculations when we are talking about low probability/high value events.
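To make the trade-off in the note above concrete, here is a minimal sketch of a toy expected-value calculation. All of the probabilities, values and the function itself are my own illustrative inventions, not SI's estimates; the point is only that lowering the probability attached to P1 must be offset by stronger versions of P2 and/or P3 if the expected value in C1 is to stay the same.

```python
# A toy expected-value model of the argument above. All probabilities, values
# and the function itself are hypothetical illustrations, not SI's estimates.

def ev_of_fai_research(p_sai, p_catastrophe_without_fai, p_research_succeeds,
                       value_of_avoiding_catastrophe, cost_of_research):
    """Expected value of funding FAI research, assuming the benefit is realised
    only if SAI arrives (P1), it would otherwise cause a catastrophe (P2), and
    the research prevents that outcome (P3)."""
    expected_benefit = (p_sai * p_catastrophe_without_fai *
                        p_research_succeeds * value_of_avoiding_catastrophe)
    return expected_benefit - cost_of_research

# Strong P1: SAI is highly likely.
print(ev_of_fai_research(0.5, 0.3, 0.2, 1_000_000, 1_000))    # about 29,000

# Weakened P1 on its own: the expected value drops sharply.
print(ev_of_fai_research(0.05, 0.3, 0.2, 1_000_000, 1_000))   # about 2,000

# Weakened P1 offset by stronger P2 and P3: the expected value is restored.
print(ev_of_fai_research(0.05, 0.75, 0.8, 1_000_000, 1_000))  # about 29,000
```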
So I'm asking for a few things from anyone willing to comment:
1.) A sense of whether this is a useful project (I'm very busy and would like to know whether this is a suitable use of my scarce spare time) - I will take upvotes/downvotes as representing votes for or against the idea (so feel free to downvote me if you think this idea isn't worth pursuing even if you wouldn't normally downvote this post).
2.) A sense of whether I have the broad structure of SI's basic argument right.
In terms of my commitment to this project: as I said before, I'm very busy so I don't promise to finish this project. However, I will commit to notifying Less Wrong if I give up on it and to engaging in handover discussions with anyone who wants to take the project over.
Practical debiasing
Some of this post is an expansion of topics covered by Lukeprog here.
1. Knowing about biases (doesn't stop you being biased)
Imagine you had to teach a course that would help people to become less biased. What would you teach? A natural idea, tempting enough in theory, might be that you should teach the students about all of the biases that influence their decision making. Once someone knows that they suffer from overconfidence in their ability to predict future events, surely they will adjust their confidence accordingly.
Readers of Less Wrong will be aware that it's more complicated than that.
There is a mass of research showing that knowing about cognitive biases does not stop someone from being biased. Quattrone et al. (1981) showed that anchoring effects are not decreased by instructing subjects to avoid the bias. Similarly, Pohl and Hell (1996) demonstrated that the same applies to the hindsight bias. Finally, Arzy et al. (2009) showed that including a misleading detail in a description of a medical case significantly decreased diagnostic accuracy, and that accuracy did not improve when doctors were warned that such misleading information might be present.
2. Consider the opposite (but not too much)
So what does lead to debiasing? As Lukeprog mentioned, one well-supported tactic is "consider the opposite", which involves simply considering some reasons that an initial judgement might be incorrect. This has been shown to help counter overconfidence and hindsight bias, as well as anchoring. See, for example, Arkes (1991) or Mussweiler et al. (2000) for studies along this line.
There are two more things worth noting about this tactic. The first is that Soll and Klayman (2004) have demonstrated that a related tactic has positive results in relation to overconfidence. In their experiment, Soll and Klayman asked subjects to give an interval such that they were 80% sure that the answer to a question lay within it. So they asked for predictions of things like the birth year of Oliver Cromwell, and the subjects would need to provide an early year and a late year such that they were 80% sure that Cromwell was born somewhere between those two years. These subjects exhibited substantial overconfidence - they were right far less than 80% of the time.
However, another group of subjects were asked two questions. For the first, they were asked to pick a year such that they were 90% sure Cromwell wasn't born before that year. For the second, they were asked to pick a year such that they were 90% sure that Cromwell wasn't born after that year. Subjects still displayed overconfidence in response to these questions but to a far lesser extent. But the two formats are equivalent (eta: though see this comment)! Being forced to consider arguments for both ends of the interval seemed to lead to more accurate predictions. Further studies have attempted to improve on this result through more sophisticated tactics along the same lines (see, for example, Speirs-Bridge et al., 2009).
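For anyone who wants to try this on themselves, here is a minimal sketch of how you might score your own calibration on 80% intervals, in the spirit of the Soll and Klayman task. The estimates below are made-up illustrations, not their experimental data.

```python
# A minimal sketch of scoring your own calibration on 80% interval estimates.
# The estimates below are made-up illustrations, not Soll and Klayman's data.

def interval_hit_rate(estimates):
    """estimates: list of (low, high, truth) triples. Returns the fraction
    of true answers that fell inside the stated interval."""
    hits = sum(1 for low, high, truth in estimates if low <= truth <= high)
    return hits / len(estimates)

# Hypothetical 80% intervals for general-knowledge questions.
my_estimates = [
    (1550, 1620, 1599),  # Oliver Cromwell's birth year (1599): a hit
    (1200, 1400, 1431),  # an interval that missed
    (30, 60, 42),
    (5, 15, 25),
    (100, 300, 180),
]

print(f"Hit rate: {interval_hit_rate(my_estimates):.0%}")
# Well-calibrated 80% intervals should contain the truth about 80% of the
# time; a much lower hit rate is the overconfidence described above.
```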
The second thing worth noting is that considering too many reasons that an initial judgement might be incorrect is counterproductive (see Roese, 2004 or Sanna et al., 2002). After a certain point, it becomes increasingly difficult for a person to generate reasons they might have been incorrect. This then serves to convince them that their idea must be right - otherwise it would be easier to come up with reasons against it. At this point, the technique ceases to have a debiasing effect. While the exact number of reasons one should consider is likely to differ from case to case, Sanna et al. (2002) found a debiasing effect when subjects were asked to consider 2 reasons against their initial conclusion but not when they were asked to consider 10. Consequently, it seems plausible that the ideal number of arguments to consider is closer to 2 than to 10.
So consider the opposite but not too much.
3. Provide reasons
There is also evidence that providing reasons for your decision or judgement can help to mitigate biases. Arkes et al. (1988) demonstrated that, in relation to hindsight bias, asking for a rationale for a judgement can help debias that judgement.
Similar results have been demonstrated in relation to framing effects. Miller and Fagley (1991) presented participants with a series of scenarios about how to respond to a disease outbreak. One group was presented with a positive frame while the other was presented with a negative frame. This framing influenced the program of response that the participants selected. In other words, those in the negative frame group selected responses with a different frequency to those in the positive frame group despite the scenario being the same. However, if the groups were asked to provide a reason for their decision, then both groups selected responses at about the same frequency (though Sieck and Yates (1997) demonstrated that this approach does not work for all types of framing questions).
So provide reasons for your decisions.
4. Get some training
There is also evidence that some biases can be trained away. Specifically, Larrick et al. (1990) have shown that the sunk cost fallacy can be avoided with training, and Fong et al. (1986) have presented similar research with regard to judgements about sample variability.
Larrick (2004) claims that this training is most effective when an abstract principle is taught along with concrete examples. He also suggested that the training should involve examples showing how the principle works in context. The process of training involves not just learning the rule but also figuring out when to apply it and then (hopefully) coming to apply it automatically.
This seems like the sort of thing that could potentially be run in the discussion section of Less Wrong or at face to face meetups.
5. Reference class forecasting
The final technique I want to discuss is reference class forecasting, which has been discussed by both Robin and Eliezer. On Less Wrong, this topic is often discussed in terms of the inside and the outside view. Reference class forecasting is basically the idea that, in predicting how long a project will take, one should not try to figure out how long each component of the project will take but should instead ask how long it has taken you (or others) to complete similar tasks in the past.
This approach has been shown to be effective in overcoming the planning fallacy. For example, Osberg and Shrauger (1986) demonstrated that those instructed to consider their performance in similar cases in the past were better able to predict their performance in new projects.
So in predicting how long a task will take, use the outside view, not the inside view.
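As a toy illustration of the outside view, here is a minimal sketch, using entirely hypothetical past durations, of forecasting a new task from the distribution of how long similar tasks actually took rather than from a component-by-component estimate.

```python
# A minimal sketch of reference class forecasting (the outside view): forecast
# a new project from how long similar past projects actually took, rather than
# by summing estimates for each component. The durations are hypothetical.

import statistics

past_durations_days = [9, 12, 14, 15, 18, 25, 30]  # similar past projects

median_forecast = statistics.median(past_durations_days)
# A rough "pessimistic" figure: the duration that roughly 80% of past
# projects finished within.
pessimistic_forecast = sorted(past_durations_days)[int(0.8 * len(past_durations_days))]

print(f"Typical (median) forecast: {median_forecast} days")           # 15 days
print(f"Rough 80th-percentile forecast: {pessimistic_forecast} days")  # 25 days
```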
6. Concluding remarks
I'm sure there's nothing here that will surprise most Less Wrong readers but I hope that having it all together in one place is useful. For anyone who's interested, I got a lot of the information for this post from Richard P. Larrick's article, 'Debiasing' in the Blackwell Handbook of Judgment and Decision Making which is a good book all round.
References
Arkes, H.R. 1991, 'Costs and benefits of judgement errors: Implications for debiasing', Psychological Bulletin, vol. 110, no. 3, pp. 486-498.
Arkes, H.R., Faust, D., Guilmette, T.J. & Hart, K. 1988, 'Eliminating the Hindsight Bias', Journal of Applied Psychology, vol. 73, pp. 305-307.
Fong, G.T., Krantz, D.H. & Nisbett, R.E. 1986, 'The effects of statistical training on thinking about everyday problems', Cognitive Psychology, vol. 18, pp. 253-292.
Larrick, R.P. 2004, 'Debiasing', in Blackwell Handbook of Judgment and Decision Making, Blackwell Publishing, Oxford, pp. 316-337.
Miller, P.M. & Fagley, N.S. 1991, 'The Effects of Framing, Problem Variations, and Providing Rationale on Choice', Personality and Social Psychology Bulletin, vol. 17, no. 5, pp. 517-522.
Mussweiler, T., Strack, F. & Pfeiffer, T. 2000, 'Overcoming the Inevitable Anchoring Effect: Considering the Opposite Compensates for Selective Accessibility', Personality and Social Psychology Bulletin, vol. 26, no. 9, pp. 1142-1150.
Osberg, T.M. & Shrauger, J.S. 1986, 'Self-prediction: Exploring the parameters of accuracy', Journal of Personality and Social Psychology, vol. 51, no. 5, pp. 1044-1057.
Pohl, R.F. & Hell, W. 1996, 'No Reduction in Hindsight Bias after Complete Information and Repeated Testing', Organizational Behavior and Human Decision Processes, vol. 67, no. 1, pp. 49-58.
Quattrone, G.A., Lawrence, C.P., Finkel, S.E. & Andrus, D.C. 1981, 'Explorations in anchoring: The effects of prior range, anchor extremity, and suggestive hints', unpublished manuscript, Stanford University.
Roese, N.J. 2004, 'Twisted Pair: Counterfactual Thinking and the Hindsight Bias', in Blackwell Handbook of Judgment and Decision Making, Blackwell Publishing, Oxford, pp. 258-273.
Sanna, L.J., Schwarz, N. & Stocker, S.L. 2002, 'When Debiasing Backfires: Accessible Content and Accessibility Experiences in Debiasing Hindsight', Journal of Experimental Psychology: Learning, Memory, and Cognition, vol. 28, no. 3, pp. 497-502.
Soll, J.B. & Klayman, J. 2004, 'Overconfidence in Interval Estimates', Journal of Experimental Psychology: Learning, Memory, and Cognition, vol. 30, no. 2, pp. 299-314.
Speirs-Bridge, A., Fidler, F., McBride, M., Flander, L., Cumming, G. & Burgman, M. 2009, 'Reducing overconfidence in the interval judgements of experts', Risk Analysis, vol. 30, no. 3, pp. 512-523.
Living bias, not thinking bias
1. Biases, those traits which affect everyone but me
I recently had the opportunity to run an exercise on bias and rationality with a group of (fellow) university students. I wasn't sure it was going to go down well. There's one response that always haunts me when it comes to introducing biases: "That's an interesting description of other people, but it doesn't describe me."
I can't remember the details (and haven't been able to track them down), but I once read about an experiment on some bias - let's say it was hindsight bias. The research team carried out a standard experiment which showed that the participants were biased as expected. Afterwards, they told these participants about hindsight bias. Most of the participants thought this was interesting and probably explained the actions of other people in the experiment, but they didn't think it explained their own actions.
So going into the presentation, this is what I was worried about: People thinking these biases were just abstract and didn't affect them.
But at the end, everyone's comments made it clear that this wasn't the case. They really had realised that these were biases which affected them. The question then is: what led them to reach this conclusion?
2. Living history, living bias
All of the planets (including the Earth) orbit the Sun. Once upon a time, we didn't believe this: we thought that these planets (and the Sun) orbited the Earth.
Imagine that you're alive all that time ago, when the balance of evidence has just swung to favour the theory that the planets orbit the Sun. At the time, however, you steadfastly insist that they orbit the Earth. Why? Because your father told you they did when you were a child and you always believe what your father tells you. Then a friend explains all of the evidence in favour of the theory that the planets orbit the Sun. Eventually, you realise that you were mistaken all along and, at the same time, you realise something else: that it was a mistake not to question a belief just because your father endorsed it.
If you think about history, you learn which beliefs were wrong. If you live history, you learn this and you also learn what it feels like to mistakenly endorse an incorrect belief. Maybe the next time the situation arises, you can avoid making the same error.
In teaching people about biases, I think it's best to help students to live biases and not just think about them. That way, they'll know what it feels like to be biased and they'll know that they are biased.
3. Rationality puzzles
One of the best ways to do this, and the technique I used in my presentation, is to use rationality puzzles. Basically, these are puzzles where the majority of respondents tend to reason in a biased or fallacious way. Run a few of these puzzles and most students will reason incorrectly in at least one of them. This gives them a chance to experience being biased. If lessons focused on an abstract presentation of biases instead, the students would think about the bias but not live it in the same way.
So one example rationality puzzle is the 2, 4, 6 task. When I ran this exercise for my presentation, I broke the group up into pairs and made one member of each pair the questioner and the other the respondent.
The respondent was given a slip of paper with a number rule written on it. This was a rule that a sequence of three numbers could either meet or fail to meet. I won't mention what the rule was yet, to give those who haven't come across the puzzle a chance to think about how they would proceed.
The questioner's job was to guess this rule. They were given one clue: the sequence 2, 4, 6 met the rule. The questioner was then allowed to ask whether other three-number sequences met the rule, and the respondent would let them know whether each one did. The questioner could ask about as many sequences as they wanted and, when they were confident, they were to write their guess down (I limited the exercise to five minutes for practical purposes and everyone had written down an answer by then).
The answer was: Any three numbers in ascending order. No students in the group got the right answer.
I then used the exercise to explain a bias called positive bias. First, I noted that only 21% of respondents reach the right answer to this task. Then I pointed out that the interesting point isn't this figure but rather why so few people reach the right answer. Specifically, people think to test positive, rather than negative, cases. In other words, they're more likely to test cases that their theory predicts will meet the rule (in this case, those that get a "yes" answer) than cases that their theory predicts won't. So if someone's initial theory was that the rule was "three numbers, each two higher than the previous one", then they might test "10, 12, 14", as this is a positive case for their theory. On the other hand, they probably wouldn't test "10, 14, 12" or "10, 13, 14", as these are negative cases for their prediction of the rule.
This demonstrates positive bias - the bias toward thinking to test positive, rather than negative, cases for one's theory (see here for previous discussion of the 2, 4, 6 task on Less Wrong).
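For readers who like to tinker, here is a minimal sketch of the task in code. The hidden rule and the example theory follow the description above; the particular test sequences are my own illustrations. Note that it is the negative tests that can expose the gap between the theory and the true rule.

```python
# A small sketch of the 2, 4, 6 task. The hidden rule ("any three numbers in
# ascending order") and the example theory ("each number two higher than the
# last") follow the post; the test sequences are illustrative.

def hidden_rule(seq):
    """The actual rule: any three numbers in ascending order."""
    a, b, c = seq
    return a < b < c

def my_theory(seq):
    """A typical initial theory: each number is two higher than the last."""
    a, b, c = seq
    return b == a + 2 and c == b + 2

# Positive tests: sequences the theory predicts will meet the rule.
# These all get "yes" answers, so they can never falsify the theory.
for seq in [(2, 4, 6), (10, 12, 14), (100, 102, 104)]:
    print(seq, "theory says:", my_theory(seq), "rule says:", hidden_rule(seq))

# Negative tests: sequences the theory predicts will NOT meet the rule.
# A "yes" answer here, e.g. for (10, 13, 14), reveals the theory is too narrow.
for seq in [(10, 13, 14), (10, 14, 12), (1, 2, 3)]:
    print(seq, "theory says:", my_theory(seq), "rule says:", hidden_rule(seq))
```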
Puzzles like this allow the student to live the bias and not just consider it on an abstract level.
4. Conclusion
In teaching people about biases, we should be trying to make them live biases, rather than just think about them. Rationality puzzles offer one of the best ways to achieve this.
Of course, for any individual puzzle, some people will get the right answer. With the 2, 4, 6 puzzle in particular, a number of us have found that people perform better on this task in casual, rather than formal, settings. The best way to deal with this is to present a series of puzzles that reveal a variety of different biases. Most people will reach the wrong answer in at least one puzzle.
5. More rationality puzzles
Bill the accountant and the conjunction fallacy
World War II and Selection Effects (not quite a puzzle yet, but it feels like it could be made into one)