The question of whether trying to consistently adopt meta-reasoning position A will raise the percentage of time you're correct, compared with meta-reasoning position B, is often a difficult one.
When someone uses a disliked heuristic to produce a wrong result, the temptation is to pronounce the heuristic "toxic". When someone uses a favored heuristic to produce a wrong result, the temptation is to shrug and say "there is no safe harbor for a rationalist" or "such a person is biased, stupid, and beyond help; he would have gotten to the wrong conclusion anyway, no matter what his meta-reasoning position was. The idiot reasoner, rather than my beautiful heuristic, has to be discarded." In the absence of hard data, consensus seems difficult; the problem is exacerbated when a novel meta-reasoning argument is brought up in the middle of a debate on a separate disagreement, in which case the opposing sides have even more temptation to "dig in" to separate meta-reasoning positions.
Phaecrinon: But even an Inside View of writing a textbook would tell you that the project was unlikely to destroy the Earth.
Eric Drexler might have something to say about that, along with one or two twentieth-century physicists.
Good post nonetheless :)
The implied disagreement here, between the "inside view" of "outside views" (i.e., that they apply only within a limited domain) and the "outside view" of "outside views" (i.e., that they apply in general), is the same as Eliezer's disagreement with Robin about the meaning of Aumann.
If Robin is right, then Eliezer is against overcoming bias in principle, since this would be taking an outside view (according to Robin's understanding). Of course, if Eliezer is right, it just means that Robin is biased against inside views. Each of these consequences is very strange; if Robin is right, Eliezer is in favor of bias despite posting on a blog on overcoming bias, while if Eliezer is right, Robin is biased against his own positions, among other things.
Unknown: If Robin is right, then Eliezer is against overcoming bias in principle, since this would be taking an outside view (according to Robin's understanding).
I thought that overcoming bias was about reaching true beliefs, not adhering to some particular ritual like using outside views.
Bayes' Theorem says that P(H|DI) is proportional to P(H|I) * P(D|HI), where D is the data, H is the hypothesis, and I is our background information. A common mistaken intuition is that P(H|DI) = P(D|HI). The "outside view" heuristic seems to be that P(H|DI) = P(H|I). This is just plain wrong as an identity: P(H|DI) is fixed, but as Eli states above, by varying your choice of D' and I' such that D'I' = DI, you can make the heuristic's output P(H|I') come out to all sorts of things. Getting good predictions from the outside view requires you to have good intuitions about what to use as D and what to use as I.
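A minimal sketch of that decomposition point, assuming conditionally independent observations and entirely made-up numbers (the hypothesis H, observations e1 and e2, and all likelihoods below are hypothetical, not from the thread): the full posterior P(H|DI) is invariant to how the evidence is split into D' and I', but the outside-view shortcut P(H|I') is not.

```python
# Hypothetical example: full posterior vs. outside-view shortcut.
prior_H = 0.5  # P(H) before any evidence

# Likelihoods of each observation under H and not-H (made up for illustration).
LIKELIHOOD = {
    "H":    {"e1": 0.9, "e2": 0.8},
    "notH": {"e1": 0.2, "e2": 0.6},
}

def posterior(evidence):
    """P(H | evidence), assuming the observations are conditionally independent."""
    like_h = like_n = 1.0
    for e in evidence:
        like_h *= LIKELIHOOD["H"][e]
        like_n *= LIKELIHOOD["notH"][e]
    numerator = prior_H * like_h
    return numerator / (numerator + (1 - prior_H) * like_n)

full    = posterior({"e1", "e2"})  # P(H|DI): unique, however DI is factored
split_a = posterior({"e1"})        # shortcut with I' = {e1}, ignoring D' = {e2}
split_b = posterior({"e2"})        # shortcut with I' = {e2}, ignoring D' = {e1}

print(full, split_a, split_b)  # ~0.857 vs. ~0.818 vs. ~0.571
```

Both shortcuts condition on something true, yet they disagree with each other and with the full posterior, which is the sense in which the split of DI into D' and I' matters.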
Having useful approximations is great. Turning a useful approximation into your new definition of rationality is a bad, bad idea. You say that Robin thinks that overcoming bias means taking outside views. I think Robin knows better than that.
Peter de Blanc: see http://www.overcomingbias.com/2007/07/beware-the-insi.html, posted by Robin Hanson. In particular: "Most, perhaps all, ways to overcome bias seem like this. In the language of Kahneman and Lovallo's classic '93 paper, we allow an outside view to overrule an inside view... If overcoming bias comes down to having an outside view overrule an inside view, then our questions become: what are valid outside views, and what will motivate us to apply them?"
What do you think this means, if not that overcoming bias means taking outside views?
There are a variety of things regarding AI on which we can take the inside view or the outside view.
The second question seems appropriate for the outside view, the first less so in my view. The outside answer to the second question seems to be very little.
Inside and outside views are different levels of description of the problem, and reasoning always happens on many levels, with analogies drawn between levels and with feedback between levels. The failure of rationality occurs with a particular use of representation, where your inbuilt overconfidence pulls in the wrong direction. By shifting to a different level of description that doesn't include the faulty parts, you restore the rationality of the conclusion. In other cases you don't need to do that, and success depends primarily on having sufficient information about the domain and correct estimates of the similarities between the parts of the problems, on all considered levels of description.
Are there no social scientists left reading this blog, to comment on the implicit accusation that analyses of social transitions are just "surface" analogies, no more trustworthy than Plato's analogy of death to sleep?
If we are trading in analogies, then I'd liken Robin's argument to noticing that the shoe size of the last four Miss World winners decreased each year, and then predicting the shoe size of next year's winner on that basis. Also, it turns out that his measure of shoe size is based on measuring the shoe sizes of other members of the participant's country, and that he skipped over the 2006 competition for some reason.
Are the judges really preferring smaller feet? Maybe, but the evidence seems rather tenuous.
Robin, questioning the analogy from biology to postbiology, and questioning the extension of a trend in interest rates over the leap to transistor-based thought, is not the same as dismissing the whole of economics!
Eliezer, you describe the economics concepts I use to compare past and future as "surface analogies" and not "deep causes" and give that as your reason for not thinking much of the analysis. That is surely some flavor of dismissal of those concepts.
In accordance with the conjunction rule of probability theory, a deep principle times a surface analogy equals a surface analogy. The principles you've described are deep for analyzing human economies, it's the analogy over to the posthuman side that I have trouble with.
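A hedged formalization of that conjunction point (my own gloss, not from the post): if the posthuman conclusion requires both the deep principle $P$ and the cross-domain bridge $B$ to hold, then

$$\Pr(P \wedge B) = \Pr(P)\,\Pr(B \mid P) \le \min\bigl(\Pr(P),\ \Pr(B \mid P)\bigr),$$

so however solid the principle, the joint claim can be no more probable than the analogical bridge carrying it over.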
I'll see if I can write a post addressing this specific topic later today, though it's going to be way out of order in the sequence.
May I suggest that Plato's words carry some different and non-obvious sensibility, one that has little to do with outside vs. inside, if we take the original text and its circumstances into account? In that age, people had fewer reasons to believe in the physicality of the individual: they saw dead people remain dead, but that was pretty much all of it. And they had more motive to believe in the soul, because there was no scientific transhumanism, and religion was their only hope of personal immortality. So the introspecting self may feel that just as it has slept and awakened, remaining itself, so it can survive an absence of consciousness, and death must be temporary. Not because the two superficially looked and sounded alike, but because of the common factor of a lack of consciousness, which made sleep and death seem much less distinguishable then than they do now in light of neuroscience. It's not outside vs. inside; it might as well have been Phaecrinon thinking "the two pairs are structurally different" and Plato thinking "they are equivalent and symmetrical".
Plato has dismissed his share of straw-man opponents, and I have no problem with adapting his words, but I feel confused by this post's focus on the principles of thinking when this more obvious reaction to the analogy comes to mind. How about choosing a purer example next time?
Followup to: The Planning Fallacy
Plato's Phaedo:
Now suppose that the foil in the dialogue had objected a bit more strongly, and also that Plato himself had known about the standard research on the Inside View vs. Outside View...
(As I disapprove of Plato's use of Socrates as his character mouthpiece, I shall let one of the characters be Plato; and the other... let's call him "Phaecrinon".)