alexflint comments on "Outside View!" as Conversation-Halter - Less Wrong

49 Post author: Eliezer_Yudkowsky 24 February 2010 05:53AM


Comment author: alexflint 25 February 2010 09:55:50AM 13 points

"inside view" and "outside view" seem misleading labels for things that are actually "bayesian reasoning" and "bayesian reasoning deliberately ignoring some evidence to account for flawed cognitive machinery". The only reason for applying the "outside view" is to compensate for our flawed machinery, so to attack an "inside view", one needs to actually give a reasonable argument that the inside view has fallen prey to bias. This argument should come first, it should not be assumed.

Comment author: RobinHanson 25 February 2010 03:12:19PM 1 point

Obviously the distinction depends on being able to distinguish inside from outside considerations in any particular context. But given such a distinction there is no asymmetry: neither view is a full view; each focuses on its respective considerations.

Comment author: alexflint 25 February 2010 11:09:48PM 2 points

Well, an ideal Bayesian would unashamedly use all available evidence. It's only our flawed cognitive machinery that suggests ignoring some evidence might sometimes be beneficial. But the burden of proof should be on the one who claims that a particular situation warrants throwing away some evidence, rather than on the one who reasons earnestly from all of it.

Comment author: RobinHanson 26 February 2010 07:22:17PM 3 points

If we are going to have any heuristics that say some kinds of evidence tend to be overused or underused, we have to be able to talk about sets of evidence smaller than the total set. The whole point here is to warn people that the evidence suggests we tend to over-rely on inside evidence relative to outside evidence.

Comment author: alexflint 27 February 2010 10:31:58AM 2 points

Agreed. My objection is to cases where inside view arguments are discounted completely on the basis of experiments that have shown optimism bias among humans, but where it isn't clear that optimism bias actually applies to the subject matter at hand. So my disagreement is about degrees rather than absolutes: How widely can the empirical support for optimism bias be generalized? How much should inside view arguments be discounted? My answers would be, roughly, "not very widely" and "not much outside traditional forecasting situations". I think these are tangible (even empirical) questions and I will try to write a top-level post on this topic.

Comment author: wedrifid 25 February 2010 11:31:13PM 3 points

I don't think ideal Bayesians use burden of proof either. Who has the burden of proof in demonstrating that a burden of proof is required in a particular instance?

Comment author: alexflint 26 February 2010 08:35:19AM 2 points

Occam's razor: the more complicated hypothesis acquires a burden of proof.

Comment author: Eliezer_Yudkowsky 26 February 2010 10:35:00PM 1 point

In which case there's some specific amount of distinguishing evidence that promotes the hypothesis over the less complicated one, in which case, I suppose, the other would acquire this "burden of proof" of which you speak?
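One way to read this exchange concretely (a sketch with invented numbers, not anything either commenter wrote): if Occam's razor is cashed out as a prior penalty of so many bits for a hypothesis's extra complexity, then the "burden of proof" is exactly that many bits of distinguishing evidence needed to bring the more complicated hypothesis back to even odds.

```python
import math

# Toy illustration: a complexity penalty in bits translates directly into
# the amount of distinguishing evidence (as a likelihood ratio, in bits)
# required to promote the more complicated hypothesis to even odds.

def evidence_needed(extra_bits):
    """Bits of distinguishing evidence needed to overcome an Occam
    penalty of `extra_bits` and reach even odds."""
    prior_odds = 2.0 ** -extra_bits      # prior odds against the more
                                         # complicated hypothesis
    return math.log2(1.0 / prior_odds)   # likelihood ratio, in bits

print(evidence_needed(10))  # a 10-bit-more-complicated hypothesis needs
                            # 10 bits of evidence just to break even
```

The function is trivially the identity, which is the point: "burden of proof" here is not a special epistemic status but a definite quantity of evidence, matching the reply above.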

Comment author: alexflint 27 February 2010 10:35:53AM 2 points

Not sure that I understand (I'm not being insolent, I just haven't had my coffee this morning). Claiming that "humans are likely to over-estimate the chance of a hard-takeoff singularity in the next 50 years and should therefore discount inside view arguments on this topic" requires evidence, and I'm not convinced that the standard optimism bias literature applies here. In the absence of such evidence one should accept all arguments on their merits and just do Bayesian updating.

Comment author: jimmy 25 February 2010 09:30:18PM 1 point

What would you call the classic drug testing example where you use the outside view as a prior and update based on the test results?

If the test is sufficiently powerful, it seems like you'd call it using the "inside view" for sure, even though it really uses both, and is a full view.

I think the issue is not that one ignores the outside view when using the inside view; it's that in many cases the outside view only makes very weak predictions that are easily dwarfed by the amount of information one has at hand for using the inside view.

In these cases, it only makes sense to believe something close to the outside view if you don't trust your ability to use more information without shooting yourself in the foot, which is alexflint's point.
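The drug-testing example above can be sketched numerically (all figures invented for illustration): the population base rate plays the role of the outside-view prior, and the test result is the inside-view evidence. With a weak test the posterior stays near the prior; with a sufficiently powerful test the posterior is dominated by the test result, as jimmy describes.

```python
# Hypothetical numbers, chosen only to illustrate the structure of the
# classic drug-testing example discussed above.

def posterior(prior, sensitivity, false_positive_rate):
    """P(user | positive test) via Bayes' rule."""
    p_positive = sensitivity * prior + false_positive_rate * (1 - prior)
    return sensitivity * prior / p_positive

base_rate = 0.01  # "outside view" prior: 1% of the population uses the drug

weak_test = posterior(base_rate, sensitivity=0.70, false_positive_rate=0.30)
strong_test = posterior(base_rate, sensitivity=0.999, false_positive_rate=0.0001)

print(f"weak test:   {weak_test:.3f}")    # ~0.023: stays near the prior
print(f"strong test: {strong_test:.3f}")  # ~0.990: dominated by the test
```

Both computations use the full view (prior and likelihood); the strong test simply carries so much more information that the result looks like "the inside view".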

Comment author: RobinHanson 26 February 2010 07:24:20PM 0 points

I really can't see why a prior would correspond more to an outside view. The issue is not when the evidence arrived; it is whether the evidence is based on a track record or on reasoning about process details.

Comment author: jimmy 27 February 2010 08:08:55PM 3 points

Well, you can switch around the order in which you update anyway, so that's not really the important part.

My point was that in most cases, the outside view gives a much weaker prediction than the inside view taken at face value. In these cases using both views is pretty much the same as using the inside view by itself, so advocating "use the outside view!" would be better translated as "don't trust yourself to use the inside view!"

Comment author: RobinHanson 02 March 2010 02:23:52AM 0 points

I can't imagine what evidence you think there is for your claim "in most cases, the outside view gives a much weaker prediction."

Comment author: Eliezer_Yudkowsky 02 March 2010 02:28:13AM 3 points

Weakness as in the force of the claim, not how well-supported the claim may be.

Comment author: JGWeissman 02 March 2010 02:56:25AM 1 point

This confuses me. What force of a claim should I feel, that does not come from it being well-supported?

Comment author: Eliezer_Yudkowsky 02 March 2010 03:32:43AM 6 points

Okay, rephrase: Suppose I pull a crazy idea out of my hat and scream "I am 100% confident that every human being on earth will grow a tail in the next five minutes!" Then I am making a very forceful claim, which is not well-supported by the evidence.

The idea is that the outside view generally makes less forceful claims than the inside view - allowing for a wider range of possible outcomes, not being very detailed or precise or claiming a great deal of confidence. If we were to take both outside view and inside view perfectly at face value, giving them equal credence, the sum of the outside view and the inside view would be mostly the inside view. So saying that the sum of the outside view and the inside view equals mostly the outside view must imply that we think the inside view is not to be trusted in the strength it says its claims should have, which is indeed the argument being made.
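A toy sketch of this point (all numbers invented, and not from the discussion itself): if we take each view's claim at face value as a Gaussian estimate and combine them weighted by their claimed precisions, the vague outside view barely moves the sharp inside view.

```python
# Precision-weighted combination of two Gaussian estimates: the standard
# result that the combined mean weights each estimate by 1/variance.

def combine(mean_a, var_a, mean_b, var_b):
    """Combine two Gaussian estimates, weighting each by its precision."""
    prec_a, prec_b = 1.0 / var_a, 1.0 / var_b
    mean = (mean_a * prec_a + mean_b * prec_b) / (prec_a + prec_b)
    var = 1.0 / (prec_a + prec_b)
    return mean, var

# Outside view: "projects like this take about 12 months, give or take a lot."
outside = (12.0, 36.0)  # wide: standard deviation of 6 months
# Inside view: "our detailed plan says 6 months, give or take a little."
inside = (6.0, 1.0)     # narrow: standard deviation of 1 month

mean, var = combine(*outside, *inside)
print(f"combined estimate: {mean:.2f} months")  # ~6.16: mostly the inside view
```

Taken at face value, the sum is mostly the inside view; so advising people to land near the outside view instead amounts to saying the inside view's claimed precision should not be trusted, which is the argument above.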

Comment author: JGWeissman 02 March 2010 03:43:24AM 1 point

Thank you, I understand that much better.