
AstraSequi comments on [LINK] Common fallacies in probability (when numbers aren't used) - Less Wrong Discussion

7 Post author: Stuart_Armstrong 15 January 2016 08:29AM



Comment author: AstraSequi 19 January 2016 12:30:46PM 1 point

Another way to generalize 4 is:

Always correct your probability estimates for the possibility that you've made an incorrect assumption.

I don't think "changes the issue" is the best way to put it, because there is always some probability that your model will fail, even when it doesn't declare anything impossible.

I don't know about this being a category error though. I think "map 1 is accurate with respect to X" is a valid proposition.

Comment author: buybuydandavis 20 January 2016 12:26:24AM *  1 point

Always correct your probability estimates for the possibility that you've made an incorrect assumption.

I think that's good too - Jaynes advocated adding a "something else that I didn't think of" hypothesis to your hypothesis space, to avoid accepting a hypothesis strongly when all you've done is eliminate the alternatives you'd considered.
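The effect of that catch-all can be sketched in a few lines. This is a hypothetical illustration, not anything from Jaynes or the thread: the hypothesis names, priors, and likelihoods are all made up, and the catch-all's likelihood is necessarily a rough guess since we can't compute it for a hypothesis we haven't specified.

```python
# Hypothetical sketch: Bayesian update with a "something else that I
# didn't think of" catch-all hypothesis. All numbers are illustrative.

def posterior(priors, likelihoods):
    """Normalize prior * likelihood over all hypotheses."""
    joint = {h: priors[h] * likelihoods[h] for h in priors}
    total = sum(joint.values())
    return {h: p / total for h, p in joint.items()}

# Two named hypotheses plus a catch-all with a small prior.
priors = {"H1": 0.475, "H2": 0.475, "catch_all": 0.05}

# Evidence that rules out H2 but is only weakly explained by H1.
# The catch-all's likelihood is a broad, noncommittal guess.
likelihoods = {"H1": 0.2, "H2": 0.0, "catch_all": 0.2}

post = posterior(priors, likelihoods)

# H1 does not shoot to probability 1 just because H2 was eliminated:
# the catch-all soaks up part of the posterior.
print(round(post["H1"], 3))         # 0.905
print(round(post["catch_all"], 3))  # 0.095
```

Without the catch-all, eliminating H2 would have pushed H1 to certainty; with it, a residual ~10% stays on "something I didn't think of".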

I don't know about this being a category error though. I think "map 1 is accurate with respect to X" is a valid proposition.

"Is accurate" isn't much of a proposition in itself, as it leaves out the level of accuracy.

We speak of the probability of a proposition: propositions are true or false. We speak of the level of accuracy of a model: models are more or less accurate.

Comment author: AstraSequi 20 January 2016 02:31:35AM 0 points

Maybe "Is accurate enough that it doesn't change our answer by an unacceptable amount"? The level of accuracy we want depends on context.

How would you measure the accuracy of a model, other than by its probability of giving correct answers? "Accurate" depends on what margin of error you accept, or you can define it with penalties that increase as the model diverges from reality.
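The two notions of accuracy in that last sentence can be made concrete. This is a minimal hypothetical sketch, with made-up numbers: a binary margin-of-error test versus a graded penalty (here squared error) that grows with divergence.

```python
# Hypothetical sketch of two ways to score a model's accuracy.

def within_margin(prediction, truth, margin):
    """Binary accuracy: correct iff within an accepted margin of error."""
    return abs(prediction - truth) <= margin

def squared_error(prediction, truth):
    """Graded accuracy: penalty increases with divergence from reality."""
    return (prediction - truth) ** 2

truth = 10.0
print(within_margin(10.4, truth, margin=0.5))   # True
print(within_margin(11.0, truth, margin=0.5))   # False
print(round(squared_error(10.4, truth), 2))     # 0.16 (small penalty)
print(squared_error(11.0, truth))               # 1.0 (larger penalty)
```

Under the binary scheme, 10.4 and 10.49 count the same and 10.51 suddenly counts as wrong; the graded scheme avoids that cliff but requires choosing a penalty function, which is the context-dependence the comment points at.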