
Lumifer comments on Open thread, 11-17 August 2014 - Less Wrong Discussion

5 Post author: David_Gerard 11 August 2014 10:12AM




Comment author: Lumifer 13 August 2014 04:36:13PM 4 points

In my scheme, what I'm really discussing is the probability distribution of probability estimates for a given statement.

OK, let's rephrase it in terms of Bayesian hierarchical models. You have a model of event X happening in the future which says the probability of that event is Y%. Y is a parameter of your model. What you are doing is giving a probability distribution for a parameter of your model (in the general case this distribution can be conditional, which makes it a meta-model, hence hierarchical). That's fine, you can do this. In this context the width of the distribution reflects how precise your estimate of the lower-level model parameter is.
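Concretely, the two levels might be sketched like this (the numbers and the choice of a Beta distribution are purely illustrative, not anything from the discussion):

```python
import random
import statistics

# Minimal sketch of the two-level model: the lower-level model says event X
# happens with probability Y, and Y itself is given a distribution -- here a
# hypothetical Beta(8, 12), which has mean 0.4.
random.seed(0)
samples = [random.betavariate(8, 12) for _ in range(100_000)]

mean_y = statistics.mean(samples)   # point estimate of Y, roughly 0.40
sd_y = statistics.stdev(samples)    # width: how precise the estimate of Y is
```

A narrower Beta (say Beta(80, 120)) would encode the same 40% point estimate with much higher confidence in the parameter itself.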

The only thing is that for unique events ("will AGI be developed within 30 years") your hierarchical model is not falsifiable. You will get a single realization (the event will either happen or it will not), but you will never get information on the "true" value of your model parameter Y. You will get a single update of your prior to a posterior and that's it.
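To make the "single update" concrete (again with a hypothetical conjugate Beta prior over Y): the one realization moves the prior to the posterior exactly once, and only slightly:

```python
# Suppose beliefs about Y are encoded as a Beta(8, 12) prior (hypothetical).
# The unique event either happens or it doesn't; say it happens -- a single
# Bernoulli observation equal to 1.
prior_a, prior_b = 8, 12
outcome = 1

# Conjugate update: one Bernoulli observation shifts the Beta parameters once.
post_a = prior_a + outcome
post_b = prior_b + (1 - outcome)

prior_mean = prior_a / (prior_a + prior_b)   # 0.40
post_mean = post_a / (post_a + post_b)       # ~0.43 -- and no further updates
```

No matter how the curve over Y was shaped, this is the only data point the model will ever see, so the "true" Y stays unidentifiable.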

Is that what you have in mind?

Comment author: iarwain1 13 August 2014 05:08:48PM *  1 point [-]

I think that is what I had in mind, but from the way you're describing it, it sounds like this hasn't been discussed as a specific technique for visualizing belief probabilities.

That surprises me, since I've found it to be very useful, at least for intuitively getting a handle on my confidence in my own beliefs. When deciding what probability to assign to belief X, I don't just give it a single probability estimate, and I don't even give it a probability estimate with the qualifier that my confidence in that probability is low/moderate/high. Rather, I visualize a graph with (usually) a bell curve peaking at the probability estimate I'd assign, whose width represents my certainty in that estimate.

To me that's a lot more nuanced than just saying "50% with low confidence". It has also helped me communicate my views on a given belief to others. I'd also suspect that you can do a lot of interesting things by mathematically manipulating and combining such graphs.
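One simple example of combining such graphs (assuming, purely for illustration, that each belief curve is a Beta density): multiplying two densities, sometimes called logarithmic opinion pooling, keeps you inside the Beta family:

```python
def pool_betas(a1, b1, a2, b2):
    """Combine Beta(a1, b1) and Beta(a2, b2) by multiplying their densities;
    the renormalized product is again a Beta distribution."""
    return a1 + a2 - 1, b1 + b2 - 1

# Two belief curves with the same 40% mean but different widths:
a, b = pool_betas(8, 12, 4, 6)   # -> Beta(11, 17)
pooled_mean = a / (a + b)        # still close to 0.4, but a narrower curve
```

The pooled curve is narrower than either input, which matches the intuition that two independent agreeing opinions justify more confidence than one.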

Comment author: Lumifer 13 August 2014 05:19:00PM 1 point

One problem is that it's turtles all the way down.

What's your confidence in your confidence probability estimate? You can represent that as another probability distribution (or another model, or a set of models). Rinse and repeat.
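The regress can at least be written down (everything here is a made-up illustration): put a hyperprior on how wide the curve over Y is, then sample down through the levels:

```python
import random

random.seed(1)

def draw_y():
    # Level 3: uncertainty about the width of the curve over Y itself --
    # draw a concentration k, where larger k means a narrower Beta over Y.
    k = random.uniform(5, 50)
    # Level 2: Y ~ Beta(0.4*k, 0.6*k), which has mean 0.4 for any k.
    return random.betavariate(0.4 * k, 0.6 * k)

# Level 1: the event itself would be a single Bernoulli(Y) draw.
samples = [draw_y() for _ in range(100_000)]
```

Each added level changes the spread of the implied curve over Y but not its mean, and one could keep stacking levels indefinitely.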

Another problem is that it's hard to get reasonable estimates for all the curves you want to mathematically manipulate. Of course you can wave your hands and say that a particular curve exactly represents your beliefs, and no one can say it ain't so, but fake precision isn't exactly useful.