
Locaha comments on Open thread for January 1-7, 2014 - Less Wrong Discussion

2 Post author: NancyLebovitz 01 January 2014 03:54PM




Comment author: Locaha 01 January 2014 04:10:46PM *  -2 points [-]

People who post probability estimates of anything should explain in detail how they arrived at them. Otherwise it should be called not a probability estimate but pulling it out of your ass.

Seriously, stuff like "1% FAI success by 2100"? When there is no clear definition of AI in sight? Just stop.

Comment author: [deleted] 03 January 2014 09:52:23AM 7 points [-]
Comment author: RowanE 01 January 2014 05:05:09PM 9 points [-]

All beliefs are probability estimates, although it can be hard to trace how a particular belief got to the degree of confidence it's at. While it might be a nice norm to have in a perfect world, I think it's unreasonable to demand that every time someone expresses how confident or unconfident they are in a belief, they also spell out the entire precise history of that belief's presence in their mind.

Comment author: jsteinhardt 01 January 2014 10:30:15PM 11 points [-]

All beliefs are probability estimates

Apologies for the curmudgeonliness, but it really bugs me when people say things like this. The actual version of this statement that is true is

All coherent actions can be modeled as arising from beliefs that correspond to probability estimates

which is different and much weaker, as now we can argue about how important coherence is relative to other desiderata. One such desideratum is correspondence to reality, which I believe is Locaha's point above. Personally, I would much rather have incoherent beliefs that correspond to reality than coherent beliefs that do not correspond to reality.

Comment author: Douglas_Knight 04 January 2014 07:53:53PM *  0 points [-]

One such desideratum is correspondence to reality, which I believe is Locaha's point above.

I don't think this belief has much correspondence to reality.

Comment author: passive_fist 01 January 2014 09:16:26PM 4 points [-]

Whenever I post a probability estimate, it is solely for the purpose of making my position more clear, not as something anyone should use to actually update their beliefs. You should always consider probability estimates as rough information about how the person who made the estimate thinks, not as a factual bit of information about the prediction itself.

Comment author: Locaha 02 January 2014 06:05:32AM -1 points [-]

I don't think you make anything clearer by translating your intuition's "unlikely" to N%, while my "unlikely" is M%, where M != N. You just create a false impression of having done a calculation (which, unlike intuition, can be confirmed).

Comment author: gjm 02 January 2014 12:25:01PM *  6 points [-]

Suppose passive_fist translates "unlikely" as 2% and Locaha translates "unlikely" as 12%. This could mean either of two things (or some combination of them). (1) passive_fist applies the word "unlikely" to things that feel more unlikely, corresponding to lower probability estimates when forced to quantify. (2) Both actually think much the same about the event in question, as shown by their use of the same word, but they have quite different processes (at least one of them very inaccurate) for translating those thoughts into numbers.

In case 1, quantifying helps to clarify that the two people involved mean quite different things by "unlikely". There may be a lot of fuzziness about the numbers, but once we have them we can see that passive_fist will likely be much more surprised if something s/he calls "unlikely" happens, than Locaha will be if something s/he calls "unlikely" happens.

In case 2, quantifying just adds confusion and error.

I would expect that (especially for analytical quantitative types like most of LW's readership) the truth is something like this. We think, mostly, in fuzzy terms that don't correspond directly either to numbers or to words. There will be some region of subjective likelihood-feeling space that corresponds (e.g.) to the number 2% or 12%. There will be some region that corresponds (e.g.) to the word "unlikely". These correspondences will all work differently for different people, but (a) there will generally be more consistency between one person's "10%" and another's than between one person's "unlikely" and another's, and (b) the finer-grained information you get by asking for probability estimates does have some value, provided you've wit enough not to imagine that everything expressed numerically is known accurately.

[EDITED to fix formatting screwup.]

Comment author: [deleted] 03 January 2014 09:53:45AM 1 point [-]

Plus, some people here use stuff like PredictionBook to check whether the intuition they call "10%" is actually correct 10% of the time.
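The check described here can be sketched in a few lines of Python. The records below are invented purely for illustration, and `calibration` is a hypothetical helper, not PredictionBook's actual API:

```python
from collections import defaultdict

# Hypothetical PredictionBook-style records: (stated probability, did it come true?).
# The data here are made up for illustration only.
records = [
    (0.1, False), (0.1, False), (0.1, True), (0.1, False), (0.1, False),
    (0.1, False), (0.1, False), (0.1, False), (0.1, True), (0.1, False),
    (0.9, True), (0.9, True), (0.9, False), (0.9, True), (0.9, True),
]

def calibration(records):
    """Group predictions by stated probability; return the observed frequency per bucket."""
    buckets = defaultdict(list)
    for p, outcome in records:
        buckets[p].append(outcome)
    return {p: sum(outcomes) / len(outcomes) for p, outcomes in sorted(buckets.items())}

print(calibration(records))  # {0.1: 0.2, 0.9: 0.8}
```

A well-calibrated forecaster's "10%" bucket should come true roughly 10% of the time; in the made-up records above it comes true 20% of the time, which is exactly the kind of discrepancy this practice is meant to surface.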

Comment author: ChristianKl 01 January 2014 05:08:12PM 2 points [-]

People who post probability estimates of anything should explain in detail how they arrived at them. Otherwise it should be called not a probability estimate but pulling it out of your ass.

Probabilities are useful for being precise about the claims that you are making. There's no reason why one shouldn't be precise about the claim one is making even when one doesn't use a formal method to arrive at them.

Comment author: kalium 01 January 2014 05:19:57PM 3 points [-]

If you don't use a precise method to arrive at your claim, you have no business making a precise claim. Remember significant figures from high school chemistry? Same principle.

Comment author: gjm 01 January 2014 06:36:28PM 9 points [-]

I think this is an error. (And so are "significant figures" as commonly used.) 2.4 +- 2 and 2.0 +- 2 are quite different estimates even though you wouldn't (according to conventional wisdom) be justified in giving more than one "significant figure" for either.

Using the number of digits you quote to indicate how accurately you think you know the figure, as well as to say what the number is, is a hack. It's a convenient hack sometimes, but that's all it is. Everyone knows not to round intermediate results even when starting with low-precision numbers. Well, your final result might be used by someone else as an intermediate result in some bigger calculation.

The same goes for probabilities. It is very important to know when your estimate of a probability is very inaccurate -- but that's no reason to refuse to estimate an actual probability. Even if you just pulled it out of your arse: doing that makes it a very unreliable probability estimate but it's still a probability estimate.
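The point about 2.4 +- 2 versus 2.0 +- 2 can be made concrete with a toy error-propagation sketch. The `scale` helper and the (value, absolute error) representation are deliberate simplifications for illustration, not a full uncertainty-propagation scheme:

```python
# 2.4 +- 2 and 2.0 +- 2 are different estimates, and rounding the central value
# early (to "one significant figure") injects real error into later calculations.

def scale(value, error, factor):
    """Propagate a (value, absolute error) pair through multiplication by a constant."""
    return value * factor, abs(error * factor)

unrounded = scale(2.4, 2.0, 10)  # centre 24, error 20
rounded = scale(2.0, 2.0, 10)    # centre 20, error 20: early rounding shifted the centre by 4
print(unrounded, rounded)
```

Both inputs would be quoted as "2 +- 2" under the usual significant-figures convention, yet after one multiplication their centres differ by 4, a fifth of the error bar.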

Comment author: kalium 02 January 2014 12:02:56AM 1 point [-]

I won't deny that significant figures are a crap implementation of the principle I'm talking about. But you have to propagate the uncertainty and include it, in some way, in your final answer, either numerically or via some explanation that might let me figure out how precise your answer is.

Don't say "1% probability of FAI success by 2100." Say "0.01-10% probability of FAI success by 2100, based on XYZ." Or if there's no numerical process behind it that can support even a range like that, just say "FAI success by 2100 seems unlikely."

Comment author: gjm 02 January 2014 12:33:35AM 1 point [-]

Agreed. Though in the latter case you might still do best to give numbers: "Somewhere around 1%, but this is a wild guess so don't take it too seriously." This is not the same statement as the corresponding one with 2% instead of 1%, even though both might be reasonably accurately paraphrased as "unlikely" or even "very unlikely".

Comment author: cousin_it 01 January 2014 09:10:43PM 2 points [-]

Shalizi had a nice post about that.

Comment author: kalium 03 January 2014 08:47:15PM *  0 points [-]

But the "50% probability of Situation A (2% probability of FAI in 100 years) and 50% probability of Situation B (0% probability of FAI in 100 years)" is much more informative to the reader than "1% probability of FAI in 100 years." It exposes more about which parts of the estimate are pulled out of the writer's ass. If I know something the writer doesn't about any one of these component probabilities, I can update my own beliefs, or discuss the estimate, more usefully this way than if I'm just given a flat "1%."

Comment author: cousin_it 03 January 2014 09:02:34PM 0 points [-]

Anna and Steve had a nice post about that.

Comment author: ChristianKl 01 January 2014 05:58:32PM *  2 points [-]

If you don't use a precise method to arrive at your claim, you have no business making a precise claim. Remember significant figures from high school chemistry? Same principle.

That assumes that someone isn't calibrated. If someone calibrates his intuition via frequent usage of prediction book and by always thinking in terms of probability he might be able to make precise claims without following a precise method.

If someone claimed a "1.21% chance of FAI success by 2100" I would agree with you that the person didn't learn the lesson about significant figures from high school chemistry. I don't see that issue with someone claiming a 1% chance.

If you want to get calibrated, it's also useful to start putting numbers on a lot of likelihoods that you think about, even if the precision is sometimes too high. It allows you to be wrong, and that's good for learning.

Comment author: jsteinhardt 01 January 2014 06:16:59PM 2 points [-]

I think it's likely that calibration is domain-specific, so I'm not sure I buy this unless the calibration has occurred in the same domain, which is rare/impossible for the domains we're talking about.

Comment author: ChristianKl 01 January 2014 06:28:07PM 0 points [-]

I think it's likely that calibration is domain-specific, so I'm not sure I buy this unless the calibration has occurred in the same domain, which is rare/impossible for the domains we're talking about.

I think you can argue that the probability is inherently unknowable but I don't see how a detailed process is much better than an intuitive process.

It's very useful to have the mental ability to distinguish between 0.01, 0.001 and 0.0001 when it comes to thinking about XRisk events. I don't think it's good practice to call all of those events unlikely and avoid making semantic distinctions between them.

Comment author: Locaha 01 January 2014 07:19:36PM *  -1 points [-]

It's very useful to have the mental ability to distinguish between 0.01, 0.001 and 0.0001 when it comes to thinking about XRisk events. I don't think it's good practice to call all of those events unlikely and avoid making semantic distinctions between them.

But how do you arrive at them? Intuition doesn't deal with 0.01 and 0.00001. Intuition deals with vague notions of likely and unlikely, which also change depending on what you ate for lunch and the phase of the moon. IOW, your intuition is useless to me unless I can confirm it myself. (But then it's not intuition anymore.)

Comment author: [deleted] 03 January 2014 09:58:58AM *  2 points [-]

Intuition doesn't deal with 0.01 and 0.00001.

If there's a 0.01 chance that something happens tomorrow, then if everything stays the same you'd expect that thing to happen about three or four times this year, whereas if it's 0.00001 you'd be quite surprised if it ever happens (EDIT: during your lifetime, assuming no cryonics/antiagathics/uploads). (Of course with stuff like x-risk intuition will be much less reliable.)
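The intuition pump in this comment is one line of arithmetic: the expected number of occurrences of an independent daily event is just the daily probability times the number of days. (The 80-year lifetime below is an assumed round figure, and `expected_occurrences` is a name invented for this sketch.)

```python
def expected_occurrences(p_per_day, days):
    """Expected count of an event with daily probability p_per_day over a horizon of `days`."""
    return p_per_day * days

per_year = expected_occurrences(0.01, 365)              # ~3.65: "three or four times this year"
per_lifetime = expected_occurrences(0.00001, 80 * 365)  # ~0.29: probably never in a lifetime
print(per_year, per_lifetime)
```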

Comment author: ChristianKl 01 January 2014 07:30:07PM 2 points [-]

But how do you arrive at them? Intuition doesn't deal with 0.01 and 0.00001. Intuition deals with vague notions of likely and unlikely, which also change depending on what you ate for lunch and the phase of the moon.

I think there are plenty of cases where I can give you an intuitive answer that won't change from 0.01 to 0.00001 depending on what I ate for lunch.

The chance that I die in the next year is higher than 0.00001 but lower than 0.01.

If you don't have an intuition that allows you to do so, I think it's because you don't have enough exposure to people making distinctions between 0.01 and 0.00001.

Comment deleted 01 January 2014 06:11:17PM [-]
Comment author: Locaha 01 January 2014 05:54:39PM -1 points [-]

Probabilities are useful for being precise about the claims that you are making. There's no reason why one shouldn't be precise about the claim one is making even when one doesn't use a formal method to arrive at them.

How is the belief of some random person X in some vaguely-defined event many years in the future useful for anything but research into person X's state of mind? Even if it's specified to 1000 significant figures?

Comment author: ChristianKl 01 January 2014 06:09:02PM *  3 points [-]

If you are reading the text of a person, you presumably care about that person's state of mind and what they believe. If you don't, why read the text in the first place?

I do think there's a difference between someone thinking an event is unlikely with p=0.2, p=0.01 or p=0.0001. It's worthwhile to put a number on the belief to communicate the likelihood.

If people frequently provide likelihoods you can also aggregate the data.

Comment author: NancyLebovitz 07 January 2014 11:01:33AM 1 point [-]

People who post probability estimates of anything should explain in details how they arrived at them.

I'd operationalize this as a recommendation that people ask "Why do you think so?" when they see a probability estimate.