Alternatively, what single concept from statistics would most improve people's interpretations of popular news and daily life events?


The idea of ALL beliefs being probabilities on a continuum, not just belief vs disbelief.

Doesn't the word "ALL" make your statement self-contradictory?

The statements being believed in don't have to be on continuums (continui?) for belief in them to be represented as probabilities on a continuum; "I am X% certain that Y is always true".


continui?

continua.

My statement itself isn't something I believe with certainty, but adding that qualifier to everything I say would be a pointless hassle, especially for things I believe with near enough certainty that my mind feels certain of them. The "ALL" is itself part of the statement I believe with near certainty, not a qualifier of that statement. Sorry I didn't make that clearer.

OK, and appropriate when writing on LW. But I wonder if part of the reason most people don't think of "beliefs being probabilities on a continuum" is that even statistically literate people don't usually bother qualifying statements that if taken literally would mean they held some belief with probability 1.

No, it just makes it something other than a belief: an axiom, a game-rule, a definition, a tautology, etc.

It's a belief about beliefs.

That's true, but it's hard to see why that means that it would be a contradiction. It's true that there is a contradiction if you say that all beliefs have a specific mathematical probability of less than one (e.g. including that 1+1=2), since probability theory also assumes that the probability of a mathematical claim is 1. But probability theory isn't supposed to be an exact representation of human beliefs in the first place, but a formalized and idealized representation. In reality we are not always completely certain even of mathematical truths, and this does not cause the existence of a contradiction, because this uncertainty, considered in itself, is not something mathematical.

You could say in the same way that all beliefs are uncertain, including this one, without any contradiction, just as it is not a contradiction to say that all sentences are made of words, including this one.

I interpreted the statement as basically "I am CERTAIN that you can never be certain of anything." I almost didn't post a response because I thought the author might have been deliberately being sarcastic.

This, a million times!
How many biases are based on this alone? It's discomforting...

I think a lot about signal detection theory, and I think that's still the best I can come up with for this question. There are false positives and there are false negatives, and both are important to keep in mind; the cost of reducing one is an increase in the other, and humans and human systems will always have both.

So, for example, even the most over-generous public welfare system will leave some deserving people off the dole, and even the most stingy system will have undeserving recipients (by whatever definition). So the question (for a welfare system, say) isn't "how do we prevent abuse?" but "how many abusers are we willing to tolerate for every 100 deserving recipients we reject?" Also useful in lots of medical discussions, legal discussions, pop science discussions, etc.
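A minimal sketch of that tradeoff, with made-up numbers (two overlapping normal distributions standing in for the screening scores of deserving and undeserving applicants; no real welfare data here):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "merit scores": deserving applicants tend to score higher,
# but the two distributions overlap, so no cutoff separates them cleanly.
deserving = rng.normal(loc=1.0, scale=1.0, size=100_000)
undeserving = rng.normal(loc=0.0, scale=1.0, size=100_000)

for cutoff in (-1.0, 0.0, 0.5, 1.0, 2.0):
    miss_rate = np.mean(deserving < cutoff)       # deserving people rejected
    false_alarm = np.mean(undeserving >= cutoff)  # undeserving people accepted
    print(f"cutoff {cutoff:+.1f}: reject {miss_rate:6.1%} of deserving, "
          f"accept {false_alarm:6.1%} of undeserving")
```

Sliding the cutoff in either direction shrinks one error rate only by inflating the other; no cutoff drives both to zero.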

The very basics of probability. I'm talking at the level of "there is about a 1 in 6 chance of a reasonably fair die coming up 3 on a single roll".

I remember a friend telling me about a game some of his classmates played which was basically about calling high/low on the next card dealt.

He'd made a modest and steady income simply calling based on whether it was greater or less than 7 for the first few cards and he was known as being "lucky". They honestly couldn't comprehend something as simple as that.
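Assuming the obvious version of the rules (you see one card, then call whether the next will be higher or lower), a quick Monte Carlo sketch of why calling relative to 7 wins:

```python
import random

# Strategy: call "higher" when the visible card is below 7, "lower" when
# it is above 7, coin-flip on a 7. Ranks: ace=1 .. king=13, four of each.
deck = [rank for rank in range(1, 14) for _ in range(4)]

wins = ties = 0
trials = 200_000
for _ in range(trials):
    shown, nxt = random.sample(deck, 2)  # two cards, without replacement
    if shown == nxt:
        ties += 1  # assume a tie is a push and no money changes hands
        continue
    call_higher = shown < 7 or (shown == 7 and random.random() < 0.5)
    if (nxt > shown) == call_higher:
        wins += 1

print(f"win rate on decided hands: {wins / (trials - ties):.1%}")  # ~77%
```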

Absolutely. Not to mention all the "after a string of red, black is more likely" people... and there are a lot of them out there.

"after a string of red, black is more likely" people

Happens to be true for sampling from a finite set without replacement :-P
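For example, dealing from a single 52-card deck: after five reds in a row, 21 red and 26 black cards remain, so

$$P(\text{black next}) = \frac{26}{47} \approx 0.553 > \frac{1}{2}.$$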

You got me ;-) I should have specified "at the roulette table"... I'm new here, still have to get used to you guys.

This reminds me of something I've heard in regard to fixed games in sports.

People have this idea that fixed games are unlikely because it's too big a conspiracy to not be found out. It would be obvious that one team was throwing the game, or that a referee was being unfair.

However, corruption in sports can be pretty simple and hard to notice. For instance, in a basketball game, an official could make the over-under more likely to pay out the over bet just by calling ~10% more fouls in any given game. This could mean blowing the whistle for a foul just 5-7 more times in a 48-minute game, allowing the teams extra free throws, which are high-probability opportunities for extra points. Since foul calls in basketball are very subjective, it would be very difficult to detect this method of corruption.

More importantly to this discussion, the type of game fixing described above need not be guaranteed to cause the desired outcome in any given game. In fact, it's better for the scheme to be very subtle over the course of many games so as to avoid detection.

If you wager enough money, it would be statistically quite lucrative to push the probability in your favor by just a few percentage points: $1M per game × 82 games in a season × 30 teams × a 52% or 53% probability of winning.
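Spelling out the arithmetic (assuming even-money bets and ignoring the bookmaker's vig):

$$0.52 \times \$1\text{M} - 0.48 \times \$1\text{M} = \$40{,}000 \text{ expected profit per game},$$

or about $3.3M over an 82-game season; at 53% it rises to $60,000 per game.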

It's also a lot harder to detect "point shaving" - winning by less of a margin than expected - than it is to detect someone deliberately choosing to lose a game outright.

That the statement "X causes Y" is almost meaningless without knowing how much.

Bread gives you cancer? Really? OH MY GOD! (small print, only 1 in a [huge number] chance)

But most people seem to have only 3 levels of belief about things:

"X does", "X does not", and "X maybe does", which they round to 1, 0, and 0.5 respectively.

You'll find yourself having conversations with these people along the lines of

Nutter:"You shouldn't let children do that! it causes cancer!"

You:"there's less than one in a million chance assuming they do this every day of their lives"

Nutter:"SEE! IT CAUSES CANCER! I KNEW IT! YOU MONSTER "

Most people are depressingly thick.

The Law of Truly Large Numbers. And that 1-in-a-million experiences are actually super common in a world with 7B+ people. I have a background in the sort of Christianity that emphasizes the reality of miracles and apparently unexplained phenomena, so this would likely help soothe that annoyance.
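The back-of-the-envelope version, assuming just one eligible "moment" per person per day:

$$7 \times 10^{9} \text{ people} \times \frac{1}{10^{6}} = 7{,}000 \text{ one-in-a-million events every day.}$$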

Noise happens. Even if X is predictive of Y, it's rarely perfectly predictive.

For instance, suppose that 1000 students take a math test, then take a different math test that covers the same material with different problems. It is highly likely that their rankings on the two tests will be strongly correlated. It is highly unlikely that their rankings on the two tests will be exactly the same.

And it is quite possible that a few students will do vastly better on one test than the other, due to things that have nothing particularly to do with their mathematical ability. If you give a math test to a sufficiently large student population, then some student's boyfriend will have gotten hit by a car on the morning of the math test. That will probably mess with their scores.
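A quick simulation of that point (noise levels invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical model: score = true ability + independent test-day noise.
ability = rng.normal(size=1000)
test1 = ability + rng.normal(scale=0.5, size=1000)
test2 = ability + rng.normal(scale=0.5, size=1000)

r = np.corrcoef(test1, test2)[0, 1]
ranks1 = test1.argsort().argsort()  # rank of each student on test 1
ranks2 = test2.argsort().argsort()  # rank of each student on test 2

print(f"correlation between tests: {r:.2f}")  # strong, ~0.8
print(f"students with identical rank: {np.mean(ranks1 == ranks2):.1%}")  # very few
print(f"largest rank swing: {np.abs(ranks1 - ranks2).max()} places")
```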

The desire to know error estimates and confidence levels around assertions and figures, or better yet, probability mass curves. And a default attitude of skepticism towards assertions and figures when they are not provided.

That when you've already picked someone or something out of the general population based on a particular property, you cannot then use those same criteria to come to conclusions about them.

If you use a DNA matching technique with a 1 in a million chance of a false positive to pick your suspect out of a large database of people, you cannot then use that "1 in a million chance" as part of the evidence against them. Yet courts absolutely would. (Doing it the other way round, selecting one person and then running the test, is perfectly reasonable.)
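A sketch of why the database trawl changes things (numbers hypothetical):

```python
# Hypothetical figures: a one-in-a-million false-positive rate and a
# database trawl of a million profiles.
database_size = 1_000_000
false_match_rate = 1e-6

# Even if the true culprit is NOT in the database at all, the chance the
# search still turns up at least one (innocent) match:
p_some_match = 1 - (1 - false_match_rate) ** database_size
print(f"{p_some_match:.0%}")  # ~63% -- a "hit" alone is weak evidence
```

The quoted error rate describes one pre-selected test, not a million of them run at once.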

95% of all statistics are made up. It's very easy to make up data or confuse people with bad statistical treatment, but most science reporters and news media don't bother; they just honestly misunderstand the source data instead. If you can't check the statistical technique in detail yourself, and you don't very highly trust the source to do so (hint: news media are almost never trustworthy), you should treat any statistical claims as basically uncorrelated with reality.

Yeah, that's exactly what I would say as well.

This may not be strictly statistical, but I would choose the idea that in order to make any meaningful statement with data, you always have to have something to compare it to.

Like, someone will always come into some political thread and say, "X will increase/decrease Y by Z%." And my first thought in response is always, "Is that a lot?"

For a recent example I saw, someone showed a graph of Japanese student suicides as a function of day of the year. There were pretty high spikes (about double the baseline value) on the days corresponding to the first day of each school semester. The poster was attributing this to Japanese school bullying and other problems with Japan's school system.

My first thought was, "wait. Show me that graph for other countries. For the world, if such data has been reliably gathered." If it looks the same, it's not a uniquely Japanese problem. What if it's worse in other countries, even?

Yeah, I'd really like to see people stop using information where it doesn't mean anything in isolation. A lot of people think that controls in science exist to make sure that the effects you see aren't spurious or adventitious. It's not like that's wrong, but it's deeper and even more fundamental than that.

I'm a scientist, so let me give you an example from my research (grossly simplified and generalized for brevity).

Substance A was designed so that it manifests an as-yet-unexplored type of structural situation. We then carried out a reaction on substance A to see what some of the effects of this situation are. Something happened.

So, if we were to leave it at that, what would we have learned? Nothing. We need substance B, which does not have that situation going on but is otherwise as similar to A as we can make it, to see what IT does, and whether it does anything different from A. See, we need to do the experiments on both B and A not to see whether the results of A are 'real'; we need to do it to see what the results even ARE in the first place.

I'd give people the ability to do multiple regressions in their head. Because I want to be able to do multiple regression in my head.

Why do you want to be able to do that? Do you mean that you want to be able to look at a spreadsheet and move around numbers in your head until you know what the parameter estimates are? If you have access to a statistical software package, this would not give you the ability to do anything you couldn't have done otherwise. However, that is obvious, so I am going to assume you are more interested in grokking some part of the underlying epistemic process. But if that is indeed your goal, the ability to do the parameter estimation in your head seems like a very low priority, almost more of a party trick than actually useful.

I think it would be very useful. I have access to software packages, but it takes effort to gather data, type it in, etc. If I could do it in my head, my mind mentally keeping track of observations and updating the parameters as I go through life, for all sorts of questions (does it look like rain today? how energetic do I feel today?), I'd be building accurate models of everything important in my life. It would be a different level of rationality.
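Something like this could in principle run incrementally; a minimal sketch (one predictor, stochastic gradient updates, learning rate invented for illustration):

```python
# Online simple linear regression: refine y ≈ a*x + b one observation
# at a time, with no stored dataset.
a, b = 0.0, 0.0
learning_rate = 0.01  # assumed; would need tuning in practice

def observe(x: float, y: float) -> None:
    """Nudge the parameters toward the latest observation."""
    global a, b
    error = (a * x + b) - y
    a -= learning_rate * error * x
    b -= learning_rate * error

# e.g. feed in (hours of sleep, energy level) pairs as life happens:
for x, y in [(7.0, 6.5), (5.0, 4.0), (8.0, 7.0), (6.0, 5.5)]:
    observe(x, y)
print(a, b)
```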

All probabilities are Bayesian, i.e., conditioned on some information I.
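In symbols: there is no bare $P(A)$, only

$$P(A \mid I), \qquad \text{updated by } P(A \mid D, I) = \frac{P(D \mid A, I)\,P(A \mid I)}{P(D \mid I)}$$

for new data $D$.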

I would explain about blocking, how people can be matched up by profession, socio-economic status, smoker or non-smoker, and various other traits, to make comparisons where those factors are assumed to be equal.

Generalized method of moments because "implanting" it in every head would require greatly increasing the intelligence of most of mankind.

I'd like to see people have a clue what a probability actually is. I'm tired of hearing how the weather forecast was "wrong".

I'd like to see people have a clue what a probability actually is.

Heh. It isn't that simple.

What precisely does "There is a 70% chance of rain tomorrow" mean?

What precisely does "There is a 70% chance of rain tomorrow" mean?

If you offered me the choice of two "lottery tickets", one of which paid $30 if it rained tomorrow and one of which paid $70 if it didn't, I wouldn't care which one I took.
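Spelled out: indifference means the two tickets have equal expected value, which pins down my probability $p$ of rain:

$$30\,p = 70\,(1 - p) \;\Longrightarrow\; p = 0.7.$$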


That surely can't be the right general answer, because the relationship between your attitude to getting $30 and to getting $70 will depend on your wealth now. (And also on your tolerance for risk, but you might be willing to argue that risk aversion is always irrational except in so far as it derives from diminishing marginal utility.)

You could switch from dollars to utilons, but then I think you have a different problem: we don't have direct access to our utility functions, and I think the best techniques for figuring them out depend on probability, which is going to lead to trouble if probabilities are defined in terms of utilities.

My question was about the probability of rain, not about what you would be willing to bet on. Besides, who's that "me", a perfect rational Homo Economicus or a real person? Offering bets to an idealized concept seems like an iffy idea :-)

"Probability of appreciable rainfall" * "fraction of specified area which will receive it" is 0.7.

Or, I guess more properly it should be an integral over possible rainfall patterns. But "70% of London will definitely see lots of rain, and 30% will see none" and "we have 70% credence that all of London will see lots of rain, and 30% credence that no rain will fall in London" would both be reported as 70% chance of rain in London.

https://en.wikipedia.org/wiki/Probability_of_precipitation
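The linked article gives the convention explicitly:

$$\text{PoP} = C \times A,$$

where $C$ is the forecaster's confidence that precipitation will occur somewhere in the area and $A$ is the fraction of the area expected to receive it.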

"Probability of appreciable rainfall"

And what does that mean?

You just replaced the word probability with credence/chance without explaining what's meant by it on a more basic level. The people you complain about, who don't know what probability means, also won't know what credence means.

I was talking about weather forecasts, not trying to explain probability.

I think you evaded the question Lumifer asked, then. The original post stated "I'd like to see people have a clue what a probability actually is." Then Lumifer asked what it actually is. Explaining weather forecasts is beside the main point.

Yes, I wasn't answering the question as intended. But both kithpendragon and Lumifer were talking about the weather forecast, and it does seem at least vaguely relevant that even if you know exactly what probability is, that's not sufficient to understand "70% chance of rain".

Okay, I might have been too harsh.


One possible answer, related to the concept of calibration, is this: it means that it rained in 70% of the cases when you predicted 70% chance of rain.

You've defined your calibration with respect to predicting rain. But I am not interested in calibration, I'm interested in the weather tomorrow. Does the probability of rain tomorrow even exist?

Naively, I would expect it to mean that if you take sufficiently many predictions (i.e. there's one made every day), and you group them by predicted chance (70%, 80%, etc. at e.g. 10% granularity), then in each bin, the proportion of correct predictions should match the bin's assigned chance (e.g. between 75% and 85% for the 80% bin). And so given enough predictions, your expected probability for a single prediction coming true should approach the predicted chance. With more predictions, you can make smaller bins (to within 1%, etc).
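A minimal sketch of that check, assuming the history comes as (predicted_chance, it_rained) pairs:

```python
from collections import defaultdict

def calibration_table(predictions, bin_width=0.1):
    """Group (predicted_chance, it_rained) pairs into bins and compare
    each bin's nominal chance with the observed frequency of rain."""
    bins = defaultdict(list)
    for p, rained in predictions:
        bins[round(p / bin_width) * bin_width].append(rained)
    for p_bin in sorted(bins):
        outcomes = bins[p_bin]
        print(f"predicted ~{p_bin:.0%}: rained {sum(outcomes) / len(outcomes):.0%} "
              f"of the time over {len(outcomes)} days")

# e.g. calibration_table([(0.7, True), (0.7, False), (0.8, True), ...])
```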

So, you're taking the frequentist approach: the probability is the fraction of the times the event happened as n goes to infinity? But tomorrow is unique. It will never repeat again -- n is always equal to 1.

And, as mentioned in another reply, calibration and probability are different things.

But tomorrow is unique. It will never repeat again -- n is always equal to 1.

The prediction is not unique. I group predictions (with some binning of similar-enough predictions), not days. Then if I've seen enough past predictions to be justified that they're well calibrated, I can use the predicted probability as my subjective probability (or a factor of it).

The prediction is not unique.

The trouble with this approach is that it breaks down when we want to describe uncertain events that are unique. The question of who will win the 2016 presidential election is one that we still want to be able to describe with probabilities, even though it doesn't make great sense to aggregate probabilities across different presidential elections.

In order to explain what a single probability means, instead of what calibration means, you need to describe it as a measure of uncertainty. The three main 'correctness' questions then are 1) how well it corresponds to the actual future, 2) how well it corresponds to known clues at the time, and 3) how precisely I'm reporting it.

That's correct: my approach doesn't generalize to unique/rare events. The 'naive' or frequentist approach seems to work for weather predictions, and creates a simple intuition that's easier IMO to explain to laymen than more general approaches.

this doesn't generalize.

What do you mean?

What Vaniver said: my approach breaks down for unique events. Edited for clarity.

What precisely does "There is a 70% chance of rain tomorrow" mean?

It means that the proportion of meteorological models that predict rain to those that don't is 7:3. Take an umbrella. ;)

It means that the proportion of meteorological models that predict rain to those that don't is 7:3

Yeah, that's an old joke, except it's told about meteorologists and not models.

But the question of "what a probability actually is" stands. You are not going to argue that it's a ratio of model outcomes, are you?

Perhaps I could have better phrased the complaint; I wasn't attempting to dive into the philosophical. The point was that the meteorologist is not "wrong" if it rains on a 30% chance or if the high temperature is off by a couple of degrees. Meteorologists deal with a lot of uncertainty (that they don't always communicate to us effectively). People need to understand that a 30% chance of rain only means that it likely won't rain (roughly 2:1 against). Still wouldn't hurt to take an umbrella.

As for the philosophical, I'd have to claim that a probability is a quantitative expression of predictive uncertainty that exists within an informational system such as the human brain or, yes, weather prediction models. Come to think of it, that might actually be helpful for people to understand the weather report. I just don't trust my coworkers to be able to parse most of those words.

The point was that the meteorologist is not "wrong" if it rains on a 30% chance

Well, is the forecast falsifiable, then? Can it be wrong? How would you know?

Probability is a quantitative expression of predictive uncertainty that exists within an informational system such as the human brain or, yes, weather prediction models.

So the probability exists purely in the map, but not in the territory? I am not sure quantum mechanics would agree.

Is the forecast falsifiable, then? Can it be wrong? How would you know?

Same way you know whether other probabilistic prediction systems are "wrong": keep track of accurate and inaccurate predictions, weighted by confidence levels, and develop a model of the system's reliability. Unreliable systems are probably "wrong" in some way. Individual predictions that express extreme confidence in an outcome that is not observed are "wrong". But I cannot recall having reason to accuse any meteorologists of either error. (Full disclosure: I don't care enough to make detailed records.)

I would also point out that the audience adds another level down the predictive rabbit hole. Weather forecasts usually predict for a large area. I've observed that weather can be significantly different between Hershey and Harrisburg in Pennsylvania. The two are less than a half-hour apart, and usually have identical forecast conditions. This further confounds the issue by adding the question of who is included in that 30% chance of rain. You could interpret it to mean a high degree of confidence that 30% of the forecast area will see rain. I have not seen an interview with a meteorologist that addressed that particular wrinkle.

So the probability exists purely in the map, but not in the territory? I am not sure quantum mechanics would agree.

Can't speak on quantum mechanics with much authority, but my suspicion is that there's something going on that we haven't yet learned to predict (or maybe don't have direct access to) on a quantum level. I seem to remember that quantum physics predicts more than [3 space + 1 time] dimensions. Since I don't appear to have access to these "extra" dimensions, it seems intuitive that I would be as ineffective at predicting events within them as Flatlanders would be at predicting a game of pool as seen from a single slice perpendicular to the table. They might be able to state a likelihood that (for example) the red circle would appear between times T1 and T2 and between points P1 and P2, but without a view of the plane parallel to the table and intersecting with the balls they would really only be making an educated guess. The uncertainty exists in my mind (as limited by my view), not in the game. I suspect something similar is likely true of Physics, though I'm aware that there are plenty of other theories competing with that one. The fact of multiple competing theories is, in itself, evidence that we are missing some important piece of information.

I expect time will tell.

Same way you know if other probabilistic prediction systems are "wrong"

I asked about a single forecast, not about a prediction system (for which, of course, it's possible to come up with various metrics of accuracy, etc.). Can the forecast of 70% chance of rain tomorrow be wrong, without the quotes? How could you tell without access to the underlying forecasting system?

but my suspicion is that there's something going on that we haven't yet learned to predict

So your position is that reality is entirely deterministic, there is no "probability" at all in the territory?

So your position is that reality is entirely deterministic, there is no "probability" at all in the territory?

I feel that is most likely, yes.

Unfortunately, my weather forecast doesn't tell me it will be between 10 and 15 degrees with 80% probability; it tells me the forecast for tomorrow is 12 degrees. As such, it makes more sense to say it was wrong.

Certainly it is easier to say it was wrong. Meteorologists actually do see the error bars &c., then they dumb it down so most people can grasp what they're saying. I understand there is ongoing discussion as to what kind of balance is appropriate between being precise and being understandable. Unfortunately, status quo bias seems to be dictating the outcome of that discussion, and much of the information in meteorological models is never provided to the general public as a result.

I think most people would be perfectly able to understand "the temperature is going to be between 10 and 15 degrees" instead of "the temperature is going to be 12 degrees".

Then the meteorologist can use whatever probability he considers appropriate.

Unfortunately, status quo bias seems to be dictating the outcome of that discussion

Yes, and the status quo is wrong. It makes sense to say it's wrong. People in charge really do screw up by staying with the status quo. Making excuses for it doesn't help.

That's especially true today, when I get my weather information from Google or from Windows. In both cases it would be easy to provide an interface that lets me see proper statistics about the weather.

Google knows a lot about me. It could even guess that I want proper statistics.

The status quo is certainly wrong when it comes to the presentation of weather related data. The report is badly oversimplified due to several effects including the (over)estimated gap in understanding of statistics between meteorologists and the general public.

A 30% chance of precipitation is not, however, "wrong" if it does in fact rain. It merely expresses a fairly high degree of uncertainty in the claim "it will/won't rain today". The claim that such a report means the meteorologist was wrong (or somehow lying) is the subject of my complaint, not the format of the report itself (which I agree is abysmally deficient).

Do you think "I was just dumbing things down" is generally a valid excuse when people state that you are making wrong statements?

I think lying does include an attempt at deception which I agree isn't there on the part of meteorologists.

Wait a sec.. didn't we have a thread like this some time ago?

I'd say there is a non-negligible probability.