LessWrong user dynomight explains how arithmetic is an underrated world-modeling technology and uses dimensional algebra as the motivating case. I agree dimensional algebra is fantastic, but there’s an even better motivating example for arithmetic in world-modeling: linear models for prediction.

Simple linear models outperform experts

In 1954, Paul Meehl published what he would later refer to as “my disturbing little book”. This book[1] contains the most important and well-replicated research I know of; yet most people don’t know about it. The basic argument is that many real-world phenomena – even fickle ones – can be adequately modeled with addition and multiplication.

Tempered by the lack of evidence at the time, the book doesn’t go quite as far as it could have. Here are some statements that have later turned out to be true, given in order of increasing outrageousness.

  • Simple linear models outperform experts.
  • Simple linear models outperform experts when the experts get access to additional information that the model does not get.
  • Simple linear models outperform experts when the experts also get to see and use the output of the linear model.
  • Simple linear models trained only on experts’ judgments and not the actual outcome outperform the very experts they were trained on.
  • Simple linear models with random weights outperform experts.
  • Simple linear models with equal weights (i.e. a tallying heuristic) outperform experts.
  • Simple linear models with equal weights when limited to three predictors still outperform experts.

Obviously, these are phrased to provoke, and take additional nuance to be fully understood, but the general theme remains the same: addition and multiplication take you surprisingly far.
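
To make the claims concrete, here is a minimal sketch of what “simple linear model” means throughout this post. The predictor values and weights are invented purely for illustration.

```python
# A minimal sketch of a "simple linear model": a weighted sum of predictors.
# Predictor values and weights below are made up for illustration only.

def linear_score(x, w):
    """Weighted sum: all the arithmetic these models need."""
    return sum(wi * xi for wi, xi in zip(w, x))

# Three standardised predictors for one hypothetical case.
x = [1.2, -0.3, 0.8]

fitted = linear_score(x, [0.5, 0.2, 0.4])  # weights estimated from data
tally = linear_score(x, [1.0, 1.0, 1.0])   # equal weights: just count
print(fitted, tally)
```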

Predictive modeling is more important than explanatory modeling

There are two kinds of modeling: predictive and explanatory.[2]

  • Predictive modeling. Figure out how things are correlated: when X has been observed, is Y generally more or less likely to happen?
  • Explanatory modeling. Figure out how things are causally linked: when X happens, does that trigger Y to happen too?

Most people reason about the world through explanatory modeling, and are uncomfortable with predictive modeling. I would argue that predictive modeling is the better approach.

Predictive modeling has the funny property that it is acausal and atemporal: we can predict the risk of someone being in an accident when they are driving drunk, but we can also predict the probability that someone was drunk when we have observed an accident. I think this is what makes people step away from predictive models. Once we are using consequences to predict antecedents, we are flouting all the nice logic the Greeks came up with – even though it has been a while since Bayes and Laplace taught us this is fine.[3]
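
To see the two directions in one place, here is a small worked example. All the numbers are invented for illustration, not real traffic statistics; the point is only that the same joint distribution answers both the forward and the backward question.

```python
# Hypothetical numbers, chosen only to illustrate the acausal/atemporal point.
p_drunk = 0.02              # assumed base rate of drunk trips
p_acc_given_drunk = 0.010   # assumed accident risk on a drunk trip
p_acc_given_sober = 0.001   # assumed accident risk on a sober trip

# Forward: predict accidents from drunkenness.
print(p_acc_given_drunk)

# Backward: predict drunkenness from an observed accident, via Bayes.
p_acc = p_drunk * p_acc_given_drunk + (1 - p_drunk) * p_acc_given_sober
p_drunk_given_acc = p_drunk * p_acc_given_drunk / p_acc
print(p_drunk_given_acc)    # roughly 0.17 with these made-up numbers
```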

Predictive modeling doesn’t really care what was the cause, or what came first. It is all about figuring out which observations tend to come together.

In fact, I’d go further and argue that explanatory modeling is just a mistaken approach to predictive modeling. Why do we want to understand how things work? To make better decisions. But we don’t really need to understand how things work to make better decisions, we just need to know how things will react to what we do. Knowing how they work can be an aid in that, but when it is, predictive modeling will also pick up on the causal factors.

What makes predictive modeling a better idea is that it also allows us to find factors that are not causal, but still useful.[4]

In defense of explanatory modeling

Something Pearl emphasises in his book on causal inference[5] is one of the primary strengths of causal reasoning: the stability of the relationships it uncovers.[6]

Using drunk driving as an example, it would be reasonable to believe that drunk driving does not cause accidents at all, it is just that people who drive under the influence tend to also behave recklessly in other ways, and this is what causes accidents. But we can test that: we can get randomly selected people drunk and then put them in a driving simulator, and note whether the drunk group has a higher accident rate. They do, because drunk driving is causally associated with accidents, meaning the relationship is stable even when we control for other factors like personality type.

Predictive models are not guaranteed to be stable like causal models. What we need to do with predictive modeling is find out under which conditions the models hold, and find alternative models for when those conditions are violated. The upshot is that predictive models are much easier to construct, meaning we can take advantage of modeling for a much lower cost.

When arithmetic beats expertise

Meehl lists several comparisons of expertise and arithmetic in his disturbing little book. I strongly encourage you to read the book for more motivation and evidence, but here are some representative examples:

  • Linear regression on just the two variables (1) high school percentile rank and (2) score on a college aptitude test predicted GPA just as well as doctorate-level, well-trained student counselors who had access to way more information (interview notes, additional tests, individual record form, etc.) – see the sketch after this list.
  • Linear regression with somewhat arbitrary weights on a 30-item questionnaire predicted improvements in schizophrenics much better than a team of psychiatrists discussing the same cases. The psychiatrists got on average 44 % correct, whereas the linear regression achieved 81 %.
  • Tallying up 15 factors known to correlate with criminal behaviour predicted criminal recidivism just as well as a prison medical officer.
  • Test scores of nearly 40,000 naval enlistees correlated better with their training success than the evaluations of interviewers – who had the same test scores in front of them.
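
To make the first bullet concrete, here is a sketch of how little machinery the statistical side needs. The data is randomly generated stand-in data (not Meehl’s), and the coefficients are invented; the recipe is just ordinary least squares on two columns.

```python
# Sketch: predict GPA from two variables, using invented stand-in data.
import numpy as np

rng = np.random.default_rng(0)
n = 200
hs_rank = rng.uniform(1, 100, n)      # high school percentile rank
aptitude = rng.normal(500, 100, n)    # college aptitude test score
# Invented relationship standing in for the real (unknown) one:
gpa = 0.02 * hs_rank + 0.003 * aptitude + rng.normal(0, 0.4, n)

# Ordinary least squares on just these two predictors.
X = np.column_stack([np.ones(n), hs_rank, aptitude])
coef, *_ = np.linalg.lstsq(X, gpa, rcond=None)
predicted = X @ coef
print(np.corrcoef(predicted, gpa)[0, 1])  # how well two variables predict
```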

Note that even if the expert seems to perform on the same level as arithmetic in some of these examples, the expert often refused to predict on difficult cases. If we count each refusal to predict as a middle-of-the-range prediction, the expert performance becomes worse than that of the linear model.

But this does not mean we can go without experts. To run a linear model, we need data. Experts are good at extracting data from complicated situations. Another example Meehl brings up in the book is how movement in family casework was roughly as well predicted by a five-variable linear regression as by experts. What’s curious about this are the variables used:

These five factors were (1) referral source, (2) problem area, (3) insight, (4) resistance, and (5) degree to which client was overwhelmed. It is evident that the rating on these five variables as exhibited in the records of the initial contact already involves some considerable degree of clinical judgment by the skilled case reader.

Experts are good at taking complicated qualitative data and distilling it down to a quantitative range.[7]

A few years later[8], Meehl described the state of the research up to that point:

As of today, there are 27 empirical studies in the literature which make some meaningful comparison between the predictive success of the clinician and the statistician. […] Of these 27 studies, 17 show a definite superiority for the statistical method; 10 show the methods to be of about equal efficiency; none of them show the clinician predicting better.

There has been a lot of research on this since the 1950s, but to close out this section, we’ll briefly summarise a meta-study[9] performed in 2000. The authors surveyed 136 comparisons between expert judgment and arithmetical data combination, on subjects such as

  • advertising sales,
  • coupon redemption,
  • military success,
  • heart disease,
  • job performance,
  • business failure,
  • suicide attempt,
  • game outcome,
  • homosexuality, and
  • marital satisfaction.

They conclude that arithmetically combining data is on average 10 % more accurate than expert judgment. The 136 comparisons break down into the following three cases:

  • Arithmetic better than experts: 40 %
  • Arithmetic and experts equal: 49 %
  • Experts better than arithmetic: 11 %

Across the board, the experts did not perform better with more experience in the field, nor did they perform better with more data. In fact, there are few things as dangerous as an expert with access to open-ended data that can be interpreted wildly (like a clinical interview).

There are three reasons arithmetic wins even though it is often equal to expert judgment:

  • Using arithmetic for prediction lets experts attend to more important things[10],
  • A lower-salaried person can evaluate arithmetic, and
  • Arithmetic is faster than expert judgment.

I know what the reader is thinking at this point:

Yeah, that is all well and good for the areas surveyed in the meta-study. But in this specific situation I have in mind, my judgment is much more nuanced and accurate than an arithmetic model.

It is worth emphasising that the meta-study covered a wide range of fields, and the authors did not find any systematic indicator of when expert performance is better than arithmetic. This means if we assume our judgment is better than arithmetic we are betting that our case randomly falls into that 11 % bucket. We are playing Russian roulette with five chambers loaded and one empty!

It would be sensible to do so if expert judgment were much cheaper than arithmetic, but it’s not. In his disturbing little book, Meehl puts this particularly well.

It is apparently hard for some experts to assimilate the kind of thinking represented here – chiefly, I gather, because they cannot help concentrating on the unfortunate case that would have been handled better had we followed the advice of super-expert X, in defiance of the arithmetic. But what such objectors do not see is that in order to save that case, they must lay down and abide by a decision-policy which will misclassify some other case by defying the statistics.

We cannot choose policies based on the effect they have on individual cases, because different individual cases would pull the policy in conflicting directions. When we choose a policy, we must do it based on the aggregate effects of that policy, even when it has known, heart-breaking consequences in individual cases.

Improper linear models are still better than experts

To see the full extent to which arithmetic outperforms expertise, we will turn to Dawes[11], who breaks up expert judgment into components and estimates the value contributed by each component.

We pretend an expert prediction is composed of three actions:

  • The expert selects what to observe and guesses which direction holds for the relationship between the observation and the outcome. (E.g. are we more or less likely to be having a subtle incident when we have made a deployment in the past four hours?)
  • The expert assigns weights to the observations, indicating their relative importance to each other and the outcome. (E.g. memory usage seems normal, but is that a more or less important predictor than the recent deployment?)
  • Finally, the expert adds some noise to their prediction, by considering individual variables that aren’t systematically related to the outcome but sound meaningful to a human suffering from narrative biases. (E.g. they know that mostly good developers have worked on the latest release, so clearly the risk of incident is very low.)

The expert does not know these are the things they do – the expert just looks at a bunch of data and makes a prediction based on their gut feel. But we can use this model to dive into why expert predictions don’t work as well as arithmetic.

Parametric representation removes noise

Parametric representation is a way to average out the noise component from expert judgment. It works by asking experts to predict based on randomly generated data, and then training a linear model on the expert predictions. Since the training data does not contain outcomes, we are actually figuring out the shape of the mathematical function in the experts’ heads. We are using the experts to select predictors and determine their direction and relative weight, but then the linear regression averages away the noise of their predictions. What we find is that this linear model, trained on expert predictions only, outperforms the very experts it was trained on. The conclusion is simple: the noise experts add is not useful.
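
Here is a simulation sketch of that effect, under assumptions added for the illustration: the outcome really is a noisy linear function of a handful of cues, and the expert is an even noisier linear function of the same cues. The bootstrapped model never sees the outcome, yet tracks it better than the expert does.

```python
# Judgmental bootstrapping / "parametric representation", simulated.
import numpy as np

rng = np.random.default_rng(1)
n, p = 1000, 4
cues = rng.normal(size=(n, p))
true_w = np.array([0.5, 0.3, 0.15, 0.05])        # assumed "real" weights

outcome = cues @ true_w + rng.normal(0, 0.5, n)  # the world: signal + noise
expert = cues @ true_w + rng.normal(0, 1.0, n)   # expert: right cues, more noise

# Fit a linear model to the EXPERT'S judgments only -- the outcome is never used.
X = np.column_stack([np.ones(n), cues])
w_boot, *_ = np.linalg.lstsq(X, expert, rcond=None)
model_of_expert = X @ w_boot

print(np.corrcoef(expert, outcome)[0, 1])           # expert vs reality
print(np.corrcoef(model_of_expert, outcome)[0, 1])  # model of expert vs reality
```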

Experts do not sensibly assign weights

What about assigning weights? How much value does the expert add in that activity? We can find out by comparing the parametric representation with the same model except with weights assigned randomly. Both of those models perform at a roughly equal level, which suggests that – beyond getting the direction right – experts assign weights no better than chance.
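
A sketch of that comparison, in the same simulated spirit. Here the cues are made positively intercorrelated, as real-world predictors usually are – that is the condition the random-weights result leans on – and all numbers are invented. With these assumptions, random direction-correct weights land close to the “well-chosen” weights.

```python
# Random (direction-correct) weights vs the assumed true weights, simulated.
import numpy as np

rng = np.random.default_rng(2)
n, p = 1000, 4
common = rng.normal(size=(n, 1))
cues = 0.7 * common + 0.7 * rng.normal(size=(n, p))  # positively correlated cues
true_w = np.array([0.5, 0.3, 0.15, 0.05])
outcome = cues @ true_w + rng.normal(0, 0.5, n)

def corr_with_outcome(weights):
    return np.corrcoef(cues @ weights, outcome)[0, 1]

print(corr_with_outcome(true_w))                        # "well-chosen" weights
print(np.mean([corr_with_outcome(rng.uniform(0, 1, p))  # random positive weights
               for _ in range(100)]))
```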

Experts are good at selecting variables

These results are based on a small set of studies, and thus cannot be considered conclusive. What they do is waggle their eyebrows suggestively in the direction of the tree Dawes is barking up:

  • Experts are really good at selecting which variables we should look at, and at telling us whether they support or oppose the outcome.
  • Experts are not good at combining these variables into a final prediction, even discounting their noisiness.

Just count it. What? Something. Anything!

Given the strong performance of optimally-weighted linear regression over even expert-assigned weights, we might find ourselves in a predicament: finding optimal weights requires knowledge of the outcome, and we often can’t measure the outcome – either because we don’t know how to, or because we lack historic data.

Under those circumstances, we can still do better than random weights: unit weights. Linear models are surprisingly powerful even with unit weights, and this is a good thing because it’s easy to find situations where estimated weights are unstable:

  • The previous example of parametric representation yields unstable weights (since they are effectively selected at random);
  • In multidimensional problems (where we have many potential predictors) we need a ridiculously large sample to get stable weights.

In these situations, setting all weights to be equal yields higher predictive accuracy than both random weights and attempting to estimate optimal weights.[12]
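
Here is a small simulation of that claim, under assumptions added for illustration: many correlated predictors, all of them genuinely (and roughly equally) predictive, and only a small training sample to estimate weights from. Exact numbers depend on the random seed, but in most runs the unit-weight sum holds its own or wins out of sample.

```python
# Unit weights vs weights estimated from a small sample, evaluated out of sample.
import numpy as np

rng = np.random.default_rng(3)
p, n_train, n_test = 10, 30, 10_000
true_w = rng.uniform(0.1, 0.3, p)  # assumed: all positive, roughly equal weights

def make_data(n):
    common = rng.normal(size=(n, 1))
    x = 0.7 * common + 0.7 * rng.normal(size=(n, p))  # correlated predictors
    y = x @ true_w + rng.normal(0, 1.0, n)
    return x, y

x_tr, y_tr = make_data(n_train)
x_te, y_te = make_data(n_test)

# Weights estimated by least squares on the small training sample.
coef, *_ = np.linalg.lstsq(np.column_stack([np.ones(n_train), x_tr]), y_tr, rcond=None)
ols_pred = np.column_stack([np.ones(n_test), x_te]) @ coef

unit_pred = x_te.sum(axis=1)  # unit weights: just add the predictors up

print(np.corrcoef(ols_pred, y_te)[0, 1])   # estimated weights, out of sample
print(np.corrcoef(unit_pred, y_te)[0, 1])  # unit weights
```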

One funny consequence of the strength of unit-weighted models is that we can use them even when we are unable to measure the outcome we are trying to optimise for, such as when hiring in a small organisation. We don’t know exactly what job success looks like, but we have a decent idea of which factors contribute to it, so we can measure our candidates on those variables and then combine those measurements with equal weights.
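
A minimal sketch of that hiring recipe – no outcome data, just pick the factors, fix their direction, put them on a common scale, and add. The candidate names and scores below are entirely made up.

```python
# Unit-weight tallying for hiring: standardise each factor, then sum.
import numpy as np

candidates = ["A", "B", "C", "D"]
# Columns (hypothetical): years of experience, work-sample score, interview score.
scores = np.array([
    [2.0, 7.5, 6.0],
    [6.0, 6.0, 7.0],
    [4.0, 8.0, 5.5],
    [3.0, 5.0, 8.0],
])

z = (scores - scores.mean(axis=0)) / scores.std(axis=0)  # common scale
tally = z.sum(axis=1)                                    # equal weights

for name, score in sorted(zip(candidates, tally), key=lambda t: -t[1]):
    print(name, round(float(score), 2))
```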

As Dawes snappily puts it:

The whole trick is to decide what variables to look at and then know how to add.

Although it does get into dangerous territory, this can even be used to overcome measurement problems. Maybe we are looking to hire people who are willing to challenge the consensus, but we don’t know how to measure this willingness. What we can do is find an easily-measured proxy, such as counting the frequency with which the applicant contradicts one of the interviewers. This can happen for any number of reasons which are unrelated to job performance (“Would you like tea or coffee?” “No, thank you. Water is good.”), but if we try to be clever we risk introducing noise. Instead, we can just run the dumb count and combine with equal weights. It may well perform better than expert judgment.

Rationale

The authors of the meta-study referenced above ask themselves why these results come about, and speculate that

Humans are susceptible to many errors in clinical judgment. These include ignoring base rates, assigning nonoptimal weights to cues, failure to take into account regression toward the mean, and failure to properly assess covariation. Heuristics such as representativeness (which leads to belief in the law of small numbers) or availability (leading to over-weighting vivid data) can similarly reduce clinicians’ accuracy. Also, clinicians often do not receive adequate feedback on the accuracy of their judgments, which gives them scant opportunity to change maladaptive judgment habits.

Meehl, together with some coauthors[13], dives a little deeper into the reasons arithmetic prediction is so strong:

  • Arithmetic models predict the same result given the same data, every time. Experts disagree even with themselves when re-evaluating cases. As Kahneman has shown, this noise is a great destroyer of predictive accuracy[14]. This may well be the primary reason expert judgment does not measure up to arithmetic.
  • An arithmetic model can have weights derived from historic frequencies, which almost always serve as a good base for predictions of the future.
  • Humans, as opposed to calibrated models, have increased confidence in their predictions when given more data, even if the additional data has no predictive power.
  • When an arithmetic model is calibrated, it is specifically by including feedback from the real-world effects of its predictions. Experts do not, as a rule, seek out any feedback on their calibration.
  • Related to the above, experts can form superstitions around the predictive power of particular observations based on vivid cases in their past, even though these observations may not apply generally beyond that vivid case.
  • Humans tend to confabulate memories that agree with how things actually turned out. When an expert is asked to recall what they predicted in a case, they will honestly remember their prediction as aligning more with the actual outcome than it did.
  • Experts tend to overvalue observations based on norms rather than actual predictive power. The example used by Meehl and co-authors is how clinicians who work with criminals might note that their patients often have a subtle EEG abnormality, and thus conclude that subtle EEG abnormalities predict criminal behaviour. This is an invalid conclusion unless they also know that subtle EEG abnormalities are rare in the larger population[15] – but they don’t know that, because they only subject criminals to the test!

Arithmetic models also suck, but don’t let that stop you

One of the common objections against arithmetic models I have encountered is that they still suck. They do. You might come away from all the words above thinking that linear models perform well, but the truth is that generally they don’t. All the above says is that linear models perform better than experts – and experts also suck.

The difference is that when we have developed an arithmetic model, we can usually give a number indicating how well it performs, and this is the first time people are faced with how difficult prediction is. The expert may have predicted the same thing worse for a decade, but nobody ever evaluated their track record so closely. They’re an expert! Obviously they know what they are doing, don’t they? So when people are faced with poor predictive power for the first time it is usually in the context of an arithmetic model, and they reject it because “We must be able to do better than that!” Here, I agree with Dawes, and tend to answer, “Really? What makes you think so?”

Many real-world outcomes arise as complicated interactions between a multitude of variables, and they are genuinely very hard to predict. There’s no reason to think we can do better, and when we try (e.g. by hiring an expert), we run a very large risk of just introducing noise which makes the predictions – on aggregate – worse, regardless of the effect on individual cases.

  1. ^

    Clinical Versus Statistical Prediction: A Theoretical Analysis and a Review of the Evidence; Meehl; University of Minnesota Press; 1954.

  2. ^

    Meehl calls these discriminative and structural, but I think predictive and explanatory is the more modern terminology.

  3. ^

    Even Fisher was suspicious of the “method of inverse probability”, and tried – but failed – to replace it with something better.

  4. ^

    A typical case would be consequences of a common antecedent: high intelligence results in good grades, but is also associated with a good grip on language. This means you can predict someone’s elementary school test scores based on the first draft of an essay they wrote in a completely different situation. I know because that was a game I used to play in elementary school.

  5. ^

    Causality: Models, Reasoning, and Inference; Pearl; Cambridge University Press; 2009.

  6. ^

    Another strength of causal modeling is social, namely that people are more ready to accept a causal model.

  7. ^

    Anyone competing in forecasting contests knows this already: a simple way to improve one’s forecast is to break it down into components, estimate the components, and then combine them into a final prediction. One reason this helps is that errors in estimating each component cancel out; another is that coherence laws relate the component estimates to each other.

  8. ^

    When Shall We Use Our Heads Instead of the Formula?; Meehl; Journal of Counseling Psychology; 1957.

  9. ^

    Clinical Versus Mechanical Prediction: A Meta-Analysis; Grove, Zald, Lebow, Snitz, Nelson; Psychological Assessment; 2000.

  10. ^

    One thing Meehl points out experts can do which arithmetic cannot is generate new hypotheses. To predict arithmetically, we need to aim for a criterion, or a fixed set of potential outcomes. This set needs to be conjured out of thin air, and that is something experts do really well.

  11. ^

    The Robust Beauty of Improper Linear Models in Decision Making; Dawes; American Psychologist; 1979.

  12. ^

    Dawes has a footnote on the technical details here, for the interested.

  13. ^

    Clinical Versus Actuarial Judgment; Dawes, Faust, Meehl; Science; 1989.

  14. ^

    Noise: A Flaw in Human Judgment; Kahneman, Sibony, Sunstein; Little, Brown and Company; 2021.

  15. ^

    In case you don’t see why: imagine detaining everyone with a subtle EEG abnormality if it is relatively common in the population. Most of these people will not be criminals, even if every single criminal has an EEG abnormality!
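
A worked version of the same point, with invented numbers:

```python
# Hypothetical base-rate arithmetic: even if every criminal has the EEG sign,
# most people with the sign are not criminals when the sign is common.
population = 1_000_000
p_criminal = 0.01                    # assumed base rate of criminality
p_abnormal_given_criminal = 1.0      # suppose every criminal shows the sign
p_abnormal_given_noncriminal = 0.15  # ...but so do 15 % of everyone else

criminals_flagged = population * p_criminal * p_abnormal_given_criminal
others_flagged = population * (1 - p_criminal) * p_abnormal_given_noncriminal
print(criminals_flagged / (criminals_flagged + others_flagged))  # about 0.06
```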

Comments

I agree with basically everything in the post, and especially that simple linear models are way undervalued. I've also come across cases where experts using literally 100x more data in their models get a worse outcome than other experts because they made a single bad assumption and didn't sanity check it properly. And I've seen cases where someone builds a linear model on the reciprocal of the variable they should have used, or where they didn't realize they were using a linear approximation of an exponential too far from the starting point. Modeling well is itself a skill that requires expertise and judgment. Other times, I see people build a simple linear model well, and then fail to notice or understand what it's telling them.

There's a Feynman quote I love about taking simple models seriously:

As they're telling me the conditions of the theorem, I construct something which fits all the conditions. You know, you have a set (one ball)—disjoint (two balls). Then the balls turn colors, grow hairs, or whatever, in my head as they put more conditions on. Finally they state the theorem, which is some dumb thing about the ball which isn't true for my hairy green ball thing, so I say, 'False!'

And a Wittgenstein quote about not thinking enough about what model predictions and observations imply:

“Tell me,” the great twentieth-century philosopher Ludwig Wittgenstein once asked a friend, “why do people always say it was natural for man to assume that the sun went around the Earth rather than that the Earth was rotating?” His friend replied, “Well, obviously because it just looks as though the Sun is going around the Earth.” Wittgenstein responded, “Well, what would it have looked like if it had looked as though the Earth was rotating?”

Literally last week I was at an event listening to an analyst from a major outlet that produces model-based reports that people pay a lot of money for. They were telling an audience of mostly VCs that their projections pretty much ignore the future impact of any technology that isn't far enough along to have hard data. Like for energy, they have projections about nuclear, but exclude SMRs, and about hydrogen, but exclude synthetic hydrocarbons. Thankfully most of the room immediately understood (based on conversations I had later in the day) that this meant the model was guaranteed to be wrong in the most important cases, even though it looks like a strong, well-calibrated track record.

The solution to that, of course, is to put all the speculative possibilities in the model, weight them at zero for the modal case, and then do a sensitivity analysis. If your sensitivity analysis shows that simple linear models vary by multiple orders of magnitude in response to small changes in weights, well, that's pretty important. But experts know if they publish models like that, most people will not read the reports carefully. They'll skim, and cherry-pick, and misrepresent what you're saying, and claim you're trying to make yourself unfalsifiable. They'll ignore the conditionality and probabilities of the different outcomes and just hear them all as "Well it could be any of these things." I have definitely been subject to all of those, and at least once (when the error bars were >100x the most likely market size for a technology) chose not to publish the numerical outcomes of my model at all.

To push back a little:

In fact, I’d go further and argue that explanatory modeling is just a mistaken approach to predictive modeling. Why do we want to understand how things work? To make better decisions. But we don’t really need to understand how things work to make better decisions, we just need to know how things will react to what we do.

The word "react" here is a causal term. To predict how things will "react" we need some sort of causal model.

What makes predictive modeling a better idea is that it also allows us to find factors that are not causal, but still useful.

Usefulness is also a causal notion. X is useful if it causes a good outcome. If X doesn't cause a good outcome, but is merely correlated with it, it isn't useful.

kqr:

Oh, these are good objections. Thanks!

I'm inclined to 180 on the original statements there and instead argue that predictive modelling works because, as Pearl says, "no correlation without causation". Then an important step when basing decisions on predictive modelling is verifying that the intervention has not cut off the causal path we depended on for decision-making.

Do you think that would be closer to the truth?

I see this as less of an endorsement of linear models and more of a scathing review of expert performance. 

This. Basically, if your job is to do predictions, and the accuracy of your predictions is not measured, then (at least the prediction part of) your job is bullshit. 

I think that if you compare simple linear models in domains where people actually care about their predictions, the outcome would be different. For example, if simple models predicted stock performance better than experts at investment banks, anyone with a spreadsheet could quickly become rich. There are few if any cases of 'I started with Excel and $1,000, and now I am a billionaire'. Likewise, I would be highly surprised to see a simple linear model outperform Nate Silver or the weather forecast.

Even predicting chess outcomes from mid-game board configurations is something where I would expect human experts to outperform simple statistical models working on easily quantifiable data (e.g. number of pieces remaining, number of possible moves, being in check, etc).

Neural networks contained in animal brains (which includes human brains) are quite capable of implementing linear models, and as such should perform at least equally well when they are properly trained. A wolf pack deciding to chase or not chase some prey has direct evolutionary skin in the game of making their prediction of success as accurate as possible, which the average school counselor predicting academic success simply does not have.

--

You touch on this a bit in 'In defense of explanatory modeling', but I want to emphasize that uncovering causal relationships and pathways is central to world modelling. Often, we don't want just predictions, we want predictions conditional on interventions. If you don't have that, you will end up trying to cure chickenpox with makeup, as 'visible blisters' is negatively correlated with outcomes.

Likewise, if we know the causal pathway, we have a much better basis to judge if some finding can be applied to out-of-distribution data. No matter how many anvils you have seen falling, without a causal understanding (e.g. Newtonian mechanics), you will not be able to reliably apply your findings to falling apples or pianos. 
 

Something like this post might also be why AI is so useful today, even conditional on AI solely being a fancy statistics machine, as Logan Zollener says:

https://www.lesswrong.com/posts/9ffiQHYgm7SKpSeuq/could-we-use-current-ai-methods-to-understand-dolphins#BxNbyvsLpLrAvXwjT