Evaluating the RCT is a chance to train the evaluation-muscle in a well-defined domain with feedback. I've generally found that the people who are best at evaluations in RCT'able domains are better at evaluating the hard-to-evaluate claims as well.
Often the difficult-to-evaluate domains have ways of getting feedback, but if you're not in the habit of looking for it, you're less likely to find the creative ways to get data.
I think a much more common failure mode within this community is that we get way overconfident beliefs about hard-to-evaluate domains, because there aren't many feedback loops and we aren't in the habit of looking for them.
Evaluating the RCT is a chance to train the evaluation-muscle in a well-defined domain with feedback. I've generally found that the people who are best at evaluations in RCT'able domains are better at evaluating the hard-to-evaluate claims as well.
Sounds confounded by general cognitive ability.
Evaluating the RCT is a chance to train the evaluation-muscle in a well-defined domain with feedback.
Yep, and I don't advise people to ignore all RCTs.
I thought about discussing your point when I wrote the OP (along with other advantages of having a community that contains some trivia-collecting), but decided against it, because I suspect EAs and rats tend to misunderstand the nature of this advantage. I suspect most "we need to spend more time on fast-empirical-feedback-loop stuff even if it looks very low-VOI" talk is rationalizing the mistake described in the OP, rather than actually being about developing this skill.
In particular, if you're just trying to build skill (rather than replacing a hard question with a superficially related easy one), then I think it's often actively bad to build this skill in a domain that's related to the one you care about. EAs and rats IMO should spend more time collecting trivia about physics, botany, and the history of Poland (as opposed to EA topics), insofar as the goal is empiricism skill-building. You're less liable to trick yourself, then, into thinking that the new data points directly bear on the question you're not currently working on.
I think a much more common failure mode within this community is that we get way overconfident beliefs about hard-to-evaluate domains, because there aren't many feedback loops and we aren't in the habit of looking for them.
Maybe? I think the rationality community is pretty good at reasoning, and I'm not sure I could predict the direction of their error here. With EAs, I have an easier time regularly spotting clear errors, and they seem to cluster in a similar direction (the one described in https://equilibriabook.com/toc).
I agree that rationalists spend more time thinking about hard-to-evaluate domains, and that this makes some failures likelier (while making others less likely). But I also see rats doing lots of deep-dive reviews of random-seeming literatures (and disproportionately reading blogs like ACX that love doing those deep dives), exploring lots of weird and random empirical domains out of curiosity, etc.
It's not clear to me what the optimal level of this is (for purposes of skill-building), or where the status quo falls relative to the optimum. (What percent of LW's AI-alignment-related posts would you replace with physics lit reviews and exercises?)
Good points, but I feel like you're a bit biased against foxes. First of all, they're cute (see diagram). You didn't even mention that they're cute, yet you claim to present a fair and balanced case? Hedgehog hogwash, I say.
Anyway, I think the skills required for forecasting vs model-building are quite different. I'm not a forecaster, but if I were, I would try to read much more and more widely so I'm not blindsided by stuff I didn't even know that I didn't know. Forecasting is caring more about the numbers; model-building is caring more about how the vertices link up, whatever their weights. Model-building is for generating new hypotheses that didn't exist before; forecasting is for discriminating between what already exists.
I try to build conceptual models, and afaict I get much more than 80% of the benefit from 20% of the content that's already in my brain. There are some very general patterns I've thought about so deeply that they provide useful perspectives on new stuff I learn weekly. I'd rather learn 5 things deeply, and remember their sub-patterns so well that they fire whenever I see something slightly similar, than learn 50 things so shallowly that the only time I think about them is when I see the flashcards. Knowledge not pondered upon in the shower is no knowledge at all.
The fox knows many things. The hedgehog knows one important thing.
It turns out that it's optimal to be 3 parts fox to sqrt(2) parts hedgehog:
This was a contorted and biased portrayal of the topic. If you're a reader in a hurry, skip to my last paragraph.
First, this needs clarification on who you mean by a "fox", and who you don't. There's a very high risk of confusion, or of talking about unrelated things without noticing. It may help if you name 5 people you consider to be foxes, and 5 you consider to be hedgehogs.
For the rest of this comment, I'm going to restrict "fox" to "good-scoring generalist forecaster", because they would tend to be quite fox-like, in the Tetlockian sense, and you did mention placing probabilities. If there are non-forecasters you would include in your taxonomy for fox, you are welcome to mention them. As an occasional reminder of potential confusion about this, I'll often put "fox" in quotation marks.
Paying more attention to easily-evaluated claims that don't matter much, at the expense of hard-to-evaluate claims that matter a lot.
E.g., maybe there's an RCT that isn't very relevant, but is pretty easily interpreted and is conclusive evidence for some claim. At the same time, maybe there's an informal argument that matters a lot more, but it takes some work to know how much to update on it, and it probably won't be iron-clad evidence regardless.
This point has some truth to it, but it misses a lot.
When forecasters pitch ideas for questions, they tend to be interested in whether the operationalization "really captures the spirit" of the underlying question. Forecasters are well aware of e.g. Goodhart's Law and measurement issues; it's on our minds all the time and often discussed. We do find it much more meaningful to forecast things that we think matter. The format makes it possible to make progress on that. It happens to take effort.
If a single stream of data (or criterion) doesn't adequately capture the claim, but the claim actually corresponds to some future observations in any way, then you can add more questions from other angles. By creating a "basket" out of different measures, a progressively clearer picture can be drawn. That is, if the topic is worth the effort.
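To give a toy sense of what I mean by a basket (every question name, forecast, and weight below is invented purely for illustration, not a real Metaculus question), the crudest version is just a weighted combination of the individual forecasts:

```python
# Toy sketch of a "basket" of narrow questions triangulating one broad claim.
# All question names, probabilities, and weights here are hypothetical.

basket = {
    # question description:                  (current forecast, weight)
    "benchmark X is surpassed before 2027":  (0.70, 0.2),
    "frontier training compute grows 10x":   (0.55, 0.3),
    "a major lab adopts safety eval Y":      (0.40, 0.5),
}

total_weight = sum(w for _, w in basket.values())
composite = sum(p * w for p, w in basket.values()) / total_weight
print(f"composite indicator: {composite:.2f}")  # one crude summary drawn from several angles
```

In practice the combination is messier and more judgment-laden than a weighted average, but each added question constrains the picture a bit more.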
An example of this is the accumulated variety of AI-related questions on Metaculus. Earlier attempts were famously unsatisfying, but the topic was important. There is now a FAR better basket of measures from many angles. And I'm sure it will continue to improve, such as by finding new ways to measure "alignment" and its precursors.
It's possible for "foxes" to actually practice this, and make the claim more evaluable. It's a lot of work, which is why most topics don't get this. Also this is still a very niche hobby with limited participation. Prediction markets are literally banned. If they weren't, they'd probably grow like an invasive weed, with questions about all sorts of things.
Although you don't explicitly say hedgehogs do a better job of including and evaluating the hard-to-evaluate claims, this seems intimately related. The people who are better at forecasting than me tend to also be very discerning at other things we can't forecast. In all likelihood these two things are correlated.
I'm most sympathetic to the idea that many topics have inadequate "coverage", in the sense that it's laborious to make things amenable to forecasting. I agree lots of forecasting questions are irrelevant, or in your example, may focus on an RCT too much.
But you don't really make a case for why foxes would be worse off in this way. As far as I can tell, hedgehogs get easily fixated on lots of irrelevant details all the time. The way you describe this seems actively biased, and I'm disappointed that such a prolific poster on the site would have such a bias.
1. A desire for cognitive closure, confidence, and a feeling of "knowing things" — of having authoritative Facts on hand rather than mere Opinions.
[snip]
But real-world humans (even if they think of themselves as aspiring Bayesians) are often uncomfortable with uncertainty. We prefer sharp thresholds, capital-k Knowledge, and a feeling of having solid ground to rest on.
I found this surprising. Hedgehogs are famously more prone to this than foxes. Their discomfort with uncertainty (and desire for authoritative facts) tends to make them bad forecasters.
Granted, forecasters are human too, and we feel more comfortable when certain. And it is true that we use explicit probabilities -- we do that so our beliefs are more transparent, even though it's inconvenient to us. I can see how this relates to fixating on specific information. We even get pretty irate when a question "resolves ambiguous", dashing our efforts like a failed replication.
But hedgehogs tend to be utterly convinced, epistemically slippery, and incredibly opinionated. If you like having authoritative facts and feeling certainty, just be a hedgehog with One Big Idea. And definitely stay the hell away from forecasting.
As above, this point would've been far more informative if you had tried making a clear comparison with hedgehogs and showing what this tends to look like in them. Surely "foxes" can fixate on a criterion for closure, but how does this actually compare with hedgehogs? Do you actually want to make a genuine comparison?
2. Hyperbolic discounting of intellectual progress.
With unambiguous data, you get a fast sense of progress. With fuzzy arguments, you might end up confident after thinking about it a while, or after reading another nine arguments; but it's a long process, with uncertain rewards.
I don't believe you here. Hedgehogs are free to self-reinforce in whatever direction they want, with certainty, as fast as they want. You know what's a really slow, tedious way to feel intellectual progress? Placing a bunch of forecasts and periodically checking on them. And being forced to tediously check potential arguments to update in various ways, which we're punished for not doing (unlike a hedgehog). It seems far more tedious than sticking to my favorite One Big Idea.
The only way this might be true is that forecasting often focuses on short-term questions, both so we can get that feedback and because short-term questions are much more attainable. Though we do have lots of long-term questions too, we know they're far more difficult and we'll often be dart-throwing chimps. But nothing about your posts seems to really deal with this.
Also, a deep point that I might have already told you somewhere else, and that seems like a persistent confusion, so I'm going to loudly bold it here:
Forecasters think about and aggregate lots of fuzzy things.
Let me repeat that:
We do this all the time! The substantial difference is that we later get scored on whether we evaluated the fuzzy things (and the non-fuzzy things) properly.
It's compression: a forecast compresses lots of fuzzy considerations into a number. If any of those fuzzy things actually make a difference to the observable outcomes, then we actually get scored on whether we did a good job of considering them. "Foxes" do this all the time, probably better than hedgehogs, on average.
I'll elaborate with a concrete example. Suppose I vaguely overhear a nebulous rumor that Ukraine may use a dirty bomb against Russia. I can update my forecast on that, even if I can't directly verify the rumor. Generally you shouldn't update very much on fuzzy things, though, because they are very prone to being unfounded or incorrect. In that particular example I made a small update, correctly reflecting that the rumor was fuzzy and poorly substantiated. People actively get better at incorporating fuzzy things as they build a forecasting practice; we're literally scored on how well we do this, which Rob Bensinger would understand better if he did forecasting.
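To put toy numbers on that (these are invented for illustration, not my actual forecast on that question): a vague rumor is weak evidence, so it gets a likelihood ratio barely above 1, and the score at resolution is what eventually grades whether I weighed it sensibly.

```python
# Hypothetical sketch: nudging a forecast on a fuzzy, poorly-substantiated rumor.
# All numbers are invented for illustration.

def update(prior: float, likelihood_ratio: float) -> float:
    """Bayes' rule in odds form: posterior odds = prior odds * likelihood ratio."""
    prior_odds = prior / (1.0 - prior)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1.0 + posterior_odds)

def brier(forecast: float, outcome: int) -> float:
    """Quadratic scoring rule (lower is better): what grades how the evidence was weighed."""
    return (forecast - outcome) ** 2

prior = 0.05                       # hypothetical baseline forecast
fuzzy_rumor_lr = 1.3               # vague rumor: only slightly likelier to surface if the claim is true
solid_evidence_lr = 20.0           # contrast: what well-substantiated evidence might warrant

small_update = update(prior, fuzzy_rumor_lr)   # ~0.06, a small nudge
big_update = update(prior, solid_evidence_lr)  # ~0.51, reserved for solid evidence

print(small_update, big_update)
print(brier(small_update, 0), brier(big_update, 0))  # scores if the question resolves "no"
```

If the rumor turns out to be baseless, the small nudge costs almost nothing in score; jumping to ~50% on it would have been expensive.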
Hedgehogs are free to use fuzzy things to rationalize whatever they want, with little to slow them down, beyond the (weak and indirect) social checks they'll have on whether they considered those fuzzy things well enough.
3. Social modesty and a desire to look un-arrogant.
It can feel socially low-risk and pleasantly virtuous to be able to say "Oh, I'm not claiming to have good judgment or to be great at reasoning or anything; I'm just deferring to the obvious clear-cut data, and outside of that, I'm totally uncertain."
...To the extent I see "foxes" do this, it's usually a good thing. Also, your wording of "totally uncertain" sounds mildly strawmanny; they don't usually say that. When "outside the data", people are often literally talking about unrelated things without even noticing, but a seasoned forecaster is more likely to notice this. In such cases, they might sometimes say "I'm not sure", partly out of not knowing what else is being asked exactly, and partly out of genuine uncertainty.
This point would be a lot more impactful if you gave examples, so we know you're not exaggerating and this is a real problem.
Collecting isolated facts increases the pool of authoritative claims you can make, while protecting you from having to stick your neck out and have an Opinion on something that will be harder to convince others of, or one that rests on an implicit claim about your judgment.
But in fact it often is better to make small or uncertain updates about extremely important questions, than to collect lots of high-confidence trivia. It keeps your eye on the ball, where you can keep building up confidence over time; and it helps build reasoning skill.
Seriously? Foxes actually make smaller updates more often than hedgehogs do.
Hedgehogs collect facts and increase the pool of authoritative claims they can make, while protecting themselves from having to stick their necks out and risk being wrong. Not looking wrong socially, but being actually wrong about what happens.
This point seems just wrong-headed, as if you were actively trying to misportray the topic.
High-confidence trivia also often poses a risk: either consciously or unconsciously, you can end up updating about the More Important Questions you really care about, because you're spending all your time thinking about trivia.
Even if you verbally acknowledge that updating from the superficially-related RCT to the question-that-actually-matters would be a non sequitur, there's still a temptation to substitute the one question for the other. Because it's still the Important Question that you actually care about.
Again, I appreciate that it's very laborious to capture what matters in a verifiable question. If there is a particular topic where you think something is missing, please offer suggestions for new ways to capture what you believe is missing, provided that thing actually corresponds to reality in some provable way.
Overall I found this post misleading and confused. At several points, I had no idea what you were talking about. I suspect you're doing this because you like (some) hedgehogs, have a vested interest in their continued prestige, and want to rationalize ways that foxes are more misguided. I think this has been a persistent feature of what you've said about this topic, and I don't think it will change.
If anyone wants to learn about this failure mode, from someone who knows what they are talking about, I highly recommend the work of David Manheim. He's an excellent track-recorded forecaster who has done good work on Goodhart's Law, and has thought about how this relates to forecasting.
Edited to slightly change the wording and emphasis, de-italicize some things that didn't really need italics, etc.
A common failure mode for people who pride themselves on being foxes (as opposed to hedgehogs):
Paying more attention to easily-evaluated claims that don't matter much, at the expense of hard-to-evaluate claims that matter a lot.
E.g., maybe there's an RCT that isn't very relevant, but is pretty easily interpreted and is conclusive evidence for some claim. At the same time, maybe there's an informal argument that matters a lot more, but it takes some work to know how much to update on it, and it probably won't be iron-clad evidence regardless.
I think people who think of themselves as being "foxes" often spend too much time thinking about the RCT and not enough time thinking about the informal argument, for a few reasons:
1. A desire for cognitive closure, confidence, and a feeling of "knowing things" — of having authoritative Facts on hand rather than mere Opinions.
A proper Bayesian cares about VOI, and assigns probabilities rather than having separate mental buckets for Facts vs. Opinions. If activity A updates you from 50% to 95% confidence in hypothesis H1, and activity B updates you from 50% to 60% confidence in hypothesis H2, then your assessment of whether to do more A-like activities or more B-like activities going forward should normally depend a lot on how useful it is to know about H1 versus H2. (A toy numerical sketch of this comparison appears at the end of the post.)
But real-world humans (even if they think of themselves as aspiring Bayesians) are often uncomfortable with uncertainty. We prefer sharp thresholds, capital-k Knowledge, and a feeling of having solid ground to rest on.
2. Hyperbolic discounting of intellectual progress.
With unambiguous data, you get a fast sense of progress. With fuzzy arguments, you might end up confident after thinking about it a while, or after reading another nine arguments; but it's a long process, with uncertain rewards.
3. Social modesty and a desire to look un-arrogant.
It can feel socially low-risk and pleasantly virtuous to be able to say "Oh, I'm not claiming to have good judgment or to be great at reasoning or anything; I'm just deferring to the obvious clear-cut data, and outside of that, I'm totally uncertain."
Collecting isolated facts increases the pool of authoritative claims you can make, while protecting you from having to stick your neck out and have an Opinion on something that will be harder to convince others of, or one that rests on an implicit claim about your judgment.
But in fact it often is better to make small or uncertain updates about extremely important questions, than to collect lots of high-confidence trivia. It keeps your eye on the ball, where you can keep building up confidence over time; and it helps build reasoning skill.
High-confidence trivia also often poses a risk: either consciously or unconsciously, you can end up updating about the More Important Questions you really care about, because you're spending all your time thinking about trivia.
Even if you verbally acknowledge that updating from the superficially-related RCT to the question-that-actually-matters would be a non sequitur, there's still a temptation to substitute the one question for the other. Because it's still the Important Question that you actually care about.
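Here's the toy numerical sketch of the comparison from point 1. Entropy reduction is only a crude stand-in for VOI, and the 100x "stakes" multiplier is an invented placeholder; the point is just that raw sense-of-progress and stakes-weighted value can come apart.

```python
import math

def bernoulli_entropy(p: float) -> float:
    """Uncertainty, in bits, about a yes/no hypothesis held with probability p."""
    if p in (0.0, 1.0):
        return 0.0
    return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

def info_gained(prior: float, posterior: float) -> float:
    """Crude proxy for the felt sense of progress: how much an update shrinks uncertainty."""
    return bernoulli_entropy(prior) - bernoulli_entropy(posterior)

gain_a = info_gained(0.50, 0.95)   # activity A: 50% -> 95% on H1
gain_b = info_gained(0.50, 0.60)   # activity B: 50% -> 60% on H2

# Invented stakes: suppose knowing about H2 matters 100x more than knowing about H1.
stakes_h1, stakes_h2 = 1.0, 100.0

print(f"A: {gain_a:.2f} bits, stakes-weighted {gain_a * stakes_h1:.1f}")
print(f"B: {gain_b:.2f} bits, stakes-weighted {gain_b * stakes_h2:.1f}")
# A feels like far more progress (~0.71 vs ~0.03 bits), but B wins once each
# hypothesis is weighted by how much it matters to know about.
```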