Imagine a black box which, when you pressed a button, would generate a scientific hypothesis. 50% of its hypotheses are false; 50% are true hypotheses as game-changing and elegant as relativity. Despite the error rate, it’s easy to see that this box would quickly surpass space capsules, da Vinci paintings, and printer ink cartridges to become the most valuable object in the world. Scientific progress on demand, and all you have to do is test some stuff to see if it’s true? I don’t want to devalue experimentalists. They do great work. But it’s appropriate that Einstein is more famous than Eddington. If you took away Eddington, someone else would have tested relativity; the bottleneck is in Einsteins. Einstein-in-a-box at the cost of requiring two Eddingtons per insight is a heck of a deal.
What if the box had only a 10% success rate? A 1% success rate? My guess is: still the most valuable object in the world. Even a 0.1% success rate seems pretty good, considering the possibilities: what if we ask the box for cancer cures, then test them all on lab rats and volunteers? You have to go pretty low before the box stops being great.
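To make that intuition concrete, here is a minimal back-of-the-envelope sketch. The per-test cost and per-breakthrough value are invented assumptions, not anything claimed above; the point is only how far the success rate can fall before testing every hypothesis stops paying off.

```python
# Back-of-the-envelope sketch of the black-box argument.
# TEST_COST and BREAKTHROUGH_VALUE are made-up numbers for illustration only.

TEST_COST = 10_000_000                # assumed cost of testing one hypothesis ("two Eddingtons")
BREAKTHROUGH_VALUE = 100_000_000_000  # assumed value of one relativity-grade true idea

for success_rate in [0.5, 0.1, 0.01, 0.001, 0.0001]:
    expected_tests = 1 / success_rate                  # tests needed per true hypothesis, on average
    cost_per_breakthrough = expected_tests * TEST_COST
    verdict = "worth it" if cost_per_breakthrough < BREAKTHROUGH_VALUE else "not worth it"
    print(f"{success_rate:8.2%}: ~{expected_tests:,.0f} tests, "
          f"${cost_per_breakthrough:,.0f} per breakthrough ({verdict})")
```

Under these (arbitrary) numbers the box only stops paying for itself around a 0.01% hit rate, which is the sense in which you have to go pretty low.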
I thought about this after reading this list of geniuses with terrible ideas. Linus Pauling thought Vitamin C cured everything. Isaac Newton spent half his time working on weird Bible codes. Nikola Tesla pursued mad energy beams that couldn’t work. Lynn Margulis revolutionized cell biology by discovering mitochondrial endosymbiosis, but was also a 9-11 truther and doubted HIV caused AIDS. Et cetera. Obviously this should happen. Genius often involves coming up with an outrageous idea contrary to conventional wisdom and pursuing it obsessively despite naysayers. But nobody can have a 100% success rate. People who do this successfully sometimes should also fail at it sometimes, just because they’re the kind of person who attempts it at all. Not everyone fails. Einstein seems to have batted a perfect 1.000 (unless you count his support for socialism). But failure shouldn’t surprise us.
Yet aren’t some of these examples unforgivably bad? Like, seriously, Isaac – Bible codes? Well, granted, Newton’s chemical experiments may have exposed him to a little more mercury than can be entirely healthy. But remember: gravity was considered creepy occult pseudoscience by its early enemies. It subjected the earth and the heavens to the same law, which shocked 17th-century sensibilities the same way trying to link consciousness and matter would today. It postulated that objects could act on each other through invisible forces at a distance, which was equally outside the contemporaneous Overton Window. Newton’s exceptional genius, his exceptional ability to think outside all relevant boxes, and his exceptionally egregious mistakes are all the same phenomenon (plus or minus a little mercury).
Or think of it a different way. Newton stared at problems that had vexed generations before him, and noticed a subtle pattern everyone else had missed. He must have amazing hypersensitive pattern-matching going on. But people with such hypersensitivity should be most likely to see patterns where they don’t exist. Hence, Bible codes.
These geniuses are like our black boxes: generators of brilliant ideas, plus a certain failure rate. The failures can be easily discarded: physicists were able to take up Newton’s gravity without wasting time on his Bible codes. So we’re right to treat geniuses as valuable in the same way we would treat those boxes as valuable.
This goes not just for geniuses, but for anybody in the idea industry. Coming up with a genuinely original idea is a rare skill, much harder than judging ideas is. Somebody who comes up with one good original idea (plus ninety-nine really stupid cringeworthy takes) is a better use of your reading time than somebody who reliably never gets anything too wrong, but never says anything you find new or surprising. Alyssa Vance calls this positive selection – a single good call rules you in – as opposed to negative selection, where a single bad call rules you out. You should practice positive selection for geniuses and other intellectuals.
I think about this every time I hear someone say something like “I lost all respect for Steven Pinker after he said all that stupid stuff about AI”. Your problem was thinking of “respect” as a relevant predicate to apply to Steven Pinker in the first place. Is he your father? Your youth pastor? No? Then why are you worrying about whether or not to “respect” him? Steven Pinker is a black box who occasionally spits out ideas, opinions, and arguments for you to evaluate. If some of them are arguments you wouldn’t have come up with on your own, then he’s doing you a service. If 50% of them are false, then the best-case scenario is that they’re moronically, obviously false, so that you can reject them quickly and get on with your life.
I don’t want to take this too far. If someone has 99 stupid ideas and then 1 seemingly good one, obviously this should increase your probability that the seemingly good one is actually flawed in a way you haven’t noticed. If someone has 99 stupid ideas, obviously this should make you less willing to waste time reading their other ideas to see if they are really good. If you want to learn the basics of a field you know nothing about, obviously read a textbook. If you don’t trust your ability to figure out when people are wrong, obviously read someone with a track record of always representing the conventional wisdom correctly. And if you’re a social engineer trying to recommend what other people who are less intelligent than you should read, obviously steer them away from anyone who’s wrong too often. I just worry too many people wear their social engineer hat so often that they forget how to take it off, forget that “intellectual exploration” is a different job than “promote the right opinions about things” and requires different strategies.
But consider the debate over “outrage culture”. Most of this focuses on moral outrage. Some smart person says something we consider evil, and so we stop listening to her or giving her a platform. There are arguments for and against this – at the very least it disincentivizes evil-seeming statements.
But I think there’s a similar phenomenon that gets less attention and is even less defensible – a sort of intellectual outrage culture. “How can you possibly read that guy when he’s said [stupid thing]?” I don’t want to get into defending every weird belief or conspiracy theory that’s ever been [stupid thing]. I just want to say it probably wasn’t as stupid as Bible codes. And yet, Newton.
Some of the people who have most inspired me have been inexcusably wrong on basic issues. But you only need one world-changing revelation to be worth reading.
I think I agree with the thrust of this, but I think the comment section raises caveats that seem important. Scott's acknowledged that there's danger in this, and I hope an updated version would put that in the post.
But also...
This seems like a strange model to use. We don't know, a priori, what % are false. If 50% are obviously false, probably most of the remainder are subtly false. Giving me subtly false arguments is no favor.
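One way to see this is with a toy model; the parameters below (an overall false rate and a "detectability" of falsehoods) are my own invented assumptions, not anything in the essay or the comments.

```python
# Toy model of "a high rate of obvious falsehoods suggests many subtle ones".
# q = fraction of the box's ideas that are false (unknown)
# d = probability that a false idea is *obviously* false (assumed)
# Observing that 50% of all ideas are obviously false forces q * d = 0.5.

OBSERVED_OBVIOUS_RATE = 0.5

for d in [1.0, 0.9, 0.7, 0.6]:
    q = OBSERVED_OBVIOUS_RATE / d          # implied overall false rate
    if q > 1:
        continue                           # impossible combination, skip
    # Among the ideas that survive the obvious-falsehood filter,
    # what fraction are subtly false?
    subtle_share = q * (1 - d) / (1 - OBSERVED_OBVIOUS_RATE)
    print(f"detectability {d:.0%}: implied false rate {q:.0%}, "
          f"subtly false among survivors {subtle_share:.0%}")
```

Unless false ideas are nearly always obviously false, a 50% obvious-falsehood rate implies that a large share of the surviving ideas are subtly false.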
Scott doesn't tell us, in this essay, what Steven Pinker has given him / why Steven Pinker is ruled in. Has Steven Pinker given him valuable insights? How does Scott know they're valuable? (There may have been some implicit context when this was posted. Possibly Scott had recently reviewed a Pinker book.)
Given Anna's example, I find myself wondering: has Scott checked Pinker's straightforwardly checkable facts?
I wouldn't be surprised if he has. The point of these questions isn't to say that Pinker shouldn't be ruled in, but that the questions need to be asked and answered. And the essay doesn't really acknowledge that that's actually kind of hard. It's even somewhat dismissive: "all you have to do is *test* some stuff to *see if it’s true*?" Well, the Large Hadron Collider cost €7.5 billion. On a less extreme scale, I recently wanted to check some of Robert Ellickson's work; that cost me, I believe, tens of hours. And that was only checking things close to my own specialty. I've done work that could have ruled him out and didn't, but is that enough to say he's ruled in?
So this advice only seems good if you're willing and able to put in the time to find and refute the bad arguments. And not only willing and able, but actually going to put in that time. Not everyone can, not everyone wants to, and not everyone will. (This includes: if you fact-check something and discover that it's false, the thing doesn't nevertheless propagate through your models, influencing your downstream beliefs in ways it shouldn't.)
If you're not going to do that... I don't know. Maybe this is still good advice, but I think that discussion would be a different essay, and my sense is that Scott wasn't actually trying to give that advice here.
In the comments, cousin_it and gjm describe the people who can and will do such work as "angel investors", which seems apt.
I feel like right now, the essay is advising people to be angel investors, and not acknowledging that that's risky if you're not careful, and difficult to do carefully. That feels like an overstep. A more careful version might instead advise: