Imagine a black box which, when you pressed a button, would generate a scientific hypothesis. Half of its hypotheses are false; half are true hypotheses as game-changing and elegant as relativity. Even with that error rate, it’s easy to see this box would quickly surpass space capsules, da Vinci paintings, and printer ink cartridges to become the most valuable object in the world. Scientific progress on demand, and all you have to do is test some stuff to see if it’s true? I don’t want to devalue experimentalists. They do great work. But it’s appropriate that Einstein is more famous than Eddington. If you took away Eddington, someone else would have tested relativity; the bottleneck is in Einsteins. Einstein-in-a-box, at the cost of requiring two Eddingtons per insight, is a heck of a deal.
What if the box had only a 10% success rate? A 1% success rate? My guess is: still the most valuable object in the world. Even a 0.1% success rate seems pretty good, considering what you could ask for (cancer cures, say, which we then test on lab rats and volunteers). You have to go pretty low before the box stops being great.
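How low, exactly? Here’s a back-of-envelope sketch in Python. The dollar figures (cost per test, value of a relativity-class insight) are invented assumptions for illustration only, but they show why the break-even hit rate ends up so tiny.

```python
# Back-of-envelope sketch: at what hit rate does the hypothesis box
# stop paying off? Every dollar figure below is an invented assumption.

COST_PER_TEST = 50_000_000       # assumed cost to rigorously test one hypothesis ($)
VALUE_OF_BREAKTHROUGH = 1e12     # assumed value of one relativity-class insight ($)

def expected_profit(hit_rate: float) -> float:
    """Expected value of pressing the button once and testing the result."""
    return hit_rate * VALUE_OF_BREAKTHROUGH - COST_PER_TEST

for hit_rate in (0.5, 0.1, 0.01, 0.001):
    print(f"hit rate {hit_rate:7.3%}: expected profit ${expected_profit(hit_rate):,.0f}")

# Break-even hit rate = COST_PER_TEST / VALUE_OF_BREAKTHROUGH = 0.005%
```

With these made-up numbers the box breaks even at a 0.005% hit rate, and you can shift the figures by an order of magnitude in either direction without changing the conclusion much.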
I thought about this after reading this list of geniuses with terrible ideas. Linus Pauling thought Vitamin C cured everything. Isaac Newton spent half his time working on weird Bible codes. Nikola Tesla pursued mad energy beams that couldn’t work. Lynn Margulis revolutionized cell biology by discovering mitochondrial endosymbiosis, but was also a 9/11 truther and doubted that HIV caused AIDS. Et cetera. Obviously this should happen. Genius often involves coming up with an outrageous idea contrary to conventional wisdom and pursuing it obsessively despite naysayers. But nobody can have a 100% success rate: the people who sometimes succeed at this should also sometimes fail at it, just because they’re the kind of person who attempts it at all. Not everyone fails. Einstein seems to have batted a perfect 1.000 (unless you count his support for socialism). But failure shouldn’t surprise us.
Yet aren’t some of these examples unforgivably bad? Like, seriously, Isaac – Bible codes? Well, granted, Newton’s chemical experiments may have exposed him to a little more mercury than is entirely healthy. But remember: gravity was considered creepy occult pseudoscience by its early enemies. It subjected the earth and the heavens to the same law, which shocked 17th-century sensibilities the same way trying to link consciousness and matter would today. It postulated that objects could act on each other through invisible forces at a distance, which was equally outside the contemporaneous Overton Window. Newton’s exceptional genius, his exceptional ability to think outside all relevant boxes, and his exceptionally egregious mistakes are all the same phenomenon (plus or minus a little mercury).
Or think of it a different way. Newton stared at problems that had vexed generations before him and noticed a subtle pattern everyone else had missed. He must have had amazingly hypersensitive pattern-matching going on. But people with that kind of hypersensitivity should also be the most likely to see patterns where none exist. Hence, Bible codes.
These geniuses are like our black boxes: generators of brilliant ideas, plus a certain failure rate. The failures can be easily discarded: physicists were able to take up Newton’s gravity without wasting time on his Bible codes. So we’re right to treat geniuses as valuable in the same way we would treat those boxes as valuable.
This goes not just for geniuses, but for anybody in the idea industry. Coming up with a genuinely original idea is a rare skill, much harder than judging ideas is. Somebody who comes up with one good original idea (plus ninety-nine really stupid cringeworthy takes) is a better use of your reading time than somebody who reliably never gets anything too wrong, but never says anything you find new or surprising. Alyssa Vance calls this positive selection – a single good call rules you in – as opposed to negative selection, where a single bad call rules you out. You should practice positive selection for geniuses and other intellectuals.
I think about this every time I hear someone say something like “I lost all respect for Steven Pinker after he said all that stupid stuff about AI”. Your problem was thinking of “respect” as a relevant predicate to apply to Steven Pinker in the first place. Is he your father? Your youth pastor? No? Then why are you worrying about whether or not to “respect” him? Steven Pinker is a black box who occasionally spits out ideas, opinions, and arguments for you to evaluate. If some of them are arguments you wouldn’t have come up with on your own, then he’s doing you a service. If 50% of them are false, then the best-case scenario is that they’re moronically, obviously false, so that you can reject them quickly and get on with your life.
I don’t want to take this too far. If someone has 99 stupid ideas and then 1 seemingly good one, obviously this should increase your probability that the seemingly good one is actually flawed in a way you haven’t noticed. If someone has 99 stupid ideas, obviously this should make you less willing to waste time reading their other ideas to see if any are really good. If you want to learn the basics of a field you know nothing about, obviously read a textbook. If you don’t trust your ability to figure out when people are wrong, obviously read someone with a track record of always representing the conventional wisdom correctly. And if you’re a social engineer trying to recommend what other people less intelligent than you should read, obviously steer them away from anyone who’s wrong too often. I just worry that too many people wear their social-engineer hat so often they forget how to take it off, forget that “intellectual exploration” is a different job from “promoting the right opinions about things” and requires different strategies.
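To put a rough number on that first caveat, you could model each thinker as a box with an unknown hit rate and update on their track record. Here’s a minimal beta-binomial sketch; the uniform prior is an arbitrary assumption of mine, not anything from the argument above.

```python
# Beta-binomial sketch: how a long track record of stupid ideas should
# lower your estimate that the next idea is good.
# The uniform Beta(1, 1) prior is an arbitrary assumption.

PRIOR_GOOD, PRIOR_BAD = 1.0, 1.0

def posterior_mean_hit_rate(good_ideas: int, bad_ideas: int) -> float:
    """Posterior mean of a thinker's hit rate, given their track record."""
    return (PRIOR_GOOD + good_ideas) / (PRIOR_GOOD + good_ideas + PRIOR_BAD + bad_ideas)

print(posterior_mean_hit_rate(0, 0))    # 0.5   -- no evidence yet
print(posterior_mean_hit_rate(1, 99))   # ~0.02 -- 1 good idea, 99 stupid ones
```

A 1-in-100 track record drags the posterior hit rate down to about 2%, which is why being less eager to wade through such a person’s output is rational even while their one good idea may still be worth mining.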
But consider the debate over “outrage culture”. Most of this focuses on moral outrage. Some smart person says something we consider evil, and so we stop listening to her or giving her a platform. There are arguments for and against this – at the very least it disincentivizes evil-seeming statements.
But I think there’s a similar phenomenon that gets less attention and is even less defensible – a sort of intellectual outrage culture. “How can you possibly read that guy when he’s said [stupid thing]?” I don’t want to get into defending every weird belief or conspiracy theory that’s ever been [stupid thing]. I just want to say it probably wasn’t as stupid as Bible codes. And yet, Newton.
Some of the people who have most inspired me have been inexcusably wrong on basic issues. But you only need one world-changing revelation to be worth reading.