A couple weeks after meeting me, Will Newsome gave me one of the best compliments I’ve ever received. He said: “Luke seems to have two copies of the Take Ideas Seriously gene.”
What did Will mean? To take an idea seriously is “to update a belief and then accurately and completely propagate that belief update through the entire web of beliefs in which it is embedded,” as in a Bayesian belief network.
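As a toy sketch of what that propagation looks like (the variables and numbers below are invented purely for illustration), here is a tiny three-node chain in Python: change the belief at the top and the change has to flow all the way down.

```python
# Toy illustration: propagating one update through a tiny "web of beliefs".
# Binary variables (names and numbers invented for this example):
#   A = "scientific progress continues"
#   B = "machine intelligence keeps improving"      (depends on A)
#   C = "an intelligence explosion eventually occurs" (depends on B)

p_a = 0.9                                # prior P(A)
p_b_given_a = {True: 0.8, False: 0.1}    # P(B=True | A)
p_c_given_b = {True: 0.7, False: 0.05}   # P(C=True | B)

def p_b(p_a):
    """P(B=True), marginalizing over A."""
    return p_b_given_a[True] * p_a + p_b_given_a[False] * (1 - p_a)

def p_c(p_a):
    """P(C=True), propagating through B."""
    pb = p_b(p_a)
    return p_c_given_b[True] * pb + p_c_given_b[False] * (1 - pb)

print("Before update: P(C) =", round(p_c(p_a), 3))

# New evidence shifts the belief in A; a consistent reasoner must
# propagate that shift all the way down to C, not just update A.
p_a_updated = 0.99
print("After update:  P(C) =", round(p_c(p_a_updated), 3))
```

The point isn’t the arithmetic; it’s that a consistent reasoner doesn’t get to stop after updating the first node.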
Belief propagation is what happened, for example, when I first encountered that thundering paragraph from I.J. Good (1965):
> Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an "intelligence explosion," and the intelligence of man would be left far behind... Thus the first ultraintelligent machine is the last invention that man need ever make.
Good’s paragraph ran me over like a train. Not because it was absurd, but because it was clearly true. Intelligence explosion was a direct consequence of things I already believed; I just hadn’t noticed! Humans do not automatically propagate their beliefs, so I hadn’t noticed that my worldview already implied intelligence explosion.
I spent a week looking for counterarguments, to check whether I was missing something, and then accepted intelligence explosion as likely (so long as scientific progress continued). And though I hadn’t read Eliezer on the complexity of value, I had read David Hume and Joshua Greene, so I already understood that an arbitrary artificial intelligence would almost certainly not share our values.
Having accepted the update about intelligence explosion, I propagated its implications throughout my web of beliefs. I realized that:
- Things can go very wrong, for we live in a world beyond the reach of God.
- Scientific progress can destroy the world.
- Strong technological determinism is true; purely social factors will be swamped by technology.
- Writing about philosophy of religion was not important enough to consume any more of my time.
- My highest-utility actions are either those that work toward reducing AI risk, or those that work toward making lots of money so I can donate to AI risk reduction.
- Moral theory is not idle speculation but an urgent engineering problem.
- Technological utopia is possible, but unlikely.
- The value of information concerning intelligence explosion scenarios is extremely high.
- Rationality is even more important than I already believed it was.
- and more.
I had encountered the I.J. Good paragraph on Less Wrong, so I put my other projects on hold and spent the next month reading almost everything Eliezer had written. I also found articles by Nick Bostrom and Steve Omohundro. I began writing articles for Less Wrong and learning from the community. I applied to Singularity Institute’s Visiting Fellows program and was accepted. I quit my job in L.A., moved to Berkeley, worked my ass off, got hired, and started collecting research related to rationality and intelligence explosion.
My story surprises people because it is unusual. Human brains don’t usually propagate new beliefs so thoroughly.
But this isn’t just another post on taking ideas seriously. Will already offered some ideas on how to propagate beliefs. He also listed some ideas that most people probably aren’t taking seriously enough. My purpose here is to examine one prerequisite of successful belief propagation: actually making sure your beliefs are connected to each other in the first place.
If your beliefs aren’t connected to each other, there may be no paths along which you can propagate a new belief update.
I’m not talking about the problem of free-floating beliefs that don’t control your anticipations. No, I’m talking about “proper” beliefs that require observation, can be updated by evidence, and pay rent in anticipated experiences. The trouble is that even proper beliefs can be inadequately connected to other proper beliefs inside the human mind.
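To make “no paths to propagate along” concrete, here’s a hypothetical sketch (the belief graph below is invented for illustration): treat beliefs as nodes, put an edge wherever one belief is allowed to bear on another, and ask which beliefs an update can even reach.

```python
from collections import deque

# Hypothetical belief graph: nodes are beliefs, edges mark which beliefs
# are allowed to bear on which others. The two beliefs on the bottom
# form an island with no edges into the rest of the web.
belief_graph = {
    "AI progress":            ["intelligence explosion"],
    "intelligence explosion": ["AI progress", "career plans", "value of rationality"],
    "career plans":           ["intelligence explosion"],
    "value of rationality":   ["intelligence explosion"],
    "design argument":        ["existence of God"],
    "existence of God":       ["design argument"],
}

def reachable(graph, start):
    """Beliefs an update at `start` can propagate to (breadth-first search)."""
    seen = {start}
    queue = deque([start])
    while queue:
        node = queue.popleft()
        for neighbor in graph[node]:
            if neighbor not in seen:
                seen.add(neighbor)
                queue.append(neighbor)
    return seen

touched = reachable(belief_graph, "AI progress")
untouched = set(belief_graph) - touched
print("Update reaches:", sorted(touched))
print("Never touched: ", sorted(untouched))
```

In this toy graph, an update about AI progress never touches the disconnected island, no matter how diligently you propagate, because no edge connects the two clusters.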
I wrote this post because I'm not sure what the "making sure your beliefs are actually connected in the first place" skill looks like when broken down to the 5-second level.
I was chatting about this with atucker, who told me he noticed that successful businessmen may have this trait more often than others. But what are they doing, at the 5-second level? What are people like Eliezer and Carl doing? How does one engage in the purposeful decompartmentalization of one's own mind?
Part of my day job involves doing design reviews for alterations to relatively complex software systems, often systems that I don't actually understand all that well to begin with. Mostly, what I catch are integration failures: places where an assumption in one module doesn't quite line up with the end-to-end system flow, or where an expected capability isn't actually supported by the system design, etc.
Which isn't quite the same thing as what you're talking about, but it has some similarities: being able to push through from an understanding of each piece of the system to an understanding of that piece's expected implications across the system as a whole.
But thinking about it now, I'm not really sure what that involves, certainly not at the 5-second level.
A few not-fully-formed thoughts:

- Don't get distracted by details.
- Look at each piece, figure out its center, and draw lines from center to center.
- Develop a schematic understanding before I try to understand the system in its entirety. If it's too complicated to understand that way, back out and come up with a different way of decomposing the system into "pieces" that results in fewer pieces. Repeat until I have a big-picture skeleton view.
- Then go back to the beginning, look at every detail, and for each detail explain how it connects to that skeleton. Not just understand, but explain: write it down, draw a picture, talk it through with someone.
- That includes initial requirements: explain what each requirement means in terms of that skeleton, and for each such requirement-thread find a matching design-thread.
So, of course someone's going to ask for a concrete example, and I can't think of how to be concrete without actually working through a design review in tedious detail, which I really don't feel like doing. I recognize that the above isn't really all that useful in and of itself, but maybe it's a place to start.
That's an interesting metaphor.
I wonder if it doesn't actually support compartmentalization. In software, you don't want a lot of links between the internals of different modules, and so it seems you might not want lots of links between different belief clusters either. Just make sure they're connected with a clean API.
Of course, that's not really compartmentalization, that's just not drawing any more arrows on your Bayes net than you need to. If your entire religious belief network really hangs on one empirical claim, that might be a sufficient connection between it and the rest of your beliefs, at least for efficiency.
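As a quick sketch of that last point (everything below is invented for illustration): a whole cluster of beliefs can hang off the rest of the net through a single claim, and an update to that one claim still propagates into every belief in the cluster.

```python
# Hypothetical sketch of a sparsely connected cluster: several beliefs each
# condition only on one empirical claim, so a single arrow into that claim
# is enough for updates to reach the whole cluster. Numbers are invented.

# P(belief | claim is True), P(belief | claim is False) for each cluster belief
cluster = {
    "belief 1": (0.9, 0.05),
    "belief 2": (0.8, 0.10),
    "belief 3": (0.7, 0.20),
}

def cluster_posteriors(p_claim):
    """Marginal probability of each cluster belief, given P(claim)."""
    return {name: p_true * p_claim + p_false * (1 - p_claim)
            for name, (p_true, p_false) in cluster.items()}

print("Before evidence:", cluster_posteriors(0.5))   # agnostic about the claim
print("After evidence: ", cluster_posteriors(0.05))  # evidence against the claim
```

One arrow is all the connection the cluster needs; the failure mode is having zero.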