Also keep in mind that it's more important to make your beliefs as correct as possible than to make them as consistent as possible. Of course the ultimate truth is both correct and consistent; however, it's perfectly possible to make your beliefs less correct by trying to make them more consistent. If you have two beliefs that each do a decent job of modeling separate aspects of reality, it's probably a good idea to keep both around, even if they seem to contradict each other. For example, both General Relativity and Quantum Mechanics do a good job modeling (parts of) reality despite being inconsistent with each other, and we want to keep both of them. Now think about what happens when a similar situation arises in a field (e.g., biology, psychology, your personal life) where the evidence is messier than it is in physics.
Something bothers me about this post. Querying my mind for "what bothers me about this post", the items that come up are:
The general theme appears to be "this post doesn't practice what it preaches".
If you did represent the beliefs you mention in this post in the form of a connected web, what would that look like?
(On a very few prior occasions I've tried such explicit representations, and was not motivated to keep using them.)
I hadn’t noticed that my worldview already implied intelligence explosion.
I'd like to see a post on that worldview. The possibility of an intelligence explosion seems to be an extraordinary belief. What evidence justified a prior strong enough that it could be updated by a single paragraph, written in natural language, to the extent that you would afterwards devote your whole life to that possibility?
I’m not talking about the problem of free-floating beliefs that don’t control your anticipations. No, I’m talking about “proper” beliefs that require observation, can be updated by evidence, and pay rent in anticipated experiences.
How do you expect your beliefs to pay rent? What kind of evidence could possibly convince you that an intelligence explosion is unlikely? How could your beliefs be surprised by data?
What evidence justified a prior strong enough that it could be updated by a single paragraph
I can't speak for lukeprog, but I believe that "update" is the wrong word to use here. If we acted like Bayesian updaters then compartmentalization wouldn't be an issue in the first place. I.J. Good's paragraph, rather than providing evidence, seems to have been more like a big sign saying "Look here! This is a place where you're not being very Bayesian!". Such a trigger doesn't need to be written in any kind of formal language - it could have been an offhand comment someone made on a completely different subject. It's simply that (to an honest mind), once attention is drawn to an inconsistency in your own logic, you can't turn back.
That said, lukeprog hasn't actually explained why his existing beliefs strongly implied an intelligence explosion. That wasn't the point of this post, but, like you, I'd very much like to see a post on it. I'm interested in trying to build a Bayesian case for or against the intelligence explosion (and other singularity-ish outcomes).
You're right that there's a problem obtaining evidence for or against beliefs about the future. I can think of thre...
The problem is that the utility might be so high (or low) that when you multiply it by this tiny probability you still get something huge.
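To make the multiplication concrete, here is a minimal sketch with made-up numbers (both the probability and the utility figures are purely illustrative):

```python
# Purely illustrative numbers: a tiny probability times an astronomically
# large (dis)utility can still dominate an expected-utility calculation.
p = 1e-12            # probability you assign to the mugger's threat being real
utility = -1e20      # disutility threatened if you refuse
expected_utility = p * utility
print(expected_utility)  # -1e8: enormous in magnitude despite the tiny p
```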
Don't worry about it; if you decline a Pascal's mugging I'll cause positive utility equal to twice the amount of negative utility you were threatened with, and if you accept one I'll cause negative utility equal to twice what you were threatened with.
Trust me.
This is interesting to me in a sort of tangential way. It seems like studying philosophy exercises this tendency to propagate your beliefs in order to make them coherent. In fact logical belief propagation seems to embody a large aspect of traditional philosophy, so I would expect that on average someone who studies philosophy would have this tendency to a greater degree than someone who doesn't.
It would be interesting to me if anyone has seen any data related to this, because it feels intuitively true that studying philosophy changed my way of thinking, but it's of course difficult to pinpoint exactly how. This seems like a big part of it.
This post made me realize just how important it is to completely integrate the new things you learn.
I have been reading a lot of books and blogs about students who finish school with honors but don't seem to work very hard while doing so. I also met one of those people in person (he finished an entire four-year curriculum with honors in just three months and is now a professor in that field).
It all boils down to the same thing: whatever the textbook is trying to tell you, make sure you integrate it into your life. Only then will you see if you really un...
I spent a week looking for counterarguments, to check whether I was missing something
What did you find? Had you missed anything?
Part of my day job involves doing design reviews for alterations to relatively complex software systems, often systems that I don't actually understand all that well to begin with. Mostly, what I catch are the integration failures; places where an assumption in one module doesn't line up quite right with the end-to-end system flow, or where an expected capability isn't actually being supported by the system design, etc.
Which isn't quite the same thing as what you're talking about, but has some similarities; being able to push through from an understanding...
This reminds me of a blog post by Julia Galef I read a couple weeks ago: The Penrose Triangle of Belief.
That's actually a good question. Let me rephrase it to something hopefully clearer:
Compartmentalization is an essential safety mechanism in the human mind; it prevents erroneous far-mode beliefs (which we all adopt from time to time) from having disastrous consequences. A man believes he'll go to heaven when he dies. Suicide is prohibited, as a patch for the obvious problem, but there's no requirement to make an all-out proactive effort to stay alive. Yet when he gets pneumonia, he gets a prescription for penicillin. Compartmentalization literally saves his...
if what we are observing doesn't constitute evidence against the Singularity in your opinion, then what would?
I'm not marchdown, but:
Estimating the probability of a Singularity requires looking at various possible advantages of digital minds and asking what would constitute evidence against such advantages being possible. Some possibilities:
The trouble is that even proper beliefs can be inadequately connected to other proper beliefs inside the human mind.
Proper beliefs can be too independent; if you have a belief network A -> B, and the probabilities of 'B given A' and 'B given not-A' are similar, then A doesn't tell you much about B. Observing A doesn't change your belief in B much, because the two beliefs aren't strongly connected.
But my guess is that most human brains store "A -> B" without storing "B given A" and "B given not-A". So they don't check the difference, so they...
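A quick numerical sketch of that point, with made-up conditional probabilities (assuming a simple two-node network A -> B):

```python
# Illustrative only: how much observing A moves your belief in B depends on
# the gap between P(B|A) and P(B|not-A).

def prior_b(p_a, p_b_given_a, p_b_given_not_a):
    """P(B) before observing A, by the law of total probability."""
    return p_b_given_a * p_a + p_b_given_not_a * (1 - p_a)

p_a = 0.5

# Weak connection: the two conditionals are nearly equal.
print(prior_b(p_a, 0.62, 0.60), "->", 0.62)   # 0.61 -> 0.62: A barely matters

# Strong connection: the conditionals differ a lot.
print(prior_b(p_a, 0.95, 0.20), "->", 0.95)   # 0.575 -> 0.95: A matters a lot
```

In the first case, learning A shifts P(B) by a single percentage point; in the second, it nearly doubles it.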
I don't know if this is an answer or a rephrasing of the problem, but "making sure your beliefs are propagated to the rest of your knowledge" is what I classified as the Level 2 Understanding.
A couple weeks after meeting me, Will Newsome gave me one of the best compliments I’ve ever received. He said: “Luke seems to have two copies of the Take Ideas Seriously gene.”
What did Will mean? To take an idea seriously is “to update a belief and then accurately and completely propagate that belief update through the entire web of beliefs in which it is embedded,” as in a Bayesian belief network.
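As a rough illustration of what propagating an update means, here is a toy chain A -> B -> C with invented conditional probabilities (the numbers are not from the post; they only show the mechanics):

```python
# Toy sketch: update P(A), then push the change down the chain A -> B -> C.
# All numbers are invented for illustration.

def marginal(p_parent, p_child_given_parent, p_child_given_not_parent):
    """Marginal probability of a child node, given its parent's probability."""
    return (p_child_given_parent * p_parent
            + p_child_given_not_parent * (1 - p_parent))

p_b_given_a, p_b_given_not_a = 0.9, 0.1
p_c_given_b, p_c_given_not_b = 0.8, 0.3

for p_a in (0.2, 0.9):  # belief in A before and after an update
    p_b = marginal(p_a, p_b_given_a, p_b_given_not_a)
    p_c = marginal(p_b, p_c_given_b, p_c_given_not_b)
    print(f"P(A)={p_a:.2f} -> P(B)={p_b:.2f} -> P(C)={p_c:.2f}")

# Failing to recompute P(B) and P(C) after changing P(A) is the
# compartmentalization failure described below.
```

Raising P(A) from 0.2 to 0.9 pulls P(B) from 0.26 to 0.82 and P(C) from 0.43 to 0.71; a mind that updates A but leaves B and C untouched has failed to propagate.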
Belief propagation is what happened, for example, when I first encountered that thundering paragraph from I.J. Good (1965).
Good’s paragraph ran me over like a train. Not because it was absurd, but because it was clearly true. Intelligence explosion was a direct consequence of things I already believed; I just hadn’t noticed! Humans do not automatically propagate their beliefs, so I hadn’t noticed that my worldview already implied intelligence explosion.
I spent a week looking for counterarguments, to check whether I was missing something, and then accepted that an intelligence explosion was likely (so long as scientific progress continued). And though I hadn’t read Eliezer on the complexity of value, I had read David Hume and Joshua Greene. So I already understood that an arbitrary artificial intelligence would almost certainly not share our values.
Accepting my belief update about intelligence explosion, I propagated its implications throughout my web of beliefs.
I had encountered the I.J. Good paragraph on Less Wrong, so I put my other projects on hold and spent the next month reading almost everything Eliezer had written. I also found articles by Nick Bostrom and Steve Omohundro. I began writing articles for Less Wrong and learning from the community. I applied to Singularity Institute’s Visiting Fellows program and was accepted. I quit my job in L.A., moved to Berkeley, worked my ass off, got hired, and started collecting research related to rationality and intelligence explosion.
My story surprises people because it is unusual. Human brains don’t usually propagate new beliefs so thoroughly.
But this isn’t just another post on taking ideas seriously. Will already offered some ideas on how to propagate beliefs. He also listed some ideas that most people probably aren’t taking seriously enough. My purpose here is to examine one prerequisite of successful belief propagation: actually making sure your beliefs are connected to each other in the first place.
If your beliefs aren’t connected to each other, there may be no paths along which you can propagate a new belief update.
I’m not talking about the problem of free-floating beliefs that don’t control your anticipations. No, I’m talking about “proper” beliefs that require observation, can be updated by evidence, and pay rent in anticipated experiences. The trouble is that even proper beliefs can be inadequately connected to other proper beliefs inside the human mind.
I wrote this post because I'm not sure what the "making sure your beliefs are actually connected in the first place" skill looks like when broken down to the 5-second level.
I was chatting about this with atucker, who told me he noticed that successful businessmen may have this trait more often than others. But what are they doing, at the 5-second level? What are people like Eliezer and Carl doing? How does one engage in the purposeful decompartmentalization of one's own mind?