I, the author, no longer endorse this post.
Abstrummary: I describe a central technique of epistemic rationality that bears directly on instrumental rationality, and that I do not believe has been explicitly discussed on Less Wrong before. The technique is rather simple: it is the practice of taking ideas seriously. I also present the rather simple metaphor of an 'interconnected web of belief nodes' (like a Bayesian network) to describe what it means to take an idea seriously: it is to update a belief and then accurately and completely propagate that belief update through the entire web of beliefs in which it is embedded. I then give a few examples of ideas to take seriously, followed by reasons to take ideas seriously and what bad things happen if you don't (or society doesn't). I end with a few questions for Less Wrong.
Eliezer Yudkowsky and Michael Vassar are two rationalists who have something of an aura of formidability about them. This is especially true of Michael Vassar in live conversation, where he's allowed to jump around from concept to concept without being penalized for not having a strong thesis. Eliezer did something similar in his writing by creating a foundation of reason upon which he could build new concepts without having to start explaining everything anew every time. Michael and Eliezer know a lot of stuff, and are able to make connections between the things that they know: seeing which nodes of knowledge are relevant to their beliefs or decisions, or, if that fails, knowing which algorithm they should use to figure out which nodes of knowledge are likely to be relevant. They have all the standard Less Wrong rationality tools too, of course, and a fair number of heuristics and dispositions that haven't been covered on Less Wrong. But I believe it is this aspect of their rationality, the coherent and cohesive and carefully balanced web of knowledge and belief nodes, that causes people to perceive them as formidable rationalists, of a kind not to be disagreed with lightly.
The common trait of Michael and Eliezer and all top tier rationalists is their drive to really consider the implications and relationships of their beliefs. It's something like a failure to compartmentalize; it's what has led them to develop their specific webs of knowledge, instead of developing one web of beliefs about politics that is completely separate from their webs of belief about religion, or science, or geography. Compartmentalization is the natural and automatic process by which belief nodes or groups of belief nodes become isolated from their overarching web of beliefs, or many independent webs are created, or the threads between nodes are not carefully and precisely maintained. It is the ground state of your average scientist. When Eliezer first read about the idea of a Singularity, he didn't do what I, and probably almost anybody else in the world, would have done at that moment: he didn't think "Wow, that's pretty neat!" and then go on to study string theory. He immediately saw that this was an idea that needed to be taken seriously, a belief node of great importance that necessarily affects every other belief in the web. It's something that I don't have naturally (not that it's either binary or genetic), but it's a skill that I'm reasonably sure can be picked up and used immediately, as long as you have a decent grasp of the fundamentals of rationality (as can be found in the Sequences).
Taking an idea seriously means:
- Looking at how a new idea fits in with your model of reality and checking for contradictions or tensions that may indicate the need to update a belief, and then propagating that belief update through the entire web of beliefs in which it is embedded. When a belief or a set of beliefs changes, that can in turn have huge effects on your overarching web of interconnected beliefs. (The best example I can think of is religious deconversion: there are a great many things you have to change about how you see the world after deconversion, even deconversion from something like deism. I sometimes wish I could have had such an experience; I can only imagine that it must feel both terrifying and exhilarating.) Failing to propagate that change leads to trouble. Compartmentalization is dangerous.
- Noticing when an idea seems to be describing a part of the territory where you have no map. Drawing a rough sketch of the newfound territory and then seeing in what ways that changes how you understand the parts of the territory you've already mapped.
- Not just examining an idea's surface features and then accepting or dismissing it. Instead looking for deep causes. Not internally playing a game of reference class tennis.
- Explicitly reasoning through why you think the idea might be correct or incorrect, what implications it might have both ways, and leaving a line of retreat in both directions. Having something to protect should fuel your curiosity and prevent motivated stopping.
- Noticing confusion.
- Recognizing when a true or false belief about an idea might lead to drastic changes in expected utility.
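The 'web of belief nodes' metaphor above can be made concrete with a toy sketch. This is purely illustrative (the chain, the variables, and all the probabilities are made up for the example, not anything from the post): a three-node Bayesian chain of binary beliefs, where revising your credence in one node propagates through an intermediate node and shifts your credence in a belief two links away.

```python
# Toy 'web of beliefs': a three-node Bayesian chain A -> B -> C
# with binary (true/false) variables. Updating the belief in A
# propagates through B and changes the probability assigned to C.
# All numbers are invented for illustration.

p_b_given_a = {True: 0.9, False: 0.2}  # P(B=true | A)
p_c_given_b = {True: 0.8, False: 0.1}  # P(C=true | B)

def p_c(p_a):
    """Marginal P(C=true) given the current credence p_a = P(A=true)."""
    # Propagate through the intermediate node B by marginalizing it out.
    p_b = p_a * p_b_given_a[True] + (1 - p_a) * p_b_given_a[False]
    return p_b * p_c_given_b[True] + (1 - p_b) * p_c_given_b[False]

before = p_c(0.5)   # belief in C while agnostic about A
after = p_c(0.95)   # belief in C after strong evidence for A
print(before, after)
```

Here the update to A is "taken seriously": it is not allowed to sit isolated, but is pushed through every link it touches, so the credence in C rises even though no direct evidence about C was observed. Compartmentalization would be the failure mode of leaving `before` unchanged after learning about A.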
There are many ideas that should be taken a lot more seriously, both by society and by Less Wrong specifically. Here are a few:
- Existential risks and the possibilities for methods of prevention thereof.
- Molecular nanotechnology.
- The technological singularity (especially timelines and planning).
- Cryonics.
- World economic collapse.
Some potentially important ideas that I readily admit to not yet having taken seriously enough:
- Molecular nanotechnology timelines.
- Ways to protect against bioterrorism.
- The effects of drugs of various kinds and methodologies for researching them.
- Intelligence amplification.
And some ideas that I did not immediately take seriously when I should have:
- Tegmark's multiverses and related cosmology and the manifold implications thereof (and the related simulation argument).
- The subjective for-Will-Newsome-personally irrationality of cryonics.1
- EMP attacks.
- Updateless-like decision theory and the implications thereof.
- That philosophical and especially metaphysical intuitions are not strong evidence.
- The idea of taking ideas seriously.
- And various things that I probably should have taken seriously, and would have if I had known how to, but that I now forget because I failed to grasp their gravity at the time.
I also suspect that there are ideas that I should be taking seriously but do not yet know enough about; for example, maybe something to do with my diet. I could very well be poisoning myself and my cognition without knowing it, because I haven't looked into the possible dangers of the various things I eat. Maybe corn syrup is bad for me? I dunno; but nobody's ever sat me down and told me I should look into it, so I haven't. That's the problem with ideas that really deserve to be taken seriously: it's very rare that someone will take the time to make you do the research and really think about it in a rational and precise manner. No one will call you out when you fail to do so. No one will hold you to a high standard. You must hold yourself to that standard, or you'll fail.
Why should you take ideas seriously? Well, if you have Something To Protect, then the answer is obvious. That's always been my inspiration for taking ideas seriously: I force myself to investigate any way to help that which I value to flourish. This manifests on both the small and the large scale: if a friend is going to get a medical operation, I research the relevant literature and make sure that the operation is effective and safe. And if I find out that the development of an unFriendly artificial intelligence might lead to the pointless destruction of everyone I love and everything I care about and any value that could be extracted from this vast universe, then I research the relevant literature there, too. And then I keep on researching. What if you don't have Something To Protect? If you simply have a desire to figure out the world -- maybe not an explicit desire for instrumental rationality, but at least epistemic rationality -- then taking ideas seriously is the only way to figure out what's actually going on. For someone passionate about answering life's fundamental questions to miss out on Tegmark's cosmology is truly tragic. That person is losing a vista of amazing perspectives that may or may not end up allowing them to find what they seek, but at the very least is going to change for the better the way they think about the world.
Failure to take ideas seriously can lead to all kinds of bad outcomes. On the societal level, it leads to a world where almost no attention is paid to catastrophic risks like nuclear EMP attacks. It leads to scientists talking about spirituality with a tone of reverence. It leads to statisticians playing the lottery. It leads to an academia where an AGI researcher who completely understands that the universe is naturalistic and beyond the reach of God fails to realize that this means an AGI could be really, really dangerous. Even people who make entire careers out of an idea somehow fail to take it seriously, to see its implications and how it should move in perfect alignment with every single one of their actions and beliefs. If we could move in such perfect alignment, we would be gods. To be a god is to see the interconnectedness of all things and shape reality accordingly. We're not even close. (I hear some folks are working on it.) But if we are to become stronger, that is the ideal we must approximate.
Now, I must disclaim: taking certain ideas seriously is not always best for your mental health. There are some cases where it is best to recognize the danger and move on to other ideas. Brains are fragile, and some ideas are viruses that cause chaotic mutations in your web of beliefs. Curiosity and diligence are not always your friends, and even those with exceptionally high SAN points can't read too much Eldritch lore before having to retreat. Not only can ignorance be bliss, it can also be the instrumentally rational state of mind.2
What are ideas you think Less Wrong hasn't taken seriously? Which haven't you taken seriously, but would like to once you find the time or gain the prerequisite knowledge? Is it best to have many loosely connected webs of belief, or one tightly integrated one? Do you have examples of a fully executed belief update leading to massive or chaotic changes in a web of belief? Alzheimer's disease may be considered an 'update' where parts of the web of belief are simply erased, and I've already listed deconversion as another. What kinds of advantages could compartmentalization give a rationalist?
1 I should write a post about reasons for people under 30 not to sign up for cryonics. However, doing so would require writing a post about Singularity timelines, and I really, really don't want to write that one. It seems that a lot of LWers have AGI timelines that I would consider... erm, ridiculous. I've asked Peter de Blanc to bear the burden of proof, and I'm going to bug him about it every day until he writes up the article.
2 If you snarl at this idea, try playing with this Litany, and then playing with how you play with this Litany:
If believing something that is false gets me utility,
I desire to believe in that falsity;
If believing something that is true gets me utility,
I desire to believe in that truth;
Let me not become attached to states of belief that do not get me utility.