"the map is not the territory" has stuck in my mind as one of the over-arching principles of rationality. it reinforces the concept of self-doubt, implies one should work to make their map conform more closely to the territory, and is invaluable when one believes to have hit a cognitive wall. there are no walls, just the ones drawn on your map.
the post, "mysterious answers to mysterious questions" is my favorite post that dealt with this topic, though it has been reiterated (and rightly so) over a multitude of postings.
link: http://www.overcomingbias.com/2007/08/mysterious-answ.html
"Newcomb's Problem and Regret of Rationality" is one of my favorites. For all the excellent tools of rationality that stuck with me, this is the one that most globally encompassed Eliezer's general message: that rationality is about success, first and foremost, and if whatever you're doing isn't getting you the best outcome, then you're not being rational, even if you appear rational.
"A rationalist should win". Very high-level meta-advice and almost impossible to directly apply, but it keeps me oriented.
I was late to vote, to put it mildly, but nonetheless... The Power of Intelligence. This is probably what impressed me the most and changed my Spock-like attitude towards intelligence. From memory: a gun is stronger than a brain - as if people were born holding guns. Social skills are more important than intelligence - as if charisma resided in the kidneys. Money is more powerful than the mind - as if it grew on trees. And people ask how an AI could make money when all it has is a mind. A million years ago, soft creatures roamed the savanna, and you would have called it absurd to claim that they, and not the lions, would rule the planet. Soft creatures have no armor, claws, or poison. How could they work metal when they don't breathe fire? To say that they will split the atomic nucleus is simply nonsense; they are not even radioactive. And evolution will not have time to act here, because no one can reproduce fast enough to produce all these results in a mere million years. But the brain is more dangerous than nuclear weapons, because the brain generates nuclear weapons, and things more dangerous still. Look at the difference between man and monkey. And now tell me what arti...
Your explanation / definition of intelligence as an optimization process. (Efficient Cross-Domain Optimization)
That was a major "aha" moment for me.
The most important thing I can recall is conservation of expectation. In particular, I'm thinking of Making Beliefs Pay Rent and Conservation of Expected Evidence. We need to see a greater commitment to deciding in advance which direction new evidence will shift our beliefs.
Most frequently referenced concepts:
Engines of Cognition was the final thing I needed to assimilate the idea that nothing is free: intelligence does not magically let you do anything; it has costs and limitations, and obeys the second law of thermodynamics. Or rather, that both obey the same underlying principle.
The most important thing I learned from Overcoming Bias was to stop viewing the human mind as a blank slate, ideally a blank slate, an approximation to a blank slate, or anything with properties even slightly resembling blankness or slateness. The rest is just commentary - admittedly very, very good commentary.
The posts I associate with this are everything on evolutionary psychology such as Godshatter (second most important thing I learned: study evolutionary psychology!), the free will series, the "ghost in the machine" and "ideal philosopher of perfect emptiness" series, and the Mind Projection Fallacy.
The biggest "aha" post was probably the one linking thermodynamics to beliefs ( The Second Law of Thermodynamics, and Engines of Cognition, and the following one, Perpetual Motion Beliefs ), because it linked two subjects I knew about in a surprising and interesting way, deepening my understanding of both.
Apart from that, "Tsuyoku Naritai" was the one that got me hooked, though I didn't really "learn" anything by it - I like the attitude it portrays.
"Obviously the vast majority of my OB content can't go into the book, because there's so much of it."
I know this is not what you asked for, but I'd like to vote for a long book. I feel that the kind of people who will be interested by it (and readers of OB) probably won't be intimidated by the page count, and I know that I'd really like to have a polished paper copy of most of the OB material for future reference. The web just isn't quite the same.
In short: Something that is Gödel, Escher, Bach-like in length probably wouldn't be a problem, though maybe there are good reasons other than "there is too much material" to keep it shorter.
I'm going to have to choose "How to Convince Me That 2 + 2 = 3." It did quite a lot to illuminate the true nature of uncertainty.
http://www.overcomingbias.com/2007/09/how-to-convince.html
The ideas in it are certainly not the most important, but another really striking post for me is "Surprised by Brains." The lines "Skeptic: Yeah? Let's hear you assign a probability that a brain the size of a planet could produce a new complex design in a single day. / Believer: The size of a planet? (Thinks.) Um... ten percent." in particular are really helpful in fighting biases that cause me to regard conservative estimates as somehow more virtuous.
A while back, I posted on my blog two lists with the posts I considered the most useful on Overcoming Bias so far.
If I just had to pick one? That's tough, but perhaps burdensome details. The skill of both cutting away all the useless details from predictions, and seeing the burdensome details in the predictions of others.
An example: Even though I was pretty firmly an atheist before, arguments like "people have received messages from the other side, so there might be a god" wouldn't have appeared structurally in error. I would have questioned whet...
Hard to pick a favourite, of course, but there's a warning against confirmation bias that has stuck with me: it cautions us against standing firm, urging us instead to move with the evidence like grass in the wind.
On the general discussions of what sort of book I want, I want one no more than a couple of hundred pages long which I can press into the hands of as many of my friends as possible. One that speaks as straightforwardly as possible, without all the self-aggrandizing eastern-guru type language...
A near-tie. Either:
(1) The Bottom Line, or
(2) Realizing there's actually something at stake that, like, having accurate conclusions really matters for (largely, Eliezer's article on heuristics and biases in global catastrophic risks, which I read shortly before finding OB), or
(3) Eliezer's re-definition of humility in "12 virtues", and the notion in general that I should aim to see how far my knowledge can take me, and to infer all I can, rather than just aiming to not be wrong (by erring on the side of underconfidence).
(1) wasn't a new tho...
I'm going to go with "Knowing About Biases Can Hurt People", but only because I got the Mind Projection Fallacy straight from Jaynes.
The most important thing for me, basically, was the morality sequence and in particular The Moral Void. I was worrying heavily about whether any of the morals I valued were justified in a universe that lacked Intrinsic Meaning. The Morality sequence (and Nietzsche, incidentally) helped me internalize that it's OK after all to value certain things - that it's not irrational to have a morality - that there's no Universal Judge condemning me for the crime of parochialism if I value myself, my friends, humanity, beauty, knowledge, etc. - and that even my flight ...
I refuse to name just one thing. I can't rank a number of ideas by how important they were relative to each other, they were each important in their own right. So, to preserve the voting format, I'll just split my suggestions into several comments.
Some notes in general. The first year I used to partially misinterpret some of your essays, but after I got a better grasp of underlying ideas, I saw many of the essays as not contributing any new knowledge. This is not to say that the essays were unimportant: they act as exercises, exploring the relevant ideas i...
The most important and useful thing I learned from your OB posts, Eliezer, is probably the mind-projection fallacy: the knowledge that the adjective "probable" and the adverb "probably" always make an implicit reference to an agent (usually the speaker).
Honorable mention: the fact that there is no learning without (inductive) bias.
It's hard to answer this question, given how much of your philosophy I have incorporated wholesale into my own, but I think it's the fundamental idea that there are Iron Laws of evidence, that they constrain exactly what it is reasonable to believe, and that no mere silly human conceit such as "argument" or "faith" can change them even in the millionth decimal place.
The most important thing I learned may have been how to distinguish actual beliefs from meaningless sounds that come out of our mouths. Beliefs have to pay the rent. (http://www.overcomingbias.com/2007/07/making-beliefs-.html)
If my priors are right, then genuinely new evidence should move my beliefs like a random walk. In particular: when I see something complicated that I take to be new evidence, and the story behind it seems to obviously confirm my beliefs in every particular, I need to be very suspicious.
http://www.overcomingbias.com/2007/08/conservation-of.html
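That conservation property is easy to verify numerically. A minimal sketch, with probabilities invented purely for illustration:

```python
# Numeric check of conservation of expected evidence: the prior equals the
# probability-weighted average of the possible posteriors, so you cannot
# expect evidence to shift your belief in a predetermined direction.
prior_h = 0.3          # P(H): prior belief in hypothesis H (invented number)
p_e_given_h = 0.8      # P(E|H): probability of evidence E if H is true
p_e_given_not_h = 0.2  # P(E|~H): probability of E if H is false

p_e = p_e_given_h * prior_h + p_e_given_not_h * (1 - prior_h)      # P(E)
posterior_if_e = p_e_given_h * prior_h / p_e                       # P(H|E)
posterior_if_not_e = (1 - p_e_given_h) * prior_h / (1 - p_e)       # P(H|~E)

expected_posterior = posterior_if_e * p_e + posterior_if_not_e * (1 - p_e)
assert abs(expected_posterior - prior_h) < 1e-12
print(prior_h, expected_posterior)  # both ~0.3
```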
I've been enjoying the majority of OB posts, but here's the list of ideas I consider the most important for me:
Intelligence as a process steering the future into a constrained region.
The map / territory distinction.
The use of probability theory to quantify the degree of belief.
Is this to be a book that somebody could give to their grandmother and expect the first page to convince her that the second is worth reading?
The Wrong Question sequence was amazing. One of the very unintuitive sequences that greatly improved my categorization methods. Especially with the 'Disguised Queries' post.
Your debunking of philosophical zombies really stuck with me. I don't think I've ever done a faster 180 on my stance on a philosophical argument.
The most important thing I learned was the high value of the outside perspective. It is something that I strive to deploy deliberately through getting into intentional friendships with other aspiring rationalists at Intentional Insights. We support each other’s ability to achieve goals in life through what we came to call a goal buddy system, providing an intentional outside perspective on each other’s thinking about life projects and priorities.
definitely "materialism"...especially the idea that there are no ontologically basic mental entities.
The most important thing for me, is the near-far bias - even though that's a relatively recent "discovery" here, it still resonates very well with why I argue with people about things, and why people who I respect argue with each other.
All things that, if pushed with the right questions, I'd have come to on my own - but all three are put very beautifully.
That clear thinking can take you from obvious but wrong to non-obvious but right, and on issues of great importance. That we frequently incur great costs just because we're not really nailing things down.
Looking over the list of posts, I suggest the ones starting with 'Fake'
"Shut Up and Do the Impossible", and its dependencies.
The concept of fighting a rearguard action against the truth.
The series of posts about "free will". I was always a determinist but somehow refused to think about free will in detail, holding the belief that determinism and free will were compatible for some mysterious reason. OB helped me see things clearly (now it all seems pretty obvious).
I vote for "Conservation of Expected Evidence." The essential answer to supposed evidence from irrationalists.
Second place, either "Occam's Razor" or "Decoherence is Falsifiable and Testable" for the understandable explanation of technical definitions of Occam's Razor.
The intuitive breakthrough for me was realizing that given a proposition P and an argument A that supports or opposes P, then showing that A is invalid has no effect on the truth or falsehood of P, and showing that P is true has no effect on the validity of A. This is the core of the "knowing biases can hurt you" problem, and while it's obvious if put in formal terms, it's counterintuitive in practice. The best way to get that to sink in, I think, is to practice demolishing bad arguments that support a conclusion you agree with.
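One way to see this concretely is to model the argument as a material implication from a premise A to the conclusion P - a deliberately simplified toy model - and enumerate the cases:

```python
from itertools import product

# Enumerate truth assignments for premise A and conclusion P, reading the
# argument as the material implication A -> P. A sound argument (A true and
# A -> P true) forces P to be true; refuting the argument leaves P open.
p_values_after_refutation = set()
for a, p in product([True, False], repeat=2):
    a_implies_p = (not a) or p
    argument_sound = a and a_implies_p
    if not argument_sound:              # the argument for P has been demolished
        p_values_after_refutation.add(p)

# Both P = True and P = False are consistent with the argument's failure:
assert p_values_after_refutation == {True, False}
```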
"You cannot rely on anyone else to argue you out of your mistakes; you cannot rely on anyone else to save you; you and only you are obligated to find the flaws in your positions"
It wasn't much of an "aha!" moment - when I first read it, I thought something along the lines of "Of course higher standards are possible, but if no one can find flaws in your argument, you're doing pretty well." But the more I thought about it, the more I realized that EY had made a good point. I later stumbled upon flaws in my long-standing arguments...
I've been reading OB for a comparatively short time, so I haven't yet been through the vast majority of your posts. But "The Sheer Folly of Callow Youth" really puts in perspective the importance of truth-seeking and why it's necessary.
Quote: "Of this I learn the lesson: You cannot manipulate confusion. You cannot make clever plans to work around the holes in your understanding. You can't even make "best guesses" about things which fundamentally confuse you, and relate them to other confusing things. Well, you can, but you won...
For me this is a tough question, since I've been reading your stuff for nearly 10 years now. Thinking only of OB, I'd have to say it was the quantum physics material - but only because I had already encountered essentially everything else in one form or another, so your writing was refining the presentation of what I had already learned from you.
Clearing up my meta-ethical confusion regarding utilitarianism. From The "Intuitions" Behind "Utilitarianism":
Whatever value is worth thinking about at all, must be worth trading off against all other values worth thinking about, because thought itself is a limited resource that must be traded off. When you reveal a value, you reveal a utility.
Realizing that the expression of any set of values must inherently "sum to 1" was quite an abrupt and obviously-true-in-retrospect revelation.
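That "sum to 1" intuition can be made concrete with a toy normalization; the value names and weights below are invented purely for illustration:

```python
# Toy normalization of value weights: with finite thought and resources, only
# relative weights matter, so expressed values behave like shares summing to 1.
raw_weights = {"friends": 5.0, "knowledge": 3.0, "beauty": 2.0}

def normalize(weights):
    total = sum(weights.values())
    return {name: w / total for name, w in weights.items()}

shares = normalize(raw_weights)  # {'friends': 0.5, 'knowledge': 0.3, 'beauty': 0.2}
assert abs(sum(shares.values()) - 1.0) < 1e-12

# Raising the weight on one value necessarily shrinks every other share:
raw_weights["beauty"] *= 2
assert normalize(raw_weights)["friends"] < shares["friends"]
```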
This is really from times before OB, and might be all too obvious, but the most important thing I've learned from your writings (so far) is Bayesian probability. I had come across the concept previously, but I didn't understand it fully, or understand why it was very important, until I read your early explanatory essays on the topic. When you write your book, I'm sure that you will not neglect to include really good explanations of these things, suitable for people who have never heard of them before, but since no one else has mentioned it in this thread so far, I thought I might.
1) I learned to reconcile my postmodernist trends with physical reality. Sounds cryptic? Well let's say I learned to appreciate science a little more than I did.
2) I learned to think more "statistically" and probabilistically - though I didn't learn to multiply.
3) Winning is also a pretty good catch-word for an attitude of mind - and maybe a better title than less-wrong.
"Thou art Godshatter" -- this was one of the first posts I read, and it made the entire heuristics and biases program feel more immediate / compelling than before
Expecting Short Inferential Distances
The 'Shut up and do the impossible' sequence.
Newcomb's Problem.
Godshatter.
Einstein's arrogance.
Joy in the merely real.
The Cartoon Guide to Löb's Theorem.
Science isn't strict enough.
The bottom line.
Well, I'd say the most important thing I learned was to be less confident when taking a stand on controversial topics. So to that end, I'll nominate
The Simple Truth followed by A Technical Explanation of Technical Explanation, given some familiarity with probability theory, formed the basic understanding of Bayesian perspective on probability as quantity of belief. The most confusing point of Technical Explanation involving a tentacle was amended in the post about antiprediction on OB. It's very important to get this argument early on, as it forms the language for thinking about knowledge.
Thanks for the link to the list - I keep forgetting that exists. And thanks again to Andrew Hay for making it.
That said, I don't think I would say I learned anything from your OB posts, at least about rationality. I think I did learn about young Eliezer and possibly about aspiring rationalists in general. If that's a reasonable topic, then I'd have to suggest something in the "Young Eliezer" sequence, possibly My Wild and Reckless Youth.
There are several variations on the questions you're asking that I think I could find answers to:
"Which...
I liked philosophy before OB, so I knew you were supposed to question everything. OB revealed new things to question, and taught me to expect genuine answers.
"I suspect that most existential angst is not really existential. I think that most of what is labeled 'existential angst' comes from trying to solve the wrong problem", from Existential Angst Factory.
I don't know about "most important", but the one post that really stuck in my mind was Archimedes's Chronophone. I spent a while thinking about that one.
Just did a quick search of this page and it didn't turn up... so, by far, the most memorable and referred-to post I've read on OB is Crisis of Faith.
I really can't think of any one single thing. Part of it is that I think I hadn't yet "dehindsightbiased" myself (still haven't, except that now I can sometimes catch myself as it's happening and say "No! I didn't know that before; stop trying to pretend that I did.")
Another part is that lots of posts helped crystallize/sharpen notions I'd been a bit fuzzy on. Part of it is just, well, the total effect.
Stuff like the Evolution sequence and so on were useful to me too.
If I had to pick one thing that stands out in my mind though, I guess I'd have ...
The idea that the purpose of the law is to provide structure for optimization.
I'm not sure this is the most important thing I've learned yet, but it's the only really 'aha' moment I've had in the admittedly small sample I've been able to catch up on thus far.
I find I think about this most often as I contemplate the effect traffic laws and implements have in shaping my 20 minute optimization exercise in getting to work each morning.
I'm not sure I've "learned" anything. You've largely convinced me that we don't really "know" anything but rather have varying degrees of belief, but I believed that to some degree before reading this site and am not 100% convinced of it now.
The most important thing I can think of that I would have said is almost certainly wrong before and that I'd say is probably right now is that it is legitimate to multiply the utility of a possible outcome by its probability to get the utility of the possibility.
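As a minimal worked example of that multiplication (the outcomes and numbers are made up):

```python
# Minimal expected-utility calculation: the utility of an uncertain prospect
# is the probability-weighted sum of the utilities of its possible outcomes.
outcomes = [
    (0.7, 10.0),    # (probability, utility) - illustrative numbers only
    (0.2, -5.0),
    (0.1, 100.0),
]
assert abs(sum(p for p, _ in outcomes) - 1.0) < 1e-12
expected_utility = sum(p * u for p, u in outcomes)
print(expected_utility)  # 0.7*10 + 0.2*(-5) + 0.1*100 = 16.0
```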
Prices or Bindings?, and to a lesser extent (although with a simpler formal statement) Newcomb's Problem and The True Prisoner's Dilemma: they show just how insanely alien the rational thing to do can be, even when it's directed to your own cause. You may need to conscientiously avoid preventing the destruction of the world, decline free money, and trade a billion human lives for one paperclip.
Righting a Wrong Question: how everything you observe calls for understanding, how even utter confusion or a lie can communicate positive knowledge. There are always causes behind any apparent confusion, so if the situation doesn't make sense the way it's supposed to be interpreted, you can always step back and see how it really works, even if you are not supposed to look at it that way. For example, don't simply trust your thought; instead, catch your own mind in the process of making the mistake.
Overcoming Bias: Thou Art Godshatter: understanding how insanely intricate human psychology is, and how one should avoid inventing simplistic Fake Utility Functions for human behavior. I used to make this mistake. Also relevant: Detached Lever Fallacy, how there's more to other mental operations than meets the eye.
Intelligence as a blind optimization process shaping the future -- esp. in comparison with evolution -- and how the effect of our built-in anthropomorphism makes us see intelligence as existing, when in fact, ALL intelligence is blind. Some intelligence processes are just a little less blind than others.
(Somewhat offtopic, but related: some studies show that the number of "good" ideas produced by any process is linearly proportional to the TOTAL number of ideas produced by that process... which suggests that even human intelligence searches blindly, once we go past the scope of our existing knowledge and heuristics.)
I'm going to echo CatDancer: for me the most valuable insight was that a little information goes a very long way. From the example of the simulated beings breaking out to the Bayescraft interludes to the few observations and lots of cogitations in Three Worlds Collide to GuySrinivasan's random-walk point, I've become more convinced that you can get a surprising amount of utility out of a little data; this changes other beliefs like my assessment of how possible AI rapid takeoff is.
The explanation of Bayes Theorem and pointer to E. T. Jaynes. It gave me a statistics that is useful as well as rigorous, as opposed to the gratuitously arcane and not very useful frequentist stuff I was exposed to in grad school.
Second would be the quantum mechanics posts - finally an understandable explanation of the MW interpretation.
#1: Teacher's Password http://www.overcomingbias.com/2007/08/guessing-the-te.html
I happened to have a young child about to enter elementary school when I read that, and it crystallized my concern about rote memorization. I forced many fellow parents to read the essay as well.
I realize you mostly care about #1, but just for more data: #2 I'd probably put the Quantum Physics sequence, although that is a large number of posts, and the effect is hard to summarize in a few pages. For #3 I liked (within evolution) that we are adaptation-executers, not fitness-maximizers.
Priors as Mathematical Objects: a prior is not something arbitrary, nor a mere state of lack-of-knowledge, and sufficient evidence cannot turn an arbitrary prior into a precise belief. The prior is the whole algorithm for what to do with evidence, and a bad prior can easily turn evidence into stupidity.
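A toy sketch of that idea - the same evidence stream run through different priors, including a dogmatic prior of exactly zero that no evidence can ever move (all numbers invented for illustration):

```python
# The prior as part of the algorithm: the same evidence stream produces very
# different trajectories under different priors, and a dogmatic prior of
# exactly 0 can never be moved by any amount of evidence.
def update(prior, likelihood_h, likelihood_not_h):
    """One Bayesian update of P(H) on a single observation."""
    numerator = likelihood_h * prior
    return numerator / (numerator + likelihood_not_h * (1 - prior))

for prior in (0.5, 0.01, 0.0):   # open-minded, skeptical, dogmatic
    belief = prior
    for _ in range(20):          # 20 observations, each 4x likelier under H
        belief = update(belief, 0.8, 0.2)
    print(prior, "->", round(belief, 6))
# 0.5 and 0.01 both end up near 1; 0.0 stays exactly 0 forever.
```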
P.S. I wonder if this post was downvoted exclusively because of Eliezer's administrative remark, and not because of its content.
I'm going to break with the crowd here.
I don't think that the Overcoming Bias posts, even cleaned up, are suitable for a book on how to be rational. They are something like a sequence of diffs of a codebase as it was developed. You can get a feel of the shape of the codebase by reading the diffs, particularly if you read them steadily, but it's not a great way to communicate the shape.
A book probably needs more procedures on how to behave rationally:
- How to use likelihood ratios
- How to use utility functions
- Dutch books: what they are and how to avoid them (see the toy example below)
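As a toy illustration of the Dutch book idea (the prices are invented for the example): an agent whose stated probabilities for an event and its complement sum to more than 1 can be sold a guaranteed loss:

```python
# Toy Dutch book: quoted "probabilities" for A and not-A that sum to more
# than 1 let a bookie sell the agent $1-payout tickets on both events and
# pocket a riskless profit.
price_a, price_not_a = 0.6, 0.6      # incoherent: 0.6 + 0.6 > 1
for a_happens in (True, False):
    payout = 1.0                     # exactly one of the two tickets pays $1
    agent_net = payout - (price_a + price_not_a)
    assert agent_net < 0             # the agent loses 0.2 in both worlds
    print("A" if a_happens else "not-A", "agent net:", round(agent_net, 2))
```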
The posts are amazing, well connected and very detailed. I think one of your best insights was to condense these biases into the words of your Confessor:
"[human] rationalists learn to discuss an issue as thoroughly as possible before suggesting any solutions. For humans, solutions are sticky...We would not be able to search freely through the solution space, but would be helplessly attracted toward the 'current best' point, once we named it. Also, any endorsement whatever of a solution that has negative moral features, will cause a human to...
My current plan does still call for me to write a rationality book - at some point, and despite all delays - which means I have to decide what goes in the book, and what doesn't. Obviously the vast majority of my OB content can't go into the book, because there's so much of it.
So let me ask - what was the one thing you learned from my posts on Overcoming Bias, that stands out as most important in your mind? If you like, you can also list your numbers 2 and 3, but it will be understood that any upvotes on the comment are just agreeing with the #1, not the others. If it was striking enough that you remember the exact post where you "got it", include that information. If you think the most important thing is for me to rewrite a post from Robin Hanson or another contributor, go ahead and say so. To avoid recency effects, you might want to take a quick glance at this list of all my OB posts before naming anything from just the last month - on the other hand, if you can't remember it even after a year, then it's probably not the most important thing.
Please also distinguish this question from "What was the most frequently useful thing you learned, and how did you use it?" and "What one thing has to go into the book that would (actually) make you buy a copy of that book for someone else you know?" I'll ask those on Saturday and Sunday.
PS: Do please think of your answer before you read the others' comments, of course.