Epistle to the New York Less Wrongians
(At the suggestion and request of Tom McCabe, I'm posting the essay that I sent to the New York LW group after my first visit there, and before the second visit:)
Having some kind of global rationalist community come into existence seems like a quite extremely good idea. The NYLW group is the forerunner of that, the first group of LW-style rationalists to form a real community, and to confront the challenges involved in staying on track while growing as a community.
"Stay on track toward what?" you ask, and my best shot at describing the vision is as follows:
"Through rationality we shall become awesome, and invent and test systematic methods for making people awesome, and plot to optimize everything in sight, and the more fun we have the more people will want to join us."
(That last part is something I only realized was Really Important after visiting New York.)
Extreme Rationality: It Could Be Great
Reply to: Extreme Rationality: It's Not That Great
I considered making this into a comment on Yvain's last post, but I'd like to redirect the discussion slightly. Yvain's warning is important, but we're left with the question of how to turn the current state of the art in rationality into something great. I think we are all on the same page that more is possible. Now we just need to know how to get there.
Even though Yvain disapproved of Eliezer's recent post on day jobs, I thought the two shared a common thread: rationalists should be careful about staying in Far-mode too long. I took Eliezer's point to be more about well-developed rationalist communities, and Yvain's to be about our rag-tag band of aspirants, but I think they are both speaking to the same issue. All of this has to be for a purpose, and we can't become ungrounded.
Near- and Far-mode have to be balanced. This shouldn't be surprising, because in this context, Near and Far roughly equate to applied and theoretical work. The two intermingle and build off one another. The history of math and physics is filled with paired problems: calculus and dynamics, Fourier series and heat distribution, least-squares and astronomy, etc. Real world problems need theory to be solved, but theory needs problems to motivate and test it.
My guess is that any large subject develops through the following iterative alternation between Near and Far:
F1. Develop general theory.
F2. Refine and check for consistency and correctness.
F3. Consolidate theory.
N1. Apply existing theory to problems.
N2. Evaluate successes and failures.
GOTO F1.
Rational Groups Kick Ass
Reply to: Extreme Rationality: It's Not That Great
Belaboring of: Rational Me Or We?
Related to: A Sense That More Is Possible
The success of Yvain's post threw me off completely. My experience has been opposite to what he describes: x-rationality, which I've been working on since the mid-to-late nineties, has been centrally important to successes I've had in business and family life. Yet the LessWrong community, which I greatly respect, broadly endorsed Yvain's argument that:
There seems to me to be approximately zero empirical evidence that x-rationality has a large effect on your practical success, and some anecdotal empirical evidence against it.
So that left me pondering what's different in my experience. I've been working on these things longer than most, and am more skilled than many, but that seemed unlikely to be the key.
The difference, I now think, is that I've been lucky enough to spend huge amounts of time in deeply rationalist organizations and groups--the companies I've worked at, my marriage, my circle of friends.
And rational groups kick ass.
An individual can unpack free will or figure out that the Copenhagen interpretation is nonsense. But I agree with Yvain that in a lonely rationalist's individual life, the extra oomph of x-rationality may well be drowned in the noise of all the other factors of success and failure.
But groups! Groups magnify the importance of rational thinking tremendously:
- Whereas a rational individual is still limited by her individual intelligence, creativity, and charisma, a rational group can promote the single best idea, leader, or method out of hundreds or thousands or millions.
- Groups have powerful feedback loops; small dysfunctions can grow into disaster by repeated reflection, and small positives can cascade into massive success.
- In a particularly powerful feedback process, groups can select for and promote exceptional members.
- Groups can establish rules/norms/patterns that 1) directly improve members and 2) counteract members' weaknesses.
- Groups often operate in spaces where small differences are crucial. Companies with slightly better risk management are currently preparing to dominate the financial space. Countries with slightly more rational systems have generated the 0.5% of extra annual growth that leads, over centuries, to dramatically improved ways of life. Even in family life, a bit more rationality can easily be the difference between gradual divergence and gradual convergence.
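The compounding claim in that last bullet is easy to check numerically. A quick sketch (the 0.5% edge is the figure from the text; the horizons are illustrative):

```python
# Compound effect of a small edge in annual growth rate.
def compound(rate, years):
    """Total growth factor after `years` of constant annual `rate`."""
    return (1 + rate) ** years

# An extra 0.5% per year, held over long horizons:
for years in (100, 200, 500):
    print(years, round(compound(0.005, years), 2))
```

Half a percent a year is invisible in any single decade, but it multiplies wealth roughly 2.7x over two centuries and more than 12x over five, which is the "dramatically improved ways of life" the bullet is pointing at.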
And we're not even talking about the extra power of x-rationality. Imagine a couple that truly understood Aumann, a company that grokked the Planning Fallacy, a polity that consistently tried Pulling the Rope Sideways.
When it comes to groups--sized from two to a billion--Yvain couldn't be more wrong.
Go Forth and Create the Art!
Previously in series: Well-Kept Gardens Die By Pacifism
Followup to: My Way
I have said a thing or two about rationality, these past months. I have said a thing or two about how to untangle questions that have become confused, and how to tell the difference between real reasoning and fake reasoning, and the will to become stronger that leads you to try before you flee; I have said something about doing the impossible.
And these are all techniques that I developed in the course of my own projects—which is why there is so much about cognitive reductionism, say—and it is possible that your mileage may vary in trying to apply them yourself. Still, those wandering about asking "But what good is it?" might consider rereading some of the earlier posts; knowing about e.g. the conjunction fallacy and how to spot it in an argument, hardly seems esoteric. Understanding why motivated skepticism is bad for you can constitute the whole difference, I suspect, between a smart person who ends up smart and a smart person who ends up stupid. Affective death spirals consume many among the unwary...
Yet there is, I think, more absent than present in this "art of rationality"—defeating akrasia and coordinating groups are two of the deficits I feel most keenly. I've concentrated more heavily on epistemic rationality than instrumental rationality, in general. And then there's training, teaching, verification, and becoming a proper experimental science based on that. And if you generalize a bit further, then building the Art could also be taken to include issues like developing better introductory literature, developing better slogans for public relations, establishing common cause with other Enlightenment subtasks, analyzing and addressing the gender imbalance problem...
But those small pieces of rationality that I've set out... I hope... just maybe...
I suspect—you could even call it a guess—that there is a barrier to getting started, in this matter of rationality. Where by default, in the beginning, you don't have enough to build on. Indeed so little that you don't have a clue that more exists, that there is an Art to be found. And if you do begin to sense that more is possible—then you may just instantaneously go wrong. As David Stove observes—I'm not going to link it, because it deserves its own post—most "great thinkers" in philosophy, e.g. Hegel, are properly objects of pity. That's what happens by default to anyone who sets out to develop the art of thinking; they develop fake answers.
Eliezer Yudkowsky Facts
- Eliezer Yudkowsky was once attacked by a Moebius strip. He beat it to death with the other side.
- Inside Eliezer Yudkowsky's pineal gland is not an immortal soul, but another brain.
- Eliezer Yudkowsky's favorite food is printouts of Rice's theorem.
- Eliezer Yudkowsky's favorite fighting technique is a roundhouse dustspeck to the face.
- Eliezer Yudkowsky once brought peace to the Middle East from inside a freight container, through a straw.
- Eliezer Yudkowsky once held up a sheet of paper and said, "A blank map does not correspond to a blank territory". It was thus that the universe was created.
- If you dial Chaitin's Omega, you get Eliezer Yudkowsky on the phone.
- Unless otherwise specified, Eliezer Yudkowsky knows everything that he isn't telling you.
- Somewhere deep in the microtubules inside an out-of-the-way neuron somewhere in the basal ganglia of Eliezer Yudkowsky's brain, there is a little XML tag that says awesome.
- Eliezer Yudkowsky is the Muhammad Ali of one-boxing.
- Eliezer Yudkowsky is a 1400-year-old avatar of the Aztec god Aixitl.
- The game of "Go" was abbreviated from "Go Home, For You Cannot Defeat Eliezer Yudkowsky".
- When Eliezer Yudkowsky gets bored, he pinches his mouth shut at the 1/3 and 2/3 points and pretends to be a General Systems Vehicle holding a conversation among itselves. On several occasions he has managed to fool bystanders.
- Eliezer Yudkowsky has a swiss army knife that has folded into it a corkscrew, a pair of scissors, an instance of AIXI which Eliezer once beat at tic tac toe, an identical swiss army knife, and Douglas Hofstadter.
- If I am ignorant about a phenomenon, that is not a fact about the phenomenon; it just means I am not Eliezer Yudkowsky.
- Eliezer Yudkowsky has no need for induction or deduction. He has perfected the undiluted master art of duction.
- There was no ice age. Eliezer Yudkowsky just persuaded the planet to sign up for cryonics.
- There is no spacetime symmetry. Eliezer Yudkowsky just sometimes holds the territory upside down, and he doesn't care.
- Eliezer Yudkowsky has no need for doctors. He has implemented a Universal Curing Machine in a system made out of five marbles, three pieces of plastic, and some of MacGyver's fingernail clippings.
- Before Bruce Schneier goes to sleep, he scans his computer for uploaded copies of Eliezer Yudkowsky.
If you know more Eliezer Yudkowsky facts, post them in the comments.
Spock's Dirty Little Secret
Related on OB: Priming and Contamination
Related on LW: When Truth Isn't Enough
When I was a kid, I wanted to be like Mr. Spock on Star Trek. He was smart, he could kick ass, and he usually saved the day while Kirk was too busy pontificating or womanizing.
And since Spock loved logic, I tried to learn something about it myself. But by the time I was 13 or 14, grasping the basics of boolean algebra (from borrowed computer science textbooks), and propositional logic (through a game of "Wff'n'Proof" I picked up at a garage sale), I began to get a little dissatisfied with it.
Spock had made it seem like logic was some sort of "formidable" thing, with which you could do all kinds of awesomeness. But real logic didn't seem to work the same way.
I mean, sure, it was neat that you could apply all these algebraic transforms and dissect things in interesting ways, but none of it seemed to go anywhere.
Logic didn't say, "thou shalt perform this sequence of transformations and thereby produce an Answer". Instead, it said something more like, "do whatever you want, as long as it's well-formed"... and left the very real question of what it was you wanted, as an exercise for the logician.
And it was at that point that I realized something that Spock hadn't mentioned (yet): that logic was only the beginning of wisdom, not the end.
Of course, I didn't phrase it exactly that way myself... but I did see that logic could only be used to check things... not to generate them. The ideas to be checked, still had to come from somewhere.
But where?
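That checking role can be made concrete: a truth-table checker will happily verify a propositional tautology, but nothing in it suggests which formula to try in the first place. A minimal sketch (the function names and the example formula are my own illustration):

```python
from itertools import product

def is_tautology(formula, num_vars):
    """Check a propositional formula (given as a Python boolean
    function) against every truth assignment of its variables.
    This is logic-as-checker: it verifies, it does not generate."""
    return all(formula(*values)
               for values in product([False, True], repeat=num_vars))

# Peirce's law, ((p -> q) -> p) -> p, encoding "a -> b" as (not a) or b:
peirce = lambda p, q: (not ((not ((not p) or q)) or p)) or p

print(is_tautology(peirce, 2))           # a tautology: checks out
print(is_tautology(lambda p, q: p or q, 2))  # not one: fails some row
```

The checker confirms Peirce's law and rejects `p or q`, but someone still had to supply both formulas—the very gap the question above is pointing at.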
When I was 17, in college philosophy class, I learned another limitation of logic: or more precisely, of the brains with which we do logic.