Reply to: Extreme Rationality: It's Not That Great

I considered making this into a comment on Yvain's last post, but I'd like to redirect the discussion slightly. Yvain's warning is important, but we're left with the question of how to turn the current state of the art in rationality into something great. I think we are all on the same page that more is possible. Now we just need to know how to get there.

Even though Yvain disapproved of Eliezer's recent post on day jobs, I thought the two shared a common thread: rationalists should be careful about staying in Far-mode too long. I took Eliezer's point to be more about well-developed rationalist communities, and Yvain's to be about our rag-tag band of aspirants, but I think they are both speaking to the same issue. All of this has to be for a purpose,  and we can't become ungrounded.

Near- and Far-mode have to be balanced. This shouldn't be surprising, because in this context, Near and Far roughly equate to applied and theoretical work. The two intermingle and build off one another. The history of math and physics is filled with paired problems: calculus and dynamics, Fourier series and heat distribution, least-squares and astronomy, etc. Real world problems need theory to be solved, but theory needs problems to motivate and test it.

My guess is that any large subject develops through the following iterative alternation between Near and Far:

    F1. Develop general theory.
    F2. Refine and check for consistency and correctness.
    F3. Consolidate theory.
    N1. Apply existing theory to problems.
    N2. Evaluate successes and failures.
    GOTO F1.
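The steps above can be sketched as a loop. This is a toy, self-contained illustration in Python; the function name, the representation of a "theory" as a set of solvable problems, and the failure-feedback rule are all my own illustrative assumptions, not anything from the post.

```python
def iterate_field(theory, problems, rounds=3):
    """Toy model of the Far/Near cycle: theory work, then application,
    with failures feeding back into the next round of theory."""
    history = []
    for _ in range(rounds):
        theory = sorted(set(theory))                         # F1-F3: develop, refine, consolidate
        solved = [p for p in problems if p in theory]        # N1: apply existing theory
        failures = [p for p in problems if p not in theory]  # N2: evaluate successes/failures
        history.append((len(solved), len(failures)))
        theory = theory + failures                           # GOTO F1: failures motivate new theory
    return theory, history
```

Run on a small example, the unsolved problem from round one becomes part of the theory by round two, which is roughly the dynamic the post describes for math and physics.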

This looks like a close relative of our trusty friend, the scientific method, and is similarly idealized. In terms of this process, I think the Less Wrong community is between F2 and F3. We have lots of phrases, techniques, and standard examples lying around, and work has been done on testing them for conceptual soundness. The wiki represents an attempt to begin consolidating this information so we can move on to more applied domains.

Assuming this process is productive, how long will it take to produce something useful? If Newton invented undergraduate material in math and physics, as is often quipped, I think existing x-rationality theory and techniques are on a junior-high level, at best. I'm not surprised x-rationality hasn't produced clear benefits yet. The commonly agreed upon rule of thumb is that it takes about 10 years or 10,000 hours of practice to become an expert in a subject. X-rationality as a subject is around 30 years old, and OB was only founded in 2006. Most of the current experts should be coming from fields like psychology, game theory, logic, physics, economics, or AI, where the 10,000 hours were acquired indirectly over a career. I think rationality theory will count as a success once someone can acquire PhD-level expertise in rationality by age 25 or 30, as in other subjects, and can spend a career on these topics alone.
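For a sense of what the 10-year/10,000-hour figure implies as a daily commitment, here is the back-of-the-envelope arithmetic; the totals are the post's, the breakdown is mine.

```python
# The rule-of-thumb numbers from the post.
TOTAL_HOURS = 10_000
YEARS = 10

hours_per_year = TOTAL_HOURS / YEARS   # 1000 hours of practice per year
hours_per_day = hours_per_year / 365   # roughly 2.7 hours per day, every day
```

That is a near-full-time side pursuit, which is why fields like psychology or AI, where the hours accrue as part of a career, currently supply most of the expertise.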

I'd also like to reemphasize the comments of pjeby and hegemonicon, in conjunction with Yvain, on consciously using x-rationality. I know I need to do more work on integrating OB concepts into my everyday life. I don't think merely reading the material referenced in OB is going to produce many visible benefits; I'd bet those concepts have to come naturally before anything really useful can be learned, much less created. For example, if someone has to consciously think about what the Cartesian plane represents or what a function is, they are going to have a difficult time learning calculus.

I don't think the current lack of success is cause for too much worry. This is a long-term project, and I'd be suspicious if breakthroughs came too easily. As long as this community stays grounded, and can move between theory and application, I remain hopeful.

Is my assessment of x-rationality's long term prospects correct? How does my vision accord with everyone else's?


One of the possibilities moving me to do this earlier rather than later (though it's already pretty late, in a sense) is wondering whether you've got to read all this material at age 15 and then grow up knowing it in order for a true rationalist to be born within you. Or if, at any rate, a majority of the producible rationalists would be produced by that method. So it's possible that only a few people pick this up now, and then in five years I start seeing master rationalists coming out of the woodwork. That would be awesome.

(It is never far from my mind that I grew up knowing about lay transhumanism since age 11 and evolutionary psychology since 15. There are some things, like languages, that can never be real until a child grows up knowing them.)

There are some things, like languages, that can never be real until a child grows up knowing them.

I think it should be easy to verify that this isn't really true (or at least the statement needs qualification). I don't feel handicapped in perceiving English at all, even though I knew almost no English before I was about 18. Now, I prefer thinking and writing in English. I have no reason to believe my experience is atypical.

I think Eliezer had the Creole-Pidgin phenomenon in mind with the language comment, but even ignoring that: if you didn't start learning English until you were 18, you almost certainly have an accent, and you always will; if you are a concert pianist, you almost certainly started as a child; if you are a world-class chess grandmaster, you almost certainly started as a child.

Then the questions we should ask ourselves are:

In the rare exceptions, is there anything different about the people involved?
What similarities, if any, exist between people who were the rare exceptions?
Are there methods for becoming the rare exceptions in each case?
Are those methods generalizable outside of the specific context?

I'm not sure these fine procedural details are that important in crafts that don't feature such attention to detail and clarity of standards; and such clarity certainly doesn't describe the current state of the art in rationality, not by a long shot.

I don't feel handicapped in perceiving English at all, even though I knew almost nothing before I was about 18.

How many languages did you already know? I have a suspicion that the ease of learning an n-th language at the age of x increases with n when keeping x constant.

Synthetic languages don't turn into real languages until a child grows up knowing them; English is real because children have already grown up knowing it. See creole language.

Esperanto is a real language, despite the fact that only a small fraction of its speakers grow up learning it (and it would be just as real even without those individuals).

I see; this clearly required a qualification.

I don't think there is enough quintessential knowledge yet to make something native of it; we'd better work on a healthy synthetic community process for now.

Are there non-language examples of this? Did it do any good for educational practices, trying to pass the material through a generation of children learning it?

Can I suggest a moratorium on the use of the phrase "Art of rationality"? There are some serious language issues among this community that I believe may be clotting people's thought processes. The above post talks about developing "x-rationality" almost as something entirely parallel to science and mathematics, an entirely new field with limitless horizons. This might make sense if we conceive of rationality as the sort of thing one can possibly develop an art or science of. But I'm not sure this isn't a category error, much like the phrase "the Art of breathing" or the "Science of walking" would be. Has anyone shown that rationality is the sort of thing we can successively build upon, generation after generation? (Note that it's very important here to distinguish between advances in formalizations of rationality and advances in rationality itself.) We should iron these conceptual issues out, or at the very least, minimize the rhetorical flourishes for a bit.

TAG

    F1. Develop general theory.
    F2. Refine and check for consistency and correctness.
    F3. Consolidate theory.
    N1. Apply existing theory to problems.
    N2. Evaluate successes and failures.
    GOTO F1.

This looks like a close relative of our trusty friend, the scientific method, and is similarly idealized. In terms of this process, I think the Less Wrong community is between F2 and F3.

Note that the steps above include loops and backtracking. In a fairly bad scenario, the Less Wrong community is between F2 and F3; in a worse one, it has had some failures and needs to abandon some approaches.

[anonymous]

    N0. Encounter a problem intriguing enough to warrant explicit theory.
    F0. Develop a subgeneral/tentative theory.