Previously in series: Whining-Based Communities
"But there is a reason why many of my students have achieved great things; and by that I do not mean high rank in the Bayesian Conspiracy. I expected much of them, and they came to expect much of themselves." —Jeffreyssai
Among the failure modes of martial arts dojos, I suspect, is that a sufficiently dedicated martial arts student will dream of...
...becoming a teacher and having their own martial arts dojo someday.
To see what's wrong with this, imagine going to a class on literary criticism, falling in love with it, and dreaming of someday becoming a famous literary critic just like your professor, but never actually writing anything. Writers tend to look down on literary critics' understanding of the art form itself, for just this reason. (Orson Scott Card uses the analogy of a wine critic who listens to a wine-taster saying "This wine has a great bouquet", and goes off to tell their students, "You've got to make sure your wine has a great bouquet". When the student asks, "How? Does it have anything to do with grapes?" the critic replies disdainfully, "That's for grape-growers! I teach wine.")
Similarly, I propose, no student of rationality should study with the purpose of becoming a rationality instructor in turn. You do that on Sundays, or full-time after you retire.
And to place a go stone blocking this failure mode, I propose a requirement that all rationality instructors must have secret identities. They must have a life outside the Bayesian Conspiracy, which would be worthy of respect even if they were not rationality instructors. And to enforce this, I suggest the rule:
Rationality_Respect1(Instructor) = min(Rationality_Respect0(Instructor), Non_Rationality_Respect0(Instructor))
That is, you can't respect someone as a rationality instructor more than you would respect them if they were not a rationality instructor.
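To make the min() rule concrete, here is a minimal sketch in Python; the function name and the toy numbers are my own illustration, not anything canonical:

    def bounded_rationality_respect(rationality_respect, non_rationality_respect):
        # Respect for someone as a rationality instructor is capped by the
        # respect they would command if they were not one.
        return min(rationality_respect, non_rationality_respect)

    # Toy numbers, purely illustrative:
    # Strong explicit rationality AND an impressive life outside the dojo:
    print(bounded_rationality_respect(0.9, 0.8))  # -> 0.8, capped by the secret identity
    # Famous outside the dojo, but weak explicit rationality:
    print(bounded_rationality_respect(0.3, 1.0))  # -> 0.3, the cap never binds

Note that the rule only ever lowers the first number; an impressive secret identity cannot raise your respect for someone's rationality above what their rationality itself has earned.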
Some notes:
• This doesn't set Rationality_Respect1 equal to Non_Rationality_Respect0. It establishes an upper bound. This doesn't mean you can find random awesome people and expect them to be able to teach you. Explicit, abstract, cross-domain understanding of rationality, and the ability to teach it to others, together form an additional discipline on top of domain-specific life success. Newton was a Christian etcetera. I'd rather hear what Laplace had to say about rationality—Laplace wasn't as famous as Newton, but Laplace was a great mathematician, physicist, and astronomer in his own right, and he was the one who said "I have no need of that hypothesis" (when Napoleon asked why Laplace's works on celestial mechanics did not mention God). So I would respect Laplace as a rationality instructor well above Newton, by the min() function given above.
• We should be generous about what counts as a secret identity outside the Bayesian Conspiracy. If it's something that outsiders do in fact see as impressive, then it's "outside" regardless of how much Bayesian content is in the job. An experimental psychologist who writes good papers on heuristics and biases, a successful trader who uses Bayesian algorithms, the author of a popular book on atheism that sells well to general audiences—all of these have worthy secret identities. None of this contradicts the spirit of being good at something besides rationality—no, not even the last, because writing books that sell is a further difficult skill! At the same time, you don't want to be too lax and start respecting the instructor's ability to put up probability-theory equations on the blackboard—it has to be visibly outside the walls of the dojo, not something that could be systematized within the Conspiracy as a token requirement.
• Apart from this, I shall not try to specify what exactly is worthy of respect. A creative mind may have good reason to depart from any criterion I care to describe. I'll just stick with the idea that "Nice rationality instructor" should be bounded above by "Nice secret identity".
• But if the Bayesian Conspiracy is ever to populate itself with instructors, this criterion should not be too strict. A simple test to see whether you live inside an elite bubble is to ask yourself whether the percentage of PhD-bearers in your apparent world exceeds the 0.25% rate at which they are found in the general population. Being a math professor at a small university who has published a few original proofs, or a successful day trader who retired after five years to become an organic farmer, or a serial entrepreneur who lived through three failed startups before going back to a more ordinary job as a senior programmer—that's nothing to sneeze at. The vast majority of people go through their whole lives without being that interesting. Any of these three would have some tales to tell of real-world use, on Sundays at the small rationality dojo where they were instructors. What I'm trying to say here is: don't demand that everyone be Robin Hanson in their secret identity; that is setting the bar too high. Selective reporting makes fantastically high-achieving people seem far more common than they really are. So if you ask for your rationality instructor to be as interesting as the sort of people you read about in the newspapers—and a master rationalist on top of that—and a good teacher on top of that—then you're going to have to join one of three famous dojos in New York, or something. But you don't want to be too lax, either, and start respecting things that others wouldn't respect unless they were specially looking for reasons to praise the instructor. "Having a good secret identity" should require way more effort than anything that could become a token requirement.
Now I put to you: If the instructors all have real-world anecdotes to tell of using their knowledge, and all of the students know that the desirable career path can't just be to become a rationality instructor, doesn't that sound healthier?
Part of the sequence The Craft and the Community
Next post: "Beware of Other-Optimizing"
Previous post: "Whining-Based Communities"