Today's post, Class Project, was originally published on 31 May 2008. A summary (taken from the LW wiki):
From the world of Initiation Ceremony. Brennan and the others are faced with their midterm exams.
Discuss the post here (rather than in the comments to the original post).
This post is part of the Rerunning the Sequences series, where we'll be going through Eliezer Yudkowsky's old posts in order so that people who are interested can (re-)read and discuss them. The previous post was Einstein's Superpowers, and you can use the sequence_reruns tag or RSS feed to follow the rest of the series.
Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it here, posting the next day's sequence reruns post, or summarizing forthcoming articles on the wiki. Go here for more details, or to have meta discussions about the Rerunning the Sequences series.
There seems to be a serious misunderstanding here. (The current voting patterns are completely out of whack with what I expected.) I seem to have run into some inferential distance that I didn't realize existed. So let me try to be more detailed.
I would like to develop a social support structure for the-kind-of-people-who've-read-the-Sequences to pursue certain kinds of research outside of (existing) academia. Such a structure already exists, in the form of LW and SI, for some things (decision theory, and perhaps philosophy in general). I would like to see it extended to more things, including things that I happen to be interested in (but which aren't necessarily considered immediately world-saving by the SI crowd).
(Notice that I mentioned both SI and LW in the previous paragraph. These are different kinds of entities, and I mentioned them both for a reason: to indicate how broad the notion of "social support structure" that I have in mind here is.)
I thought it was conventional wisdom around here that certain kinds of productive intellectual work are not properly incentivized by standard academia, and that the latter systematically fails to teach certain important intellectual skills. This is, after all, kind of the whole point of the MWI sequence!
Frankly, I expected it to be obvious that we're not talking about anything as mundane as knowledge of Bayesian probability theory as a mathematical topic. Of course that isn't a secret, and "everyone" in standard science knows it. I'm talking about an ethos, a culture, where people talk the way they do in that story.
There is a difference, as LW readers well know, between understanding Bayesian probability theory as a mathematical tool, and "getting" the ethos of x-rationality.
No one is talking about "applying" some kind of Bayesian statistical method to an unsolved problem and hoping to magically get the right answer. Explicit probability theory need not enter into it at all. The thing that would be "applied" is the LW culture -- where you're actually allowed to try to understand things.
This is not intended as a rebellious status-grab. Let me repeat that: this is not a status-grab. For now, it is simply a fun project to work on. I am not laying claim to a magical aura of destiny. (As a matter of fact, the very idea that you need a certain amount of status before you're allowed to work on important problems is itself one of the pathological assumptions of Traditional Science that the LW culture is specifically set up to avoid.)
Now, as for why no one has done this already: well, besides the "why", there is also the "who", the "what", the "where", and the "when". Who would have thought to try it before, and under what circumstances? As far as I know, EY intended this story as a parable, not as a concrete plan of action. To him and his colleagues at SI, the only really important problem is Friendly AI, and that (directly or indirectly) is what he's been spending his time on; other forms of mathematical and scientific research are mostly viewed as shiny distractions that tempt smart people away from their real duty, which is to save the universe. (Yes, this is a caricature, but it's true-as-a-caricature.) I take a somewhat different view, which may be due to a slightly different utility function, but in any case -- I think there is much to be gained by exploring these alternative paths.
One problem with this approach is that existing academia has access to all kinds of useful lab equipment, up to and including the Large Hadron Collider. It would be very difficult for a group of enthusiasts to acquire that kind of equipment, and without it, it's hard to do any truly revolutionary research.