shminux comments on [SEQ RERUN] Class Project - Less Wrong

2 points | Post author: MinibearRex 22 May 2012 04:11AM




Comment author: komponisto 22 May 2012 10:33:39PM 5 points [-]

There seems to be a serious misunderstanding here. (The current voting patterns are completely out of whack with what I expected.) I seem to have run into some inferential distance that I didn't realize existed. So let me try to be more detailed.

I would like to develop a social support structure for the-kind-of-people-who've-read-the-Sequences to pursue certain kinds of research outside of (existing) academia. Such a structure already exists, in the form of LW and SI, for some things (decision theory, and perhaps philosophy in general). I would like to see it extended to more things, including things that I happen to be interested in (but which aren't necessarily considered immediately world-saving by the SI crowd).

(Notice that I mentioned both SI and LW in the previous paragraph. These are different kinds of entities, and I mentioned them both for a reason: to indicate how broad the notion of "social support structure" that I have in mind here is.)

I thought it was conventional wisdom around here that certain kinds of productive intellectual work are not properly incentivized by standard academia, and that the latter systematically fails to teach certain important intellectual skills. This is, after all, kind of the whole point of the MWI sequence!

Frankly, I expected it to be obvious that we're not talking about anything as mundane as knowledge of Bayesian probability theory as a mathematical topic. Of course that isn't a secret, and "everyone" in standard science knows it. I'm talking about an ethos, a culture, where people talk like they do in this story:

"Too slow! If Einstein were in this classroom now, rather than Earth of the negative first century, I would rap his knuckles! You will not try to do as well as Einstein! You will aspire to do BETTER than Einstein or you may as well not bother!"

"Assume, Brennan, that it takes five whole minutes to think an original thought, rather than learning it from someone else. Does even a major scientific problem require 5760 distinct insights?"

There is a difference, as LW readers well know, between understanding Bayesian probability theory as a mathematical tool, and "getting" the ethos of x-rationality.

No one is talking about "applying" some kind of Bayesian statistical method to an unsolved problem and hoping to magically get the right answer. Explicit probability theory need not enter into it at all. The thing that would be "applied" is the LW culture -- where you're actually allowed to try to understand things.

This is not intended as a rebellious status-grab. Let me repeat that: this is not a status-grab. For now, it is simply a fun project to work on. I am not laying claim to a magical aura of destiny. (As a matter of fact, the very idea that you have a certain amount of status before you're allowed to work on important problems is itself one of the pathological assumptions of Traditional Science that the LW culture is specifically set up to avoid.)

Now, as for why no one has done this already: well, besides the "why", there is also the "who", the "what", the "where", and the "when". Who would have thought to try it before, and under what circumstances? As far as I know, EY intended this story as a parable, not as a concrete plan of action. To him and his colleagues at SI, the only really important problem is Friendly AI, and that (directly or indirectly) is what he's been spending his time on; other forms of mathematical and scientific research are mostly viewed as shiny distractions that tempt smart people away from their real duty, which is to save the universe. (Yes, this is a caricature, but it's true-as-a-caricature.) I take a somewhat different view, which may be due to a slightly different utility function, but in any case -- I think there is much to be gained by exploring these alternative paths.

Comment author: shminux 22 May 2012 10:52:17PM 4 points [-]

I'm talking about an ethos, a culture, where people talk like they do in this story:

That is what I meant, too.

Now, as for why no one has done this already: well, besides the "why", there is also the "who", the "what", the "where", and the "when". Who would have thought to try it before, and under what circumstances?

Some of those who read and believed the Class Project post 4 years ago.

To him and his colleagues at SI, the only really important problem is Friendly AI, and that (directly or indirectly) is what he's been spending his time on

And, given that it takes only "five whole minutes to think an original thought", how many thousands of original thoughts should he have come up with in 4 years? How many Einstein-style breakthroughs should he have made by now? How many has he?
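[Editorial note: the rhetorical arithmetic above can be made concrete. A minimal back-of-envelope sketch, taking the story's five-minutes-per-thought figure at face value; the 8-hour working day is an assumption of this illustration, not a claim from the thread.]

```python
# Back-of-envelope: how many "original thoughts" fit into 4 years
# at the story's stated rate of one per five minutes?
MINUTES_PER_THOUGHT = 5      # figure quoted from the Class Project story
WORKING_HOURS_PER_DAY = 8    # assumption for illustration only
DAYS_PER_YEAR = 365
YEARS = 4

total_minutes = YEARS * DAYS_PER_YEAR * WORKING_HOURS_PER_DAY * 60
thoughts = total_minutes // MINUTES_PER_THOUGHT
print(thoughts)  # 140160
```

On these assumptions the story's rate implies on the order of a hundred thousand original thoughts in four years, which is the scale shminux's "how many thousands" question is gesturing at.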

Unless I misunderstand something, the Class Project post was a falsifiable model, and it has been falsified. Time to discard it?

Comment author: komponisto 22 May 2012 11:05:50PM 1 point [-]

Some of those who read and believed the Class Project post 4 years ago.

I read the post when it appeared 4 years ago, and I don't remember anyone saying "Hey, let's set up a community for people who've read Overcoming Bias to research quantum gravity!"

How many Einstein-style breakthroughs should [EY] have made by now? How many has he?

I don't really care to get into the usual argument about how much progress EY has made on FAI. As I've noted above, my own interests (for now) lie elsewhere.

Unless I misunderstand something, the Class Project post was a falsifiable model, and it has been falsified.

It was not intended as a prediction about his own research efforts over the next four years, as far as I know. Especially since his focus over that time has been on community-building rather than direct FAI research.

Comment author: shminux 22 May 2012 11:27:04PM *  3 points [-]

It was not intended as a prediction about his own research efforts over the next four years, as far as I know.

Yet it was, whether it was meant to be or not. Surely he would be the first one to apply this marvelous approach?

Especially since his focus over that time has been on community-building rather than direct FAI research.

This is a rationalization, and you know it. He stated several times that he neglected SI to concentrate on research.

However, leaving the FAI research alone, I am rooting for your success. I certainly agree that a collaboration of like-minded people has a much better chance of success than any of them on their own, Bayes or no Bayes.

That is, I would like to see a subcommunity of LW devoted to researching mathematical and scientific problems independently of the current formal academic structure.

Well, being both outside academia and not a complete novice in some fields of physics, I would love to get involved in something like that, learning the Bayesian way as I go. Whether there are others here in a similar position, I am not sure.

Comment author: [deleted] 22 May 2012 11:09:33PM 1 point [-]

Unless I misunderstand something, the Class Project post was a falsifiable model, and it has been falsified. Time to discard it?

It's a work of fiction, not a model.

Comment author: shminux 22 May 2012 11:30:16PM 4 points [-]

How about this: it was a falsifiable model disguised as a work of fiction?

Comment author: [deleted] 22 May 2012 11:39:27PM *  0 points [-]

The falsifiable model of human behavior lurking beneath the fiction here was expounded in To Spread Science, Keep It Secret. Trying to refute that model using details in the work of fiction created to illustrate it isn't sound.

EDIT: For what it's worth, this is also the same failure mode anti-Randists fall into when they try to criticize Objectivism after reading The Fountainhead and/or Atlas Shrugged. It's actually much cleaner to construct a criticism from her non-fiction materials, but then one would have to tolerate her non-fiction...

Comment author: shminux 23 May 2012 12:04:00AM 4 points [-]

The falsifiable model of human behavior lurking beneath the fiction here was expounded in To Spread Science, Keep It Secret. Trying to refute that model using details in the work of fiction created to illustrate it isn't sound.

I don't see anything there about the Bayesian way being much more productive than "Eld science".

Comment author: David_Gerard 23 May 2012 10:51:46AM 2 points [-]

komponisto appears to be treating it in this discussion as a model, and I would assume that's the context shminux is speaking in.