Vaniver comments on What I've learned from Less Wrong - Less Wrong

79 Post author: Louie 20 November 2010 12:47PM


Comment author: cousin_it 20 November 2010 06:17:16PM *  31 points

LW has helped me a lot. Not in matters of finding the truth; you can be a good researcher without reading LW, as the whole history of science shows. (More disturbingly, you can be a good researcher of QM stuff, read LW, disagree with Eliezer about MWI, have a good chance of being wrong, and not be crippled by that in the least! Huh? Wasn't it supposed to be all-important to have the right betting odds?) No; for me LW is mostly useful for noticing bullshit and cutting it away from my thoughts. When LW says someone's wrong, we may or may not be right; but when LW says someone's saying bullshit, we're probably right.

I believe that Eliezer has succeeded in creating, and communicating through the Sequences, a valuable technique for seeing through words to their meanings and trying to think correctly about those instead. When you do that, you inevitably notice how much of what you considered to be "meanings" is actually yay/boo reactions, or cached conclusions, or just fine mist that dissolves when you look at it closely. Normal folks think that the question about a tree falling in the forest is kinda useless; nerdy folks suppress their flinch reaction and get confused instead; extra nerdy folks know exactly why the question is useless. Normal folks don't let politics overtake their mind; concerned folks get into huge flamewars; but we know exactly why this is counterproductive. I liked reading Moldbug before LW. Now I find him... occasionally entertaining, I guess?

Better people than I are already turning this into a sort of martial art. Look at Yvain cutting down ten guys with one swoop, and then try to tell me LW isn't useful!

Comment author: Louie 21 November 2010 12:17:36AM 14 points

(More disturbingly, you can be a good researcher of QM stuff, read LW, disagree with Eliezer about MWI, have a good chance of being wrong, and not be crippled by that in the least! Huh? Wasn't it supposed to be all-important to have the right betting odds?)

Saying that "Having incorrect views isn't that crippling, look at Scott Aaronson!" is a bit like saying "Having muscular dystrophy isn't that crippling, look at Stephen Hawking!" It's hard to learn much by generalizing from the most brilliant, hardest-working, most diplomatically humble man in the world with a particular disability. I know they're both still human, but it's much harder to measure how much incorrect views hurt the most brilliant minds. Who would you measure them against to show how much they're under-performing their potential?

Incidentally, knowing Scott Aaronson, and watching that Blogging Heads video in particular was how I found out about SIAI and Less Wrong in the first place.

Comment author: cousin_it 21 November 2010 05:39:53AM *  10 points

How would Aaronson benefit from believing in MWI, over and above knowing that it's a valid interpretation?

Comment author: Louie 21 November 2010 01:08:13PM *  0 points

Upvoted. This is definitely the right question to ask here... thanks for reminding me.

I hesitate to speculate on what gaps exist in Scott Aaronson's knowledge. His command of QM and complexity theory greatly exceeds mine.

[...]

OK hesitation over. I will now proceed to impertinently speculate on possible gaps in Scott Aaronson's knowledge and their implications!

Assuming he still believes that collapse-postulate theories of QM are as plausible as Many Worlds, I could say that he might not appreciate the complexity penalty that collapse theories require... except Scott Aaronson is the Head Zookeeper of the Complexity Zoo! So he knows about complexity classes and calculating complexity of algorithms inside out. Perhaps this knowledge doesn't help him naturally calculate the informational complexity of the parts of scientific theories that are phrased in natural languages like English? I know my mind doesn't automatically do this, and it's not a habit that most people have. Another possibility is that perhaps it's not obvious to him that Occam's razor should apply this broadly? These would point to limitations in more fundamental layers of his scientific thinking ability, which could leave him with trouble distinguishing good new theories worth investigating from bad ones... or make forming compact representations of his own research findings more difficult. He would consequently discover less, more slowly, and describe what he discovers less well.
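The "complexity penalty" reasoning here can be made concrete with a toy Solomonoff-style prior, in which a theory's prior weight falls off as 2 to the minus its description length in bits. This is a sketch of my own, not anything from the thread, and the bit counts below are invented purely for illustration:

```python
def occam_prior(description_bits):
    """Toy Solomonoff-style prior: unnormalized weight 2**-K for a
    theory whose shortest description is K bits long."""
    return 2.0 ** (-description_bits)

# Two theories that fit all observations equally well have equal
# likelihoods, so their posterior odds equal their prior odds. If one
# theory needs an extra postulate costing, say, 20 bits to write down,
# it pays a 2**-20 (roughly one-in-a-million) prior penalty.
SHARED_LAWS_BITS = 100      # made-up size of the shared dynamical laws
EXTRA_POSTULATE_BITS = 20   # made-up cost of the extra postulate

odds = (occam_prior(SHARED_LAWS_BITS + EXTRA_POSTULATE_BITS)
        / occam_prior(SHARED_LAWS_BITS))
# The penalty depends only on the extra bits, not on the shared part.
```

Note the design of the toy: because the shared laws contribute the same factor to both weights, they cancel in the odds ratio, which is exactly why the argument can focus on the marginal cost of the extra postulate.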

OK... wild speculation complete!

My actual take has always been that he probably understands things correctly in QM but is just exceedingly well-mannered and diplomatic with his academic colleagues. Even if he felt Many Worlds was the sounder theory, he would probably avoid being a blowhard about it. He doesn't need to ruffle his buddies' feathers -- he has to work with these guys, go to conferences with them, and have his papers reviewed by them. Also, he may know it's pointless to push others toward a new interpretation if they don't see the fundamental reason why it's right to switch, and the arguments needed to convince them have inference chains too long to present in most venues.

Comment author: AnnaSalamon 22 November 2010 11:31:18AM *  7 points

Scott Aaronson is the Head Zookeeper of the Complexity Zoo! So he knows about complexity classes and calculating complexity of algorithms inside out. Perhaps this knowledge doesn't help him naturally calculate the informational complexity of the parts of scientific theories that are phrased in natural languages like English?

Just to be clear: there are two unrelated notions of "complexity" blurred together in the above comment. The Complexity Zoo covers computational complexity theory -- how an algorithm's run-time scales with the size of its input (which is what sorts problems into classes like P, EXPTIME, etc.).

Kolmogorov complexity is unrelated: it is the minimum number of bits (in some fixed universal programming language) required to represent a given algorithm. Eliezer's argument for MWI rests on Kolmogorov complexity and has nothing to do with computational complexity theory.
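As a toy illustration of the distinction (my own sketch, with invented function names; Python source length is only a crude stand-in for a real universal machine's encoding):

```python
# 1. Computational complexity: how run-time scales with the input size n.
#    Both functions compute 1 + 2 + ... + n, and so define the same
#    mathematical function, but the first takes O(n) steps, the second O(1).
def sum_linear(n):
    total = 0
    for i in range(1, n + 1):
        total += i
    return total

def sum_closed_form(n):
    return n * (n + 1) // 2

# 2. Kolmogorov (description) complexity: the length in bits of the
#    program text itself, regardless of how long it runs.
SRC_LINEAR = ("def f(n):\n    t = 0\n"
              "    for i in range(1, n + 1):\n        t += i\n    return t")
SRC_CLOSED = "def f(n): return n * (n + 1) // 2"

def description_bits(src):
    return len(src.encode("utf-8")) * 8
```

The two notions vary independently: here the closed form happens to be both shorter and faster, but a program can be short and slow (a brute-force search) or long and fast (a hand-optimized routine).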

I'm sure Scott Aaronson is familiar with both, of course; I just want to make sure LWers aren't confused about it.

Comment author: XiXiDu 22 November 2010 11:45:51AM *  0 points

Complexity is mentioned very often on LW, but is there really no post that works out the different notions?

Comment author: CarlShulman 22 November 2010 02:36:57PM *  3 points
Comment author: timtyler 23 November 2010 09:10:01PM 0 points