cousin_it comments on What I've learned from Less Wrong - Less Wrong

79 Post author: Louie 20 November 2010 12:47PM


Comment author: cousin_it 20 November 2010 06:17:16PM *  31 points [-]

LW has helped me a lot. Not in matters of finding the truth; you can be a good researcher without reading LW, as the whole history of science shows. (More disturbingly, you can be a good researcher of QM stuff, read LW, disagree with Eliezer about MWI, have a good chance of being wrong, and not be crippled by that in the least! Huh? Wasn't it supposed to be all-important to have the right betting odds?) No; for me LW is mostly useful for noticing bullshit and cutting it away from my thoughts. When LW says someone's wrong, we may or may not be right; but when LW says someone's saying bullshit, we're probably right.

I believe that Eliezer has succeeded in creating, and communicating through the Sequences, a valuable technique for seeing through words to their meanings and trying to think correctly about those instead. When you do that, you inevitably notice how much of what you considered to be "meanings" is actually yay/boo reactions, or cached conclusions, or just fine mist that dissolves when you look at it closely. Normal folks think that the question about a tree falling in the forest is kinda useless; nerdy folks suppress their flinch reaction and get confused instead; extra nerdy folks know exactly why the question is useless. Normal folks don't let politics overtake their mind; concerned folks get into huge flamewars; but we know exactly why this is counterproductive. I liked reading Moldbug before LW. Now I find him... occasionally entertaining, I guess?

Better people than I are already turning this into a sort of martial art. Look at Yvain cutting down ten guys with one swoop, and then try to tell me LW isn't useful!

Comment author: XiXiDu 20 November 2010 07:49:32PM *  3 points [-]

I wonder if the main reason a post like Yvain's is upvoted is not that it is great but that everyone who reads it instantly agrees. Of course it is great in the sense that it sums up the issue in a very clear and concise manner. But has it really changed your mind? It seems natural to me to think that way; the post states what I always thought but was never able to express that clearly, and that's why I like it. The problem is, how do we get people who disagree to read it? I recently introduced a neuroscientist to Less Wrong via that post. He read it and agreed with everything. Then he said it's naive to think that this will be adopted any time soon. What he meant is that all this wit is useless if we don't get the right people to digest it, not people like us, who probably agreed before ever reading the post in the first place.

Regarding Eliezer's post, I even have my doubts that it is very useful for confused nerdy folks. The gist of that post seems to be that people should pinpoint their disagreements before talking at cross-purposes. But it gives the impression that propositional assertions do not yield sensory experience. Yet human agents are physical systems, just as trees are: if you tell them certain things, you can expect certain reactions. I believe that article might be inconsistent with other assertions made in this community, like taking the logical implications of general beliefs seriously. The belief that the decimal expansion of Pi is infinite will never pay rent in future anticipations.

I'm also skeptical about another point in the original post, namely that most people's beliefs aren't worth considering. This, I believe, might be counterproductive. Consider that most people express this very attitude towards existential risks from artificial intelligence. So if you link people to that one post, out of context, and they then hear about the SIAI, what might they conclude if they take that post seriously?

The point about truth is another problematic idea. I really enjoyed The Simple Truth, but in light of all else I've come across I'm not convinced that truth is a useful term to adopt anywhere but in the most informal discussions. If you are like me and grew up in a religious environment, you are told that absolute truth exists. Then, if you have your doubts and start to learn more, you are told that skepticism is an epistemological position, while 'there is no truth' and 'there is truth' are metaphysical/linguistic positions. When you learn even more and come across concepts like the uncertainty principle, Gödel's incompleteness theorems, the halting problem, or Tarski's undefinability theorem, the nature of truth becomes even more uncertain. Digging even deeper won't revive the naive view of truth either. And that is just the tip of the iceberg, as you will see once you learn about Solomonoff induction and Minimum Message Length.

ETA: Fixed the formatting. My last paragraph was eaten before!

Comment author: wedrifid 20 November 2010 09:18:54PM 5 points [-]

It seems natural to me to think that way; the post states what I always thought but was never able to express that clearly, and that's why I like it.

The best essays will usually leave you with that impression. As will the best teachers.

Comment author: David_Gerard 20 November 2010 10:08:06PM *  12 points [-]

Be careful. So will the less-than-best essays and teachers. It's a form of hindsight bias: you think this thing is obvious, but your thoughts were actually quite inchoate before that. A meme - particularly a parasitic meme - can get itself a privileged position in your head by feeding your biases to make itself look good, e.g. your hindsight bias.

When you see a new idea and you feel your eyes light up, that’s the time to put it in a sandbox - yes, thinking a meme is brilliant is a bias to be cautious of. You need to know how to take the thing that gave you that "click!" feeling and evaluate it thoroughly and mercilessly.

(I'm working on a post or two on the subject area of dangerous memes and what to do about them.)

Comment author: bbleeker 24 November 2010 12:53:37PM *  3 points [-]

(I'm working on a post or two on the subject area of dangerous memes and what to do about them.)

I'm very interested in that, I think I need it. I just read this article about Mere Christianity by C. S. Lewis, and I was like "what the hell is wrong with me, that I didn't see at least some of those points myself?" It really scared me, and made me wonder what other nonsense I believe in, that I ought to have seen through right away...

Comment author: David_Gerard 24 November 2010 02:40:35PM 1 point [-]

The hard part with something like that is not how to question your ideas, but noticing that you have an idea that needs questioning. It's like reading Michael Behe's books on intelligent design and trying to understand the view inside his head: how a tenured biology professor could come up with arguments so obviously defective to everyone else and fail to notice the low quality of his own thinking.

Comment author: wedrifid 27 November 2010 01:01:41AM 3 points [-]

I'm very interested in that, I think I need it. I just read this article about Mere Christianity by C. S. Lewis, and I was like "what the hell is wrong with me, that I didn't see at least some of those points myself?"

The strength of C. S. Lewis's works seems to be that they were a whole lot less bad than the alternative sources of the same message.

Comment author: NancyLebovitz 24 November 2010 04:24:51PM 6 points [-]

It might be worth doing some analysis of the authoritative voice (the ability to sound right), and I speak as someone who's been a C. S. Lewis, G. K. Chesterton, Heinlein, Rand, and Spider Robinson fan. At this point, I suspect it's a pathology.

Comment author: David_Gerard 26 November 2010 08:26:32PM 4 points [-]

Dude. AN ASSERTION IS PROVEN BY SOUNDING GOOD. It's a form of the Steve Jobs reality distortion superpower: come up with a viewpoint so compelling it will reshape people's perception of the past as well as the present.

(I must note that I'm not actually advocating this.)

Argument by assertion amusement from my daughter: "I'm running around the kitchen, but I'm not being annoying by running around the kitchen." An argument by assertion of rich depth, particularly from a three-year-old.

Comment author: ciphergoth 27 November 2010 02:18:02PM 1 point [-]

Did you ever get around to reading either of the papers I linked you to there btw?

Comment author: David_Gerard 27 November 2010 05:00:04PM *  0 points [-]

Nuh. Still in the Pile(tm) with yer talk, which I have watched the first 5 min of ... I hate video so much.

Did you dislike your talk's content or your presentation? So far it looks like something that should be turned into a series of blog posts, complete with diagrams.

Comment author: ciphergoth 27 November 2010 05:08:17PM 0 points [-]

Neither really, it's the video itself I dislike. I've put the slides on Scribd, and I'm thinking of re-recording the soundtrack. Only trouble is, I'd have to watch the video first to remember what I said... and I hate video so much.

Comment author: Blueberry 28 March 2012 03:39:43AM 1 point [-]

This was over a year ago but I see that you're still around. I wanted to ask you more about this. How does Spider Robinson fit in with the others? I would also add Orwell, Kipling, and Christopher Hitchens. Maybe even Eliezer a bit.

A big part of it is that these authors talk a lot about truth and the harm of denying it, and rail against and strawman other groups for refusing to accept the truth, or even that truth exists.

What do you mean by a pathology? You think there was something wrong with those authors? Are you talking about overconfidence?

Comment author: NancyLebovitz 28 March 2012 03:58:49AM 1 point [-]

Spider Robinson is very definite and explicit about how things ought to be. Unfortunately, he extends this to the idea that people who are worth knowing like good jazz, Irish coffee, and puns.

I meant that there may be a pathology at my end-- being so fond of the authoritative voice that I could be a fan of writers with substantially incompatible ideas, and not exactly notice or care.

Comment author: Blueberry 28 March 2012 04:58:12AM *  1 point [-]

I suspect you may be reading his exaggerated enthusiasm for these things as a blanket statement about people who aren't worth knowing. For instance, I might, in a burst of excitement, say that people who don't like the song Waterfall aren't worth talking to, but I wouldn't mean it literally. It would be a figure of speech.

For instance, in one of the Callahan books he states (in the voice of the author, not as a character, IIRC) that if he had a large sum of money he'd buy everyone in the US a copy of "Running, Jumping, Standing Still" on CD because it would make the world so much better. I read this as hyperbole for how much he likes that CD, and I don't take it literally.

I may be misremembering or have missed something in his writing, though.

As far as you liking the voice, I doubt it's a pathology. I feel the same way you do and it's not surprising to me that a lot of people would find that kind of objectivity and confidence appealing. It is a bias, if you confuse the pleasure of reading those writers with their actual ideas, but since I vehemently disagree with most of the above writers I'm not too worried about it. (Do you still read or like those writers?)

Comment author: NancyLebovitz 28 March 2012 09:23:50AM *  1 point [-]

I recently started rereading Atlas Shrugged, and was having fun with it-- no matter what else, Rand created a world where interesting things happen. It was also interesting because some things have changed. Her bad guy rich people were bad because they were slack-- they weren't interested in running their businesses, they had barely enough energy to get government favors. The modern type who's energetically taking as much money as possible out of the business with the intent of going somewhere else is barely present.

I can't stand Robinson any more. The tone of "we're cooler than the mundanes" has revolted me to the point where even the milder earlier version gets on my nerves. It's possible that I should give Stardance another chance some time. It's also possible that the effects of Very Bad Dreams have faded. Robinson has a sadistic imagination.

Back when, I bought a copy of Running Jumping Standing Still when I happened to see it, and was annoyed to find that I liked it.

I reread "Magic, Inc." recently, and liked it very much. I haven't read much Lewis or Chesterton lately.

My concern about pathology is a suspicion that what I like is the comfort of being told what to think in a palatable way.

I obviously haven't completely lost my taste for didactic fiction.

Comment author: bbleeker 25 November 2010 03:11:55PM *  1 point [-]

Hm, I'm a fan of Heinlein too, I guess I'd better not start reading those others. ;p Any idea where I can look for clues about the 'authoritative voice'?

Comment author: Eliezer_Yudkowsky 28 March 2012 05:48:06AM 0 points [-]

That's odd. I've been a fan of Heinlein and Spider Robinson but never Rand or Lewis. Haven't tried Chesterton.

Comment author: Blueberry 28 March 2012 06:00:48AM 0 points [-]

You're actually the reason I started reading Spider Robinson.

Comment author: Vladimir_Nesov 23 November 2010 10:07:37PM 2 points [-]

Be careful. So will the less-than-best essays and teachers. It's a form of hindsight bias: you think this thing is obvious, but your thoughts were actually quite inchoate before that.

Given a clear explanation, it's more likely to be correct than secretly wrong. We don't live in a world dominated by true-sounding lies. Incorrect things should generally be more surprising than correct things, even if there are exceptions.

(It's confirmation bias, not hindsight bias. Hindsight bias is overestimation of prior probability upon observing a positive instance of an event.)

Comment author: wedrifid 20 November 2010 10:57:38PM *  9 points [-]

Be careful. So will the less-than-best essays and teachers.

Less often. Learning bullshit is more likely to come with the impression that you are gaining sophistication. If something is so banal as to be straightforward and reasonable you gain little status by knowing it.

Yes, people have biases and believe silly things but things seeming obvious is not a bad sign at all. I say evaluate mercilessly those things that feel deep and leave you feeling smug that you 'get it'. 'Clicking' is no guarantee of sanity but it is better than learning without clicking.

Comment author: David_Gerard 20 November 2010 11:46:23PM *  5 points [-]

Yes, I suspect I'm being over-cautious having been thinking about memetic toxic waste quite a lot of late. This suggests that when I'm describing the scary stuff in detail, I'll have to take care not to actually scare people out of both neophilia and decompartmentalisation.

That said, I recall the time I was out trolling the Scientologists and watched someone's face light up that way as she was being sold a copy of Dianetics and a communication course. She certainly seemed to be getting that feeling. Predatory memes - they're rare, but they exist.

Comment author: wedrifid 21 November 2010 01:32:19AM 3 points [-]

That said, I recall the time I was out trolling the Scientologists and watched someone's face light up that way as she was being sold a copy of Dianetics and a communication course. She certainly seemed to be getting that feeling. Predatory memes - they're rare, but they exist.

Scary indeed. I suspect what we are each 'vulnerable' to will vary quite a lot from person to person.

Comment author: David_Gerard 21 November 2010 01:48:38AM *  13 points [-]

Yes. I do think that a particularly dangerous attitude to memetic infections on the Scientology level is an incredulous "how could they be that stupid?" Because, of course, it contains an implicit "I could never be that stupid" and "poor victim, I am of course far more rational". This just means your mind - in the context of being a general-purpose operating system that runs memes - does not have that particular vulnerability.

I suspect you will have a different vulnerability. It is not possible to completely analyse the safety of an arbitrary incoming meme before running it as root; and there isn't any such thing as a perfect sandbox to test it in. Even for a theoretically immaculate perfectly spherical rationalist of uniform density, this may be equivalent to the halting problem.

My message is: it can happen to you, and thinking it can't is more dangerous than nothing. Here are some defences against the dark arts.

[That's the thing I'm working on. Thankfully, the commonest delusion seems to be "it can't happen to me", so merely scaring people out of that will considerably decrease their vulnerability and remind them to think about their thinking.]

This sort of thing makes me hope that the friendly AI designers are thinking like OpenBSD-level security researchers. And frankly, they need Bruce Schneier and Ed Felten and Dan Bernstein and Theo de Raadt on the job. We can't design a program not to have bugs - just not to have ones that we know about. As a subset of that, we can't design a constructed intelligence not to have cognitive biases - just not to have ones that we know about. And predatory memes evolve, rather than being designed from scratch. I'd just like you to picture a superintelligent AI catching the superintelligent equivalent of Scientology.

Comment author: wedrifid 21 November 2010 10:18:20AM 6 points [-]

My message is: it can happen to you, and thinking it can't is more dangerous than nothing.

With the balancing message: some people are a lot less vulnerable to believing bullshit than others. Many on Less Wrong have brains biased, relative to the population, towards devoting resources to bullshit prevention at the expense of engaging in optimal signalling. For these people, actively focusing on second-guessing themselves is a dangerous waste of time and effort.

Sometimes you are just more rational, and pretending that you are not is humble but neither rational nor practical.

Comment author: David_Gerard 21 November 2010 11:02:09AM *  1 point [-]

I can see that I've failed to convince you and I need to do better.

In my experience, the sort of thing you've written is a longer version of "It can't happen to me, I'm far too smart for that", and a quite typical reaction to the notion that you, yes you, might have security holes. I don't expect you to like that, but there it is.

You really aren't running OpenBSD with those less rational people running Windows.

I do think being able to make such statements of confidence in one's immunity takes more detailed domain knowledge. Perhaps you are more immune and have knowledge and experience - but that isn't what you said.

I am curious as to the specific basis you have for considering yourself more immune. Not just "I am more rational", but something that's actually put it to a test?

Put it this way, I have knowledge and experience of this stuff and I bother second-guessing myself.

(I can see that this bit is going to have to address the standard objection more.)

Comment author: CronoDAS 21 November 2010 05:11:32AM 2 points [-]

Regarding Scientology, I had the impression that they usually portray themselves to those they're trying to recruit as being like a self-help community ("we're like therapists or Tony Robbins, except that our techniques actually work!") before they start sucking you into the crazy?

Comment author: wedrifid 21 November 2010 10:12:37AM 1 point [-]

Wait... did you just use Tony Robbins as the alternative to being sucked into the crazy?

Comment author: Vladimir_Nesov 20 November 2010 07:59:10PM *  19 points [-]

I wonder if the main reason a post like Yvain's is upvoted is not that it is great but that everyone who reads it instantly agrees. Of course it is great in the sense that it sums up the issue in a very clear and concise manner. But has it really changed your mind?

That's how great arguments work: you agree with every step (and after a while you start believing things you didn't originally). The progress is made by putting such arguments into words, to be followed by other people faster and more reliably than they were arrived at, even if arriving at them is in some contexts almost inevitable.

Additionally, clarity offered by a carefully thought-through exposition isn't something to expect without a targeted effort. This clarity can well serve as the enabling factor for making the next step.

Comment author: patrissimo 15 December 2010 05:03:44AM 4 points [-]

"That's how great arguments work: you agree with every step (and after a while you start believing things you didn't originally)."

Also how great propaganda works.

If you are going to describe a "great argument" I think you need to put more emphasis on it being tied to the truth rather than being agreeable. I would say truly great arguments tend not to be agreeable, b/c the real world is so complex that descriptions without lots of nuance and caveats are pretty much always wrong. Whereas simplicity is highly appealing and has a low cognitive processing cost.

Comment author: shokwave 15 December 2010 06:54:04AM 2 points [-]

put more emphasis on it being tied to the truth rather than being agreeable.

Oh. I only agree with argument steps that are truthful.

Comment author: [deleted] 21 November 2010 01:39:03PM 4 points [-]

That's how great arguments work: you agree with every step (and after a while you start believing things you didn't originally).

There are nevertheless also conclusions that you agreed with all along. Sometimes hindsight bias makes you think you agreed all along when you really didn't. But other times you genuinely agreed all along.

You can skip to the end of Yvain's post (the one referenced here) and read the summary - assuming you haven't read the post already. Specifically, this statement: "We should blame and stigmatize people for conditions where blame and stigma are the most useful methods for curing or preventing the condition, and we should allow patients to seek treatment whenever it is available and effective." If you agree with this statement without first reading Yvain's argument for it, then that's evidence that you already agreed with Yvain's conclusions without needing to be led gradually step by step through his long argument.

Comment author: shokwave 21 November 2010 09:59:06AM 6 points [-]

That's how great arguments work: you agree with every step (and after a while you start believing things you didn't originally).

And to avoid people giving in to their motivated cognition, you present the steps in order, and the conclusion at the end. To paraphrase Yudkowsky's explanation of Bayes Theorem:

By this point, the conclusion may seem blatantly obvious or even tautological, rather than exciting and new. If so, this argument has entirely succeeded in its purpose.

This method of presenting great arguments is probably the most important thing I learned from philosophy, incidentally.

Comment author: Vladimir_M 21 November 2010 09:21:03AM *  25 points [-]

cousin_it:

Normal folks don't let politics overtake their mind; concerned folks get into huge flamewars; but we know exactly why this is counterproductive.

Trouble is, the question still remains open: how to understand politics so that you're reasonably sure you've grasped its implications for your personal life and destiny well enough? Too often, LW participants seem to me like they take it for granted that throughout the Western world, something resembling the modern U.S. regime will continue into the indefinite future, right up until a technological singularity kicks in. But this seems to me like a completely unwarranted assumption, and if it turns out to be false, then the ability to understand where the present political system is heading and plan for the consequences will be a highly valuable intellectual asset -- something that a self-proclaimed "rationalist" should definitely take into account.

Now, for full disclosure, there are many reasons why I could be biased about this. I lived through a time and place -- late 1980s and early 1990s in ex-Yugoslavia -- where most people were blissfully unaware of the storm that was just beyond the horizon, even though any cool-headed objective observer should have been able to foresee it. My own life was very negatively affected by my family's inability to understand the situation before all hell broke loose. This has perhaps made me so paranoid that I'm unable to understand why the present political situation in the Western world is guaranteed to be so stable that I can safely forget about it. Yet I have yet to see arguments for this conclusion that would pass the standards LW people normally apply to other topics.

Comment author: cousin_it 21 November 2010 10:11:05PM *  1 point [-]

Your comment is an instance of the "forcing fallacy" which really deserves a post of its own: claiming that we should spend resources on a problem because a lot of utility depends, or could depend, on the answer. There are many examples of this on LW, but to choose an uncontroversial one from elsewhere: why aren't more physicists working on teleportation? The general counter to the pattern is noting that problems may be difficult, and may or may not have viable attacks right now, so we may be better off ignoring them after all. I don't see a viable attack for applying LW-style rationality to political prediction, do you?

Comment author: Vladimir_Nesov 21 November 2010 10:35:42PM *  4 points [-]

The general counter to the pattern is noting that problems may be difficult, and may or may not have viable attacks right now, so we may be better off ignoring them after all.

This is valid where there are experts who can confidently estimate that there are no attacks. There are lots of expert physicists, so if steps towards teleportation were feasible, someone would've noticed. In case there are no experts to produce such confidence, the correct course of action is to create them (perhaps from more general experts, by way of giving them a research focus).

The rule "If it's an important problem, and we haven't tried to understand it, we should" holds in any case, it's just that in case of teleportation, we already did try to understand what we presently can, as a side effect of widespread knowledge of physics.

Comment author: MichaelVassar 21 November 2010 05:42:06PM 14 points [-]

I agree with you on this, but honestly, it's a difficult enough topic that semi-specialists are needed. Trying as a non-specialist to figure out how stable your political system is, rather than trying to find a specialist you can trust, will get you about as far as it would in law, etc.

Comment author: wedrifid 30 November 2010 01:32:03AM 1 point [-]

Trickier than the 'how stable' question is that of what is likely to result from a failure. To the extent that such knowledge is missing, the problem of what to do about it gains faint hints reminiscent of Pascal's Mugging.

Comment author: NancyLebovitz 30 November 2010 12:22:44AM 0 points [-]

That sounds plausible, but should probably have a time frame added.

Comment author: [deleted] 21 November 2010 12:37:18PM 2 points [-]

Now, for full disclosure, there are many reasons why I could be biased about this.

With emphasis on "could be" as opposed to "am". Different past experiences leading to different conclusions isn't necessarily "bias". This is a bit of a pet peeve of mine. I often see the naive, the inexperienced, quite often the young, dismiss the views of the more experienced as "biased" or by some broad synonym.

The implicit reasoning seems to be as follows: "Here is the evidence. The evidence plus a uniform prior distribution leads to conclusion A. Yet this person sees the evidence and draws conclusion B different from A. Therefore he is letting his biases affect his judgment."

One problem with the reasoning is that "the evidence" is not the (only) evidence. There is, rather, "evidence I'm aware of" and "evidence I'm not aware of but the other person might be aware of". It's entirely possible for that other evidence to be decisive.

Comment author: CBHacking 07 December 2014 12:03:00PM 0 points [-]

This is one of the reasons I actually rather like the politics in Heinlein's writing; while it occasionally sounds preachy, and I routinely disagree with the implicit statement that the proposed system has higher utility than current ones, it does expose some really interesting ideas. This has led me to wonder, on occasion, about other potential government systems and to attempt to determine their utility compared to what we have.

Of course, I'm not really a student of political science and am therefore ill-equipped for this purpose, and I estimate insufficient utility in undertaking the scholarship needed to correct this (mostly due to opportunity cost; I am active in a field where I can contribute significant utility today, and it's more efficient to update and expand my knowledge there than to branch into a completely different field in any depth). Nonetheless, inefficient though it may be, it's an open question I find my mind wandering to on occasion.

The conclusion I've reached is that if the US government (as we currently recognize it) continues until the technological singularity, it will be because the singularity comes soon (requires within ~50 years at a low-confidence estimate; at 150 years I'm 90% confident the US government either won't exist or won't be recognizable). There are too many problems with the system; it wasn't optimized for the modern world, to the extent it was optimized at all, and of course the "modern world" keeps advancing too. The US has tried to keep up (universal adult suffrage, several major changes to how political parties are organized (nobody today seriously proposes a split ticket), the increasing authority of the federal government over the states, etc.) but such change is reactive and takes time. It will always lag behind the bleeding edge, and if it gets too far behind, the then-current institution will either be overthrown or will lose its significance and become something like the 21st century's serious implementations of the feudal system (rare, somewhat different from how it was a few hundred years back, and nonetheless mostly irrelevant).

Comment author: Louie 21 November 2010 12:17:36AM 14 points [-]

(More disturbingly, you can be a good researcher of QM stuff, read LW, disagree with Eliezer about MWI, have a good chance of being wrong, and not be crippled by that in the least! Huh? Wasn't it supposed to be all-important to have the right betting odds?)

Saying that "Having incorrect views isn't that crippling, look at Scott Aaronson!" is a bit like saying "Having muscular dystrophy isn't that crippling, look at Stephen Hawking!" It's hard to learn much by generalizing from the most brilliant, hardest working, most diplomatically-humble man in the world with a particular disability. I know they're both still human, but it's much harder to measure how much incorrect views hurt the most brilliant minds. Who would you measure them against to show how much they're under-performing their potential?

Incidentally, knowing Scott Aaronson, and watching that Blogging Heads video in particular was how I found out about SIAI and Less Wrong in the first place.

Comment author: cousin_it 21 November 2010 05:39:53AM *  10 points [-]

How would Aaronson benefit from believing in MWI, over and above knowing that it's a valid interpretation?

Comment author: Louie 21 November 2010 01:08:13PM *  0 points [-]

Upvoted. This is definitely the right question to ask here... thanks for reminding me.

I hesitate to speculate on what gaps exist in Scott Aaronson's knowledge. His command of QM and complexity theory greatly exceed mine.

[...]

OK hesitation over. I will now proceed to impertinently speculate on possible gaps in Scott Aaronson's knowledge and their implications!

Assuming he still believes that collapse-postulate theories of QM are as plausible as Many Worlds, I could say that he might not appreciate the complexity penalty that collapse theories require... except Scott Aaronson is the Head Zookeeper of the Complexity Zoo! So he knows about complexity classes and calculating the complexity of algorithms inside out. Perhaps this knowledge doesn't help him naturally calculate the informational complexity of the parts of scientific theories that are phrased in natural languages like English? I know my mind doesn't automatically do this, and it's not a habit most people have. Another possibility is that perhaps it's not obvious to him that Occam's razor should apply this broadly? These would point to limitations in more fundamental layers of his scientific thinking ability. This could leave him with trouble telling good new theories worth investigating apart from bad ones... or make forming compact representations of his own research findings more difficult. He would consequently discover less, more slowly, and describe what he discovers less well.

OK... wild speculation complete!

My actual take has always been that he probably understands things correctly in QM but is just exceedingly well-mannered and diplomatic with his academic colleagues. Even if he felt Many Worlds was now a more sound theory, he would probably avoid being a blow-hard about it. He doesn't need to ruffle his buddies' feathers -- he has to work with these guys, go to conferences with them, and have his papers reviewed by them. Also, he may know it's pointless to get others to switch to a new interpretation if they don't see the fundamental reason why it's right to switch. And the arguments needed to convince others have inference chains too long to present in most venues.

Comment author: AnnaSalamon 22 November 2010 11:31:18AM *  7 points [-]

Scott Aaronson is the Head Zookeeper of the Complexity Zoo! So he knows about complexity classes and calculating complexity of algorithms inside out. Perhaps this knowledge doesn't help him naturally calculate the informational complexity of the parts of scientific theories that are phrased in natural languages like English?

Just to be clear: there are two unrelated notions of "complexity" blurred together in the above comment. The Complexity Zoo discusses computational complexity theory -- it discusses how the run-time of an algorithm scales with the size of its input (thereby classifying algorithms into P, EXPTIME, etc.).

Kolmogorov complexity is unrelated: it is the minimum number of bits (in some fixed universal programming language) required to represent a given algorithm. Eliezer's argument for MWI rests on Kolmogorov complexity and has nothing to do with computational complexity theory.

I'm sure Scott Aaronson is familiar with both, of course; I just want to make sure LWers aren't confused about it.
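The distinction can be made concrete with a small sketch (my own illustration, not from the Complexity Zoo or the Sequences): computational complexity is about how an algorithm's runtime scales with input size, while Kolmogorov complexity is about how short a description of the data can be. The latter is uncomputable in general, but a general-purpose compressor like zlib gives a crude upper bound on it.

```python
import os
import zlib

# Computational complexity (the Complexity Zoo's subject) asks how *runtime*
# scales with input size: sorting 10,000 items takes O(n log n) comparisons
# whether the items are patterned or random.

# Kolmogorov complexity asks how *short a description* of the data can be.
# It is uncomputable, but compressed size gives a rough upper bound.
patterned = b"ab" * 5_000        # 10,000 bytes generated by a tiny "program"
random_ish = os.urandom(10_000)  # 10,000 bytes with no exploitable pattern

print(len(zlib.compress(patterned)))   # far smaller than 10,000
print(len(zlib.compress(random_ish)))  # roughly 10,000: incompressible
```

Both strings are the same size as inputs to a sorting algorithm, but their description lengths differ enormously; that is the sense of "complexity" the MWI argument leans on.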

Comment author: XiXiDu 22 November 2010 11:45:51AM *  0 points [-]

Complexity is mentioned very often on LW but there is no post that works out the different notions?

Comment author: CarlShulman 22 November 2010 02:36:57PM *  3 points [-]
Comment author: timtyler 23 November 2010 09:10:01PM 0 points [-]
Comment author: wedrifid 20 November 2010 09:16:24PM 4 points [-]

No; for me LW is mostly useful for noticing bullshit and cutting it away from my thoughts. When LW says someone's wrong, we may or may not be right; but when LW says someone's saying bullshit, we're probably right.

I couldn't agree more. The "extra nerdy folks know exactly why the question is useless" theme is similarly incisive.