Making fun of things is actually really easy if you try even a little bit. Nearly anything can be made fun of, and in practice nearly anything is made fun of. This is concerning for several reasons.
First, if you are trying to do something, whether people are making fun of it is not necessarily a good signal as to whether it's actually good. A lot of good things get made fun of, and a lot of bad things get made fun of.[1] Ideally, only bad things would get made fun of, making it easy to tell what is good and what is bad - but this doesn't appear to be the case.
Second, if you want to make something sound bad, it's really easy. If you don't believe this, take a politician or organization that you like and search for criticism of it. It should generally be trivial to find people making fun of it for reasons that would sound compelling to a casual observer - even if those reasons aren't actually good. But a casual observer doesn't know that, and thus can easily be fooled.[2]
Further, because it's so easy to make fun of things, a clever person can find themselves unnecessarily contemptuous of anything and everything. I've noticed this sort of premature cynicism as a failure mode in many otherwise very intelligent people. Finding faults with things is trivial, but it's a short step from "it's easy to find faults with everything" to "everything is bad." That is an undesirable mode of thinking - even if true, it's not particularly helpful.
[1] Whether or not something gets made fun of by the right people is a better indicator. That said, if you know who the right people are you usually have access to much more reliable methods.
[2] If you're still not convinced, take a politician or organization that you do like and really truly try to write an argument against that politician or organization. Note that this might actually change your opinion, so be warned.
Do you have any actual reason (introspection doesn't count) to "expect LWers to be the kind of high-NFC/TIE people who try to weigh evidence in a two-sided way before deciding"? I'm not asking if you can fathom or rationalize up a reason, I'm requesting the raw original basis for the assumption.
Your reduced optimism is a recognition within my assessment rather than without it; you agree, but you see deeper properties. Nonsensical arguments are not useful after a certain point, naturally, but where the point lies is a matter we can only determine after assessing each nonsensical idea in turn. We can detect patterns among the space of nonsensical hypotheses, but we'd be neglecting our duty as rationalists and Bayesians alike if we didn't properly break down each hypothesis in turn to determine its proper weight and quality over the space of measured data. Solomonoff induction is what it is because it takes every possibility into account. Of course if I start off a discussion saying nonsense is useful, you can well predict what the reaction to that will be. It's useful, to start off, from a state of ignorance. (The default state of all people, LessWrongers included.)
Yes, that is the thing which I do credit LessWrong on. The problem is in the rate of advancement; nobody is really getting solid returns on this investment. It's useful, but not in excess of the average usefulness coming from any other field of study or social process.
I have a strong opinion on this that LessWrong has more or less instructed me to censor. Suffice to say I am personally content with leaving that funding and effort in place.
That is intensely interesting and the kind of thing I'd yell at you for not looking more into, let alone remembering only dimly. Events like these are where we're beginning to detect returns on all this investment. I would immediately hold an interview in response to such a stimulus.
That is, word for word, thought for thought that wrote it, perception for perception that generated the thoughts, the exact basis of the understanding that leads me to make the arguments I am making now.
This is, primarily, why I do things other than oppose the subject bans. Leaving it banned, leaving it taboo, dampens the powder considerably. This is where I can help, if LessWrong could put up with the fact that I know how to navigate the transition. But of course that's an extraordinary claim; I'm not allowed to make it. First I have to give evidence that I can do it. Do what? Improve LessWrong on a mass scale. Evidence of that? In what form? Should I bring about the Singularity? Should I improve some other (equally resistant) rationalist community? What evidence can I possibly give of my ability to do such a thing? (The last person I asked this question was unable to divine the answer.)
I'm left with having to argue that I'm on a level where I can manage a community of rationalists. It's not an argument any LessWronger is going to like very much at all. You're able to listen to it now because you're not the average LessWronger. You're different, and if you've properly taken the time to reflect on the opening question of this comment, you'll know exactly why that is. I'm not telling you this to flatter you (though it is reason to be flattered), but rather because I need you to be slightly more self-aware in order for you to see the true face of LessWrong that's hidden behind your assumption that the members of the mass are at all similar to yourself on an epistemic level. How exactly to utilize that is something I've yet to fully ascertain, but it is advanced by this conversation.
Interesting article, and I'm surprised/relieved/excited to see just how upvoted it's been. I can say this much: Wanting the last word, wanting to Correct the Internet... These are useful things that advance rationality. Apathy is an even more powerful force than either of those. I know a few ways to use it usefully. You're part of the solution, but you're not seeing it yet, because you're not seeing how far behind the mass really is.
LessWrong is a single point within a growing Singularity. I speak in grandiose terms because the implications of LessWrong's existence, growth, and path are themselves grand. Politics is one of three memetically spread conversational taboos outside of LessWrong. LessWrong merely formalized this generational wisdom. As Facebook usage picks up, and the art of internet argument is brought to the masses, we're seeing an increase in socioeconomic and sociopolitical debate. This is correct, and useful. However, nobody aside from myself and a few others that I've met seems to be noticing this. LessWrong itself is going to become generationally memetic. This is correct, and useful. When, exactly, this will happen is a function primarily of society. What, exactly, LessWrong looks like at that moment in history will offset billions of fates. Little cracks and biases will form cavernous gaps in a civilization's mindset. This moment in history is far off, so we're safe for the time being. (If that moment were right now, I would be spending as much of my time as possible working on AGI to crush the resulting leviathan.)
Focusing on this one currently-LessWrong-specific meme, what do you see happening if LW's memetic moment were right now? Right now, is LessWrong merely restraining its own members?
I agree with that, read literally, but I disagree with the implied conclusion. Nonsensical arguments hit diminishing (and indeed negative) returns so quickly that in practice they're nearly useless. (There are situations where this isn't so...