Perplexed comments on Goals for which Less Wrong does (and doesn't) help - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
And that claim is what I have been inquiring about. How is an outsider to tell whether the people here are the best rationalists around? Your post claimed this but provided no evidence for outsiders to follow up on. The only exceptional and novel thesis to be found on LW concerns decision theory, which is not only buried but impossible to judge as valuable without a prior education in the subject. The only exceptional and novel belief (prediction) here concerns the risks posed by AGI. As with the former, one is unable to judge any of the claims without reading the sequences (or so it is claimed). But why would one do so in the first place? Outsiders cannot judge the credibility of this movement except by what its members say about it. This is my problem when I try to introduce people to Less Wrong: they don't see why it is special. They skim a few posts and find nothing new. You have to differentiate Less Wrong from other sources of epistemic rationality. What novel concepts are to be found here? What can you learn from Less Wrong that you might not already know or would not come across elsewhere?
Your post gives the impression that Less Wrong can provide great insights for personal self-improvement. That is indisputable, but those whom it might help won't read it anyway, or won't be able to understand it. I just doubt that people like you learn much from it. What have you learned from Less Wrong, and how did it improve your life? I have no formal education, but what I've read of LW so far does not seem very impressive, in the sense that there was nothing that disagreed with me, nothing that let me update my beliefs and improve my decisions. I haven't come across any post that gave me a feeling of great insight; most of it was either obvious or something I had figured out myself before (much less formally, of course). The most important idea associated with Less Wrong seems to be friendly AI; what's special about LW is the strong commitment to that topic. That's why I'm constantly picking on it. And the best argument for why Less Wrong hasn't, in some cases, improved people's perception of the topic of AI is that they are intellectually impotent. So if you are arguing not that they should give up, but that they should learn more, then I ask: how, if not through LW, which has obviously failed?
To summarize: anyone inclined to consider reading the sequences won't be able to spot much that he or she doesn't already know, or that seems unique. The people whom Less Wrong would help most do not have the education necessary to understand it. And the most important conclusion, that one should care about AGI safety, is insufficiently differentiated from the huge amount of writing on marginal issues of rationality.
Obviously you cannot form a good judgment as to whether a person is a good rationalist by determining whether his opinion on a difficult subject matches your opinion. And, even more obviously, you can't do so based on Anna's authority.
Instead, you need to interact with the person on an issue of intermediate difficulty and notice whether what he says clears cobwebs from your mind and shines light in dark corners. Or whether you come away from the conversation more confused and in the dark than before.
You may notice that I am implicitly defining rationalism in terms of how well a person communicates rather than how well he thinks. And even more than that, I am focusing on how well he communicates with you, rather than how well he communicates in general. If you wish, you can object, saying "That is not rationalism." Well, perhaps not. But it is the characteristic you should seek out in your interlocutors.