Eneasz comments on Goals for which Less Wrong does (and doesn't) help - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Comments (101)
And that claim is what I have been inquiring about. How is an outsider supposed to tell whether the people here are the best rationalists around? Your post simply claimed this but provided no evidence for outsiders to follow up on. The only exceptional and novel thesis to be found on LW concerns decision theory, which is not only buried but impossible to judge for value unless one already has a prior education in the subject. The only exceptional and novel belief (prediction) here is the one regarding the risks posed by AGI. As with the former, one is unable to judge any of its claims without reading the sequences (or so it is claimed). But why would one do so in the first place? Outsiders have no way to judge the credibility of this movement except by what its members say about it. This is my problem when I try to introduce people to Less Wrong: they don't see why it is special. They skim a few posts and find nothing new there. You have to differentiate Less Wrong from other sources of epistemic rationality. What novel concepts are to be found here, and what can you learn from Less Wrong that you don't already know or won't come across elsewhere?
Your post gives the impression that Less Wrong can provide great insights for personal self-improvement. That may be so, but the people it might help won't read it anyway, or won't be able to understand it. I simply doubt that people like you learn much from it. What have you learnt from Less Wrong, and how did it improve your life? I have no formal education, but what I've read of LW so far does not seem very impressive, in the sense that there was nothing that disagreed with my existing views in a way that would let me update my beliefs and improve my decisions. I haven't come across any post that gave me a feeling of great insight; most of it was either obvious or something I had figured out myself before (much less formally, of course). The most important idea associated with Less Wrong seems to be Friendly AI, and what's special about LW is the strong commitment here to that topic. That's why I'm constantly picking on it. And the best argument offered for why Less Wrong hasn't improved some people's perception of the AI topic is that those people are intellectually impotent. So if you are arguing not that they should give up but that they should learn more, then I ask: how, if not through LW, which has obviously failed them?
To summarize: anyone interested enough to consider reading the sequences won't be able to spot much that he/she doesn't already know or that seems unique. The people Less Wrong would help the most don't have the education necessary to understand it. And the most important conclusion, that one should care about AGI safety, is insufficiently differentiated from the huge amount of writing concerned with marginal issues of rationality.
I disagree strongly. Using myself as my only data point (flawed, I know, but deeply relevant to me), the exact opposite is true. I had enough education to understand nearly everything (some of the more advanced math took extra study to follow). But I had never been exposed to such a large amount of concentrated sanity in writing. The greatest asset of LW wasn't that it provided education I didn't have, but that it provided sanity I'd never been exposed to. That made a huge difference.