Mirzhan_Irkegulov comments on Leaving LessWrong for a more rational life - Less Wrong
Thank you for your response, that's really important to me.
I've never seen actually helping people disparaged on LW. Can you point to examples? Can you argue that it's a tendency? You say there is lots of outright hostility to any work against x-risks and human misery unless it comes from MIRI. I wouldn't have imagined anyone saying that of LW, but maybe I'm blind, so I'll be grateful if you prove me wrong. Yudkowsky is definitely pro-immortality and has supported donating to SENS.
I don't even think MIRI and MIRI-leaning LWers are against ongoing AI research. I've never heard anything like "please stop doing any AI until we figure out friendliness," only "hey, can you please put more effort into friendliness too? It's very important." And even if you think MIRI's focus on friendliness is misplaced by an order of magnitude, that's just a mistake of prioritization, not a fundamental philosophical blunder. Again, if you can expand on this topic, I would only say thank you.
Maybe "reform" isn't the right word. The Sequences aren't going anywhere, so of course LW will be FAI-centric for a long time. But within LW there is already a substantial number of people (that's my impression; I never actually counted) who are not simply contrarian, but actually assign different priorities to what should be done about the world — more in line with your thinking than with Yudkowsky's. Maybe you could stay and steer this substantial minority in the right direction, instead of a pointless split.
I bet most people on LW are not high-karma prolific writers; they are less knowledgeable and less confident, but also more open to contrary views such as yours. Even one big article on why you think LW's focus is misplaced could be extremely helpful for such people. Which, BTW, includes me, because I've never posted anything.
I'd actually love to see you write articles on all your theses here, on LW. LW-critical articles have already been promoted a few times, including Yvain's article, so it's not as if LW is criticism-intolerant.
If you actually do that, and provide lots of examples and evidence, it would be a breath of fresh air for all those people, who will continue to be attracted to LW. You don't have to put titanic effort into "reform" — just erect a pole.
I was actually making a specific allusion to the hostility towards practical, near-term artificial general intelligence work. I have at times advocated for working on AGI technology now, not later, and been given robotic responses that I'm offering reckless and dangerous proposals, and helpfully directed to go read the Sequences. I once joined #lesswrong on IRC, introduced myself as someone interested in making near-term progress on AGI, and received two separate death threats (no joke). Maybe that's just IRC — but I left and haven't gone back.
Things have changed, believe me.
Can you point to some examples? Yvain's article was recently on the Main page under Featured articles, for example.
I don't know exactly what process generates the featured articles, but I don't think it has much to do with the community's current preoccupations.
I don't know the exact process either, but I always assumed somebody deliberately chooses them each week, because they're often on the same topic. So somebody thought it was a good idea to encourage everybody to read an LW-critical article.
My point is, I don't believe the LW community has suddenly become intolerant of criticism. Or incapable of dialogue on whether FAI is a good thing. Or fanatically devoted to FAI and Yudkowsky's ideas. And I'm happy to be proven otherwise!
Seriously, look at the top contributors of the last 30 days:
Only So8res is associated with MIRI, AFAIK. My impression from the comments of the people above is that they are perfectly capable of dialogue and are not fanatical about FAI at all.
Meaning that in Mark's map, the LW community is something different from what it is in the territory. He thinks he's leaving a crazy cult producing a memetic hazard. I think he's leaving a community of fairly independent-thinking people who could easily counter MIRI's memes.
That is, even if Mark is completely correct about MIRI, his leaving is beside the point: not a net improvement, but a strange, unrelated act with negative utility.
My point was that it has become a lot more tolerant.
Maybe, but the core beliefs and cultural biases haven't changed in the years I've been here.
But you didn't get karmassassinated or called an idiot.
This is true. I did not expect the overwhelmingly positive response I got...
See http://lesswrong.com/lw/igf/the_genie_knows_but_doesnt_care/