Mirzhan_Irkegulov comments on Leaving LessWrong for a more rational life - Less Wrong
My basic thesis is that even if that was not the intent, the result has been the production of idiots. Specifically, a type of idiotic madness that causes otherwise good people, self-proclaimed humanitarians, to disparage the only sort of progress which has the potential to alleviate all human suffering, forever, on accelerated timescales. And they do so for reasons that are not grounded in empirical evidence, because they were taught, through demonstration, modes of non-empirical thinking by the Sequences, and conditioned to think this was okay through social engagement on LW.
When you find yourself digging a hole, the sensible and correct thing to do is stop digging. I think we can do better, but I'm burned out on trying to reform things from the inside. Or perhaps I'm no longer convinced that reform can work given the nature of the medium (the social pressures of blog posts and forums work counter to the type of rationality that should be advocated for).
I don't want to take that away. But for me, LW was not just a baptismal font for discovering rationality; it was also an effort to get people to work on humanitarian relief and existential risk reduction. I hope you don't think me crazy for saying that LW has had a subject-matter bias in these directions. But on at least some of these counts, the effect of LW's and/or MIRI's and/or Yudkowsky's specific focus on these issues may be not just suboptimal, but actually negative. To be precise: it may actually be causing more suffering than would otherwise exist.
We are finally coming out of a prolonged AI winter. And although funding is finally available to move the state of the art in automation forward, and to accelerate progress in the life sciences and molecular manufacturing that will bring great humanitarian change, we have created a band of Luddites who fear the solution more than the problem. And in a strange twist of doublethink, they consider themselves humanitarians for fighting progress.
I am myself working on various projects in my life which I expect to have positive effects on the world. Outside of work, LW has at times occupied a significant fraction of my leisure time. To be justified, that must be an activity of higher utility than working more hours on my startup, making progress on my molecular nanotech and AI side projects, or enriching myself personally in other ways (family time, reading, etc.). I saw the Rationality reading group as a chance to do something which would conceivably grow that community by a measurable amount, thereby justifying the time expenditure. However, if all I am doing is bringing more people into a community that is actively working against developments in artificial intelligence that have a chance of relieving human suffering within a single generation… the Hippocratic corpus comes to mind: “first, do no harm.”
I am not sure yet what I will fill the time with. Maybe I'll get off my butt and start making more concrete progress on some of the nanotech and AI stuff that I have been letting slide in recent years.
I recognize also that I am making broad generalizations which do not always apply to everyone. You seem to be an exception, and I wish I had engaged with you more. I also will miss TheAncientGeek's contrarian posts, as well as many others who deserve credit for not following a herd mentality.
Thank you for your response; it's really important to me.
I've never seen anyone on LW disparage actually helping people. Can you point to examples? Can you argue that it is a tendency? You say that there is lots of outright hostility to anything that works against x-risks and human misery, unless it comes from MIRI. I wouldn't even have imagined anyone would say that of LW, but maybe I'm blind, so I'll be grateful if you prove me wrong. Yudkowsky is definitely pro-immortality and has supported donating to SENS.
I don't even think MIRI and MIRI-leaning LWers are against ongoing AI research. I've never heard anything like “please stop doing any AI until we figure out friendliness”, only “hey, can you please put more effort into friendliness too, it's very important?” And even if you think that MIRI's focus on friendliness is an order of magnitude misplaced, that's just a mistake of prioritization, not a fundamental philosophical blunder. Again, if you can expand on this topic, I would only say thank you.
Maybe “reform” isn't the right word. The Sequences aren't going anywhere, so of course LW will be FAI-centric for a long time, but within LW there is already a substantial number of people (that's my impression, I never actually counted) who are not simply contrarian, but actually assign different priorities to what should be done about the world. More in line with your thoughts than with Yudkowsky's. Maybe you can still stay and steer this substantial minority in the right direction, instead of splitting off uselessly.
I bet most people on LW are not even high-karma prolific writers; they are less knowledgeable, less confident, but also more open to contrary views, such as yours. Just writing one big article about how you think LW's focus is misplaced could be of great help to such people. Which, BTW, includes me, because I never posted anything.
I'd actually love to see you write articles on all your theses here, on LW. LW-critical articles have already been promoted a few times, including Yvain's article, so it's not like LW is criticism-intolerant.
If you actually do that, and provide lots of examples and evidence, it would be a breath of fresh air for all those people who will continue to be attracted to LW. You don't have to put titanic effort into “reform”; just erect a pole.
I was actually making a specific allusion to the hostility towards practical, near-term artificial general intelligence work. I have at times in the past advocated for working on AGI technology now, not later, and been given robotic responses that I'm offering reckless and dangerous proposals, and helpfully directed to go read the sequences. I once joined #lesswrong on IRC and introduced myself as someone interested in making progress in AGI in the near-term, and received two separate death threats (no joke). Maybe that's just IRC—but I left and haven't gone back.
Things have changed, believe me.
Can you point to some examples? Yvain's article was recently on the Main page under Featured articles, for example.
I don't know exactly what process generates the featured articles, but I don't think it has much to do with the community's current preoccupations.
I don't know the exact process either, but I always thought somebody deliberately chose them each week, because they are often around the same topic. So somebody thought it was a good idea to encourage everybody to read an LW-critical article.
My point is, I don't believe the LW community suddenly became intolerant of criticism. Or incapable of dialog on whether FAI is a good thing. Or fanatical about FAI and Yudkowsky's ideas. Oh, and I'm happy to be proven otherwise!
Seriously, look at the top contributors for the last 30 days:
Only So8res is associated with MIRI, AFAIK. My impression from the comments of the people above is that they are quite capable of dialog and are not fanatical about FAI at all.
Meaning that the LW community in Mark's map is something different from the LW community in the territory. He thinks he is leaving a crazy cult producing a memetic hazard. I think he is leaving a community of pretty much independent-thinking people who could easily counter MIRI's memes.
That is, even if Mark is completely correct about MIRI, his leaving is irrelevant: it's not a net improvement, but a strange, unrelated act with negative utility.
My point was that it has become a lot more tolerant.
Maybe, but the core beliefs and cultural biases haven't changed in the years that I've been here.
But you didn't get karmassinated or called an idiot.
This is true. I did not expect the overwhelmingly positive response I got...
See http://lesswrong.com/lw/igf/the_genie_knows_but_doesnt_care/