Sarunas comments on Leaving LessWrong for a more rational life - Less Wrong
My basic thesis is that even if that was not the intent, the result has been the production of idiots. Specifically, a type of idiotic madness that causes otherwise good people, self-proclaimed humanitarians, to disparage the only sort of progress which has the potential to alleviate all human suffering, forever, on accelerated timescales. And they do so for reasons that are not grounded in empirical evidence, because they were taught, through demonstration, modes of non-empirical thinking in the Sequences, and conditioned to think this was okay through social engagement on LW.
When you find yourself digging a hole, the sensible and correct thing to do is stop digging. I think we can do better, but I'm burned out on trying to reform from the inside. Or perhaps I'm no longer convinced that reform can work given the nature of the medium (social pressures of blog posts and forums work counter to the type of rationality that should be advocated for).
I don't want to take that away. But for me LW was not just a baptismal font for discovering rationality; it was also an effort to get people to work on humanitarian relief and existential risk reduction. I hope you don't think me crazy for saying that LW has had a subject-matter bias in these directions. But on at least some of these counts, the effect of LW's, MIRI's, and/or Yudkowsky's specific focus on these issues may be not just suboptimal but actually negative. To be precise: it may actually be causing more suffering than would otherwise exist.
We are finally coming out of a prolonged AI winter. And just as funding becomes available to move the state of the art in automation forward, and to accelerate the progress in life sciences and molecular manufacturing that will bring great humanitarian change, we have created a band of Luddites who fear the solution more than the problem. And in a strange twist of doublethink, they consider themselves humanitarians for fighting progress.
I am myself working on various projects which I expect to have positive effects on the world. Outside of work, LW has at times occupied a significant fraction of my leisure time. For that to be justified, it must be an activity of higher utility than working more hours on my startup, making progress on my molecular nanotech and AI side projects, or enriching myself personally in other ways (family time, reading, etc.). I saw the Rationality reading group as a chance to do something which would conceivably grow that community by a measurable amount, thereby justifying the time expenditure. However, if all I am doing is bringing more people into a community that is actively working against developments in artificial intelligence that have a chance of relieving human suffering within a single generation… the Hippocratic Corpus comes to mind: "first, do no harm."
I am not sure yet what I will fill the time with. Maybe I'll get off my butt and start making more concrete progress on some of the nanotech and AI stuff that I have been letting slide in recent years.
I recognize also that I am making broad generalizations which do not always apply to everyone. You seem to be an exception, and I wish I had engaged with you more. I will also miss TheAncientGeek's contrarian posts, as well as those of many others who deserve credit for not following a herd mentality.
If I understand correctly, you think that LW, MIRI, and other closely related people might have a net negative impact, because they distract some people from the more productive subareas/approaches of AI research and existential risk prevention, directing them to subareas which you estimate to be much less productive. For the sake of argument, let's assume that is correct, and that if all people who follow MIRI's approach to AGI turned to those subareas of AI that are more productive, it would be a net benefit to the world. But you should consider the other side of the coin: don't blogs like LessWrong, or books like N. Bostrom's, actually attract some students to consider working on AI, including the areas you consider beneficial, who would otherwise be working in areas unrelated to AI? Wouldn't the number of people who have even heard of the concept of existential risk be smaller without people like Yudkowsky and Bostrom? I don't have numbers, but since you are concerned about brain drain in other subareas of AGI and existential risk research, do you think it is unlikely that the popularization work done by these people attracts enough young people to AGI and existential risk research in general to compensate for the loss of a few individuals, even in subareas of these fields that are unrelated to FAI?
But do people here actually fight progress? Has anyone actually retired from (or been dissuaded from pursuing) AI research after reading Bostrom or Yudkowsky?
If I understand you correctly, you fear that concerns about AI safety, being the kind of thing that evokes strong emotions in a listener's mind, are sooner or later bound to be picked up by populist politicians and activists, who would sow and exploit these fears in the minds of the general population in order to win elections/popularity/prestige among their peers/etc., thus leading to various regulations and restrictions on funding, because that is what these activists (having become popular and influential by catering to the fears of the masses) would demand?
I'm not sure how someone standing on a soapbox and yelling "AI is going to kill us all!" (Bostrom, admittedly not an actual quote) can be interpreted as actually helping get more people into practical AI research and development.
You seem to be presenting a false choice: is there more awareness of AI in a world with Bostrom et al., or the same world without? But it doesn't have to be that way. Ray Kurzweil has done quite a bit to keep interest in AI alive without fear-mongering. Maybe we need more Kurzweils and fewer Bostroms.
Data point: a feeling that I ought to do something about AI risk is the only reason why I submitted an FLI grant proposal that involves some practical AI work, rather than just figuring that the field isn't for me and doing something completely different.
I don't know how many copies of Bostrom's book were sold, but it was on the New York Times bestseller list, and some of those copies were read by high school students. Since very few people leave practical AI research for FAI research, even if only a tiny fraction of those young readers think, "This AI thing is really exciting and interesting. Instead of majoring in X (which is unrelated to AI), I should major in computer science and focus on AI," it would probably result in a net gain for practical AI research.
I argued against this statement:
When people say that an action leads to a negative outcome, they usually mean that taking that action is worse than not taking it, i.e. they compare the result to zero. If another option is available, then the word "suboptimal" should be used instead. Since I argued against "negativity" and not "suboptimality", I don't think that the existence of other options is relevant here.