Mirzhan_Irkegulov comments on Leaving LessWrong for a more rational life - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
I understand “politics is the mind-killer” well enough not to consider the LW community a tribe I have to belong to, and I could easily turn away from LW and say “the Sequences and FAI are nonsense,” just as I turned away from various gurus and ideologies before. But I disagree with what you're saying: not with your criticism of the Sequences or MIRI, but with your evaluation of the LW community and your unwillingness to engage anymore. Honestly, I'm upset that you suddenly stopped the reading group.
Despite Yudkowsky's obvious leanings, the Sequences are not about FAI, nor are they about Many-Worlds, the Tegmark Mathematical Universe, Roko's Basilisk, or whatever. They are first and foremost about how not to end up an idiot. They are about how not to become immune to criticism; they are about the Human's Guide to Words; they are about System 1 and System 2.
I don't care about Many Worlds, FAI, Fun theory and Jeffreyssai stuff, but LW was the thing that stopped me from being a complete and utter idiot. Now I see that people I care about, due to not internalizing LW's simple truths, are being complete and utter idiots, with their death spirals, and tribal affiliations, and meaningless usage of words, and theories that don't predict shit, and it breaks my heart.
If you want to criticize LW for a lack of actual instrumental rationality, you're not the first: Yvain did that in 2009, and he was right in his understanding of the problem, though he couldn't provide a solution either. I personally believe that combating akrasia is the most important task in the world, not FAI, because if a cure for akrasia could be found, we could train armies of superhuman scientists, who would then solve cancer, nanotechnology, and AI risk. That's why reading modern cognitive science, CBT, and neuroscience is probably more important than anything else; at least, that's what I think.
And here I am, somebody who wishes to be part of the LW community but who also disagrees, either conceptually or politically, with many of the LW memes. Yet you don't want to engage with me anymore. LW is not a monolith where everybody follows Yudkowsky; it's the most contrarian (and thus mentally healthy) place I've ever seen on the Web.
LW is not the be-all and end-all, but the Sequences are the bare minimum that people require to be sane. Sure, some people can develop correct epistemology through sheer study of maths and physics, so they don't need the Sequences; but I wasn't one of them, and many people aren't.
It's not about tribal things. If you had your own forum with lots of people who share your criticism of LW, hey, I'd go there and leave LW. But you don't have such a forum, so by leaving LW you just leave people like me alone. What's the point of that? Do you really believe that leaving LW like this has more utility than trying to create an island within it?
Honestly, I even started thinking that the only reason you wrote this post is that you realized you're too lazy to continue the reading group, so you needed a good excuse. But that's ridiculous, and I assign very low probability to it.
The sole point of my comment is this. I'm not upset because of your fundamental disagreement with Yudkowsky and LW's ideology and memes. I'm upset because you stopped the reading group, which is important because, like I said, the Sequences are about basic rational thinking, not the deep philosophy about which Yudkowsky indeed might be completely wrong. I'm upset because your departure would mean you think LW is completely lost, and that there isn't at least a sizable minority who'd say “you know what, you're right, let's do something about it”. That's sad.
(I'll update this post with more thoughts)
I've always had the impression that Eliezer intended them to lead a person from zero to FAI. So I'm not sure you're correct here.
...but that being said, the big Less Wrong takeaways for me were all from Politics is the Mind-Killer and the Human's Guide to Words -- in that those are the ones that have actually changed my behavior and thought processes in everyday life. They've changed the way I think to such an extent that I actually find it difficult to have substantive discussions with people who don't (for example) distinguish between truth and tribal identifiers, distinguish between politics and policy, avoid arguments over definitions, and invoke ADBOC when necessary. Being able to have discussions without running over such roadblocks is a large part of why I'm still here, even though my favorite posters all seem to have moved on. Threads like this one basically don't happen anywhere else that I'm aware of.
Someone recently had a blog post summarizing the most useful bits of LW's lore, but I can't for the life of me find the link right now.
I'm not sure if this is what you were thinking of (seeing as how it's about a year old now), but "blog post summarizing the most useful bits of LW's lore" makes me think of Yvain's Five Years and One Week of Less Wrong.
Eliezer states this explicitly on numerous occasions: his reason for writing the blog posts was to motivate people to work with him on FAI. I'm having trouble coming up with exact citations, however, since it's not very googleable.
My prior perception of the Sequences was that EY started from a firm base of generally good advice about thinking. Sequences like the Human's Guide to Words and How to Actually Change Your Mind stand on their own. He then, however, went off the deep end trying to extend and apply these concepts to questions in the philosophy of mind, ethics, and decision theory in order to motivate an interest in friendly AI theory.
I thought that perhaps the mistakes made in those sequences were correctable one-off errors. Now I am of the opinion that the way in which that philosophical inquiry was carried out doomed the project to failure from the start, even if the details of the failure are subject to Yudkowsky's own biases. Reasoning by thought experiment alone, over questions that are not subject to experimental validation, does nothing more than expose one's priors. And either you agree with the priors or you don't. For example, does quantum physics support the assertion that identity is the instance of computation, or the information being computed? Neither. But you could construct a thought experiment that validates either view based on the priors you bring to the discussion, and I wasted much time countering his thought experiments with those of my own creation before I understood the Sisyphean task I was undertaking :\
As another person who thinks that the Sequences and FAI are nonsense (more accurately, the novel elements in the Sequences are nonsense; most of them are not novel), I have my own theory: LW is working by accidentally being counterproductive. You have people with questionable beliefs, who think that any rational person would just have to believe them. So they try to get everyone to become rational, thinking it would increase belief in those things. Unfortunately for them, when they try this, they succeed too well--people listen to them and actually become more rational, and actually becoming rational doesn't lead to belief in those things at all. Sometimes it even provides more reasons to oppose those things--I hadn't heard of Pascal's Mugging before I came here, and it certainly wasn't intended to be used as an argument against cryonics or AI risk, but it's pretty useful for that purpose anyway.
How is Pascal's Mugging an argument against cryonics?
It's an argument against "even if you think the chance of cryonics working is low, you should do it because if it works, it's a very big benefit".
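A toy sketch of why that argument structure is suspect (the numbers here are made up for illustration, not actual estimates of cryonics' chances or benefits): whoever names the payoff can always outbid whatever probability discount you apply.

```python
# Naive expected-value reasoning about a long-shot bet.
# Hypothetical numbers only; nothing here estimates real probabilities.
def expected_value(p_success, payoff, cost):
    """Expected value of paying `cost` for a `p_success` chance at `payoff`."""
    return p_success * payoff - cost

# Even at a one-in-a-million chance, a big enough claimed payoff
# makes the naive expected value positive:
print(expected_value(p_success=1e-6, payoff=1e9, cost=100.0))   # 900.0

# And if you object that the probability is even lower, the claimant
# can simply name a larger payoff:
print(expected_value(p_success=1e-6, payoff=1e12, cost=100.0))  # 999900.0
```

This is the same exploit Pascal's Mugging points at: the payoff term is under the claimant's control, so naive expected-value maximization can be driven to endorse almost any purchase.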
Ok, it's an argument against a specific argument for cryonics. I'm ok with that (it was a bad argument for cryonics to start with). Cryonics does have a lot of problems, not least of which is cost. The money spent annually on life-insurance premiums for the cryopreservation of a ridiculously tiny segment of the population is comparable to the research budget of SENS, which would benefit everybody. What is up with that?
That said, I'm still signing up for Alcor. But I'm aware of the issues :\
Clarification: I don't think they're nonsense, even though I don't agree with all of them. Most of them just haven't had the impact of PMK and HGW.
My basic thesis is that even if that was not the intent, the result has been the production of idiots. Specifically, a type of idiotic madness that causes otherwise good people, self-proclaimed humanitarians, to disparage the only sort of progress that has the potential to alleviate all human suffering, forever, on accelerated timescales. And they do so for reasons that are not grounded in empirical evidence, because they were taught, through demonstration, modes of non-empirical thinking from the Sequences, and conditioned to think this was okay through social engagement on LW.
When you find yourself digging a hole, the sensible and correct thing to do is stop digging. I think we can do better, but I'm burned out on trying to reform from the inside. Or perhaps I'm no longer convinced that reform can work given the nature of the medium (social pressures of blog posts and forums work counter to the type of rationality that should be advocated for).
I don't want to take that away. But for me LW was not just a baptismal font for discovering rationality; it was also an effort to get people to work on humanitarian relief and existential risk reduction. I hope you don't think me crazy for saying that LW has had a subject-matter bias in these directions. But on at least some of these accounts, the effect of LW and/or MIRI and/or Yudkowsky's specific focus on these issues may be not just suboptimal but actually negative. To be precise: it may actually be causing more suffering than would otherwise exist.
We are finally coming out of a prolonged AI winter. And although funding is finally available to move the state of the art in automation forward, and to accelerate the progress in life sciences and molecular manufacturing that will bring great humanitarian change, we have created a band of Luddites who fear the solution more than the problem, and who, in a strange twist of doublethink, consider themselves humanitarians for fighting progress.
I am myself working on various projects in my life which I expect to have positive effects on the world. Outside of work, LW has at times occupied a significant fraction of my leisure time. For that to be justified, it must be an activity of higher utility than working more hours on my startup, making progress on my molecular nanotech and AI side projects, or enriching myself personally in other ways (family time, reading, etc.). I saw the Rationality reading group as a chance to do something which would conceivably grow that community by a measurable amount, thereby justifying the time expenditure. However, if all I am doing is bringing more people into a community that is actively working against developments in artificial intelligence that have a chance of relieving human suffering within a single generation… the Hippocratic corpus comes to mind: “first, do no harm.”
I am not sure yet what I will fill the time with. Maybe I'll get off my butt and start making more concrete progress on some of the nanotech and AI stuff that I have been letting slide in recent years.
I recognize also that I am making broad generalizations which do not always apply to everyone. You seem to be an exception, and I wish I had engaged with you more. I also will miss TheAncientGeek's contrarian posts, as well as many others who deserve credit for not following a herd mentality.
Thank you for your response, that's really important for me.
I've never seen disparaging of actually helping people on LW. Can you point to examples? Can you argue that it is a tendency? You say there is lots of outright hostility to any effort against x-risks and human misery unless it comes from MIRI. I wouldn't ever have imagined anyone saying that of LW, but maybe I'm blind, so I'll be grateful if you prove me wrong. Yudkowsky is definitely pro-immortality and has supported donating to SENS.
I don't even think MIRI and MIRI-leaning LWers are against ongoing AI research. I've never heard anything like “please stop doing any AI until we figure out friendliness”, only “hey, can you please put more effort into friendliness too, it's very important?” And even if you think that MIRI's focus on friendliness is an order of magnitude misplaced, it's just a mistake of prioritization, not a fundamental philosophical blunder. Again, if you can expand on this topic, I would only say thank you.
Maybe “reform” isn't the right word. The Sequences aren't going anywhere, so of course LW will be FAI-centric for a long time, but within LW there is already a substantial number of people (that's my impression; I never actually counted) who are not simply contrarian but actually assign different priorities to what should be done about the world, more in line with your thoughts than Yudkowsky's. Maybe you can still stay and steer this substantial minority in the right direction, instead of uselessly splitting.
I bet most people on LW are not high-karma prolific writers; they are less knowledgeable and less confident, but also more open to contrary views such as yours. Just one big article about how you think LW's focus is misplaced could be of extreme help to such people. Which, BTW, includes me, because I've never posted anything.
I'd actually love to see you write articles on all your theses here, on LW. LW-critical articles have already been promoted a few times, including Yvain's article, so it's not as if LW is criticism-intolerant.
If you actually do that, and provide lots of examples and evidence, it would be a breath of fresh air for all those people who will continue to be attracted to LW. You don't have to put titanic effort into “reform”; just erect a pole.
I was actually making a specific allusion to the hostility towards practical, near-term artificial general intelligence work. I have at times in the past advocated for working on AGI technology now, not later, and been given robotic responses that I'm offering reckless and dangerous proposals, and helpfully directed to go read the sequences. I once joined #lesswrong on IRC and introduced myself as someone interested in making progress in AGI in the near-term, and received two separate death threats (no joke). Maybe that's just IRC—but I left and haven't gone back.
Things have changed, believe me.
Can you point to some examples? Yvain's article was recently on the Main page under Featured articles, for example.
I don't know exactly what process generates the featured articles, but I don't think it has much to do with the community's current preoccupations.
I don't know the exact process either, but I always thought somebody deliberately chooses them each week, because they are often around the same topic. So somebody thought it was a good idea to encourage everybody to read an LW-critical article.
My point is, I don't believe the LW community has suddenly become intolerant of criticism, or incapable of dialog on whether FAI is a good thing, or fanatical about FAI and Yudkowsky's ideas. Oh, and I'm happy to be proven otherwise!
Seriously, look at the top 30-day contributors:
Only So8res is associated with MIRI, AFAIK. My impression from the comments of the people above is that they are pretty much capable of dialog and not at all fanatical about FAI.
Meaning that in Mark's map, the LW community is something different from what it is in the territory. He thinks he is leaving a crazy cult producing a memetic hazard; I think he is leaving a community of pretty much independent-thinking people who could easily counter MIRI's memes.
That is, even if Mark is completely correct about MIRI, his leaving is irrelevant: it's not a net improvement, but some strange unrelated act with negative utility.
My point was that it has become a lot more tolerant.
Maybe, but the core beliefs and cultural biases haven't changed, in the years that I've been here.
But you didn't get karmassinated or called an idiot.
See http://lesswrong.com/lw/igf/the_genie_knows_but_doesnt_care/
If I understand correctly, you think that LW, MIRI, and other closely related people might have a net negative impact, because they distract some people from the more productive subareas and approaches of AI research and existential-risk prevention, directing them to subareas you estimate to be much less productive. For the sake of argument, let's assume that is correct, and that if all the people who follow MIRI's approach to AGI turned to the more productive subareas of AI, it would be a net benefit to the world. But you should consider the other side of the coin: don't blogs like LessWrong, or books like Bostrom's, actually attract some students to consider working on AI, including the areas you consider beneficial, who would otherwise work in areas unrelated to AI? Wouldn't the number of people who have even heard of the concept of existential risk be smaller without people like Yudkowsky and Bostrom? I don't have numbers, but since you are concerned about brain drain from other subareas of AGI and existential-risk research, do you think it is unlikely that the popularization work done by these people attracts enough young people to AGI and existential risks in general to compensate for the loss of a few individuals, even in the subareas of these fields that are unrelated to FAI?
But do people here actually fight progress? Has anyone actually retired from (or was dissuaded from pursuing) AI research after reading Bostrom or Yudkowsky?
If I understand you correctly, you fear that concerns about AI safety, being a thing that can invoke various emotions in a listener's mind, are bound sooner or later to be picked up by populist politicians and activists, who would sow and exploit these fears in the minds of the general population in order to win elections, popularity, or prestige among their peers, thus leading to various regulations and restrictions on funding, because that is what these activists (having become popular and influential by catering to the fears of the masses) would demand?
I'm not sure how someone standing on a soapbox and yelling "AI is going to kill us all!" (Bostrom, admittedly not a quote) can be interpreted as actually helping get more people into practical AI research and development.
You seem to be presenting a false choice: is there more awareness of AI in a world with Bostrom et al., or in the same world without them? But it doesn't have to be that way. Ray Kurzweil has done quite a bit to keep interest in AI alive without fear-mongering. Maybe we need more Kurzweils and fewer Bostroms.
Data point: a feeling that I ought to do something about AI risk is the only reason why I submitted an FLI grant proposal that involves some practical AI work, rather than just figuring that the field isn't for me and doing something completely different.
I don't know how many copies of Bostrom's book were sold, but it was on the New York Times bestseller list. Some of those books were read by high-school students. Since very few people leave practical AI research for FAI research, even if only a tiny fraction of those young readers think, "This AI thing is really exciting and interesting. Instead of majoring in X (which is unrelated to AI), I should major in computer science and focus on AI," it would probably result in a net gain for practical AI research.
I argued against this statement:
When people say that an action leads to a negative outcome, they usually mean that taking the action is worse than not taking it, i.e. they compare the result to zero. If you add another option, then the word "suboptimal" should be used instead. Since I argued against "negativity" and not "suboptimality", I don't think the existence of other options is relevant here.
Interesting, I seem to buck the herd in nearly exactly the opposite manner as you.
Meaning?
You buck the herd by saying their obsession with AI safety is preventing them from participating in the complete transformation of civilization.
I buck the herd by saying that the whole singulatarian complex is a chimera that has almost nothing to do with how reality will actually play out and its existence as a memeplex is explained primarily by sociological factors rather than having much to do with actual science and technology and history.
Oh, well I mostly agree with you there. Really ending aging will have a transformative effect on society, but the invention of AI is not going to radically alter power structures in the way that singulatarians imagine.
See, I include the whole 'imminent radical life extension' and 'Drexlerian molecular manufacturing' idea sets in the singulatarian complex...
The craziest person in the world can still believe the sky is blue.
Ah, but in this case as near as i can tell it is actually orange.
This just seems stupid to me. Ending aging is fundamentally SLOW change. In 100 or 200 or 300 years from now, as more and more people gain access to anti-aging (since it will start off very expensive), we can worry about that. But conscious AI will be a force in the world in under 50 years. And it doesn't even have to be SUPER intelligent to cause insane amounts of social upheaval. Duplicability means that even 1 human level AI can be world-wide or mass produced in a very short time!
"Will"? You guarantee that?
"The medical revolution that began with the beginning of the twentieth century had warped all human society for five hundred years. America had adjusted to Eli Whitney's cotton gin in less than half that time. As with the gin, the effects would never quite die out. But already society was swinging back to what had once been normal. Slowly; but there was motion. In Brazil a small but growing alliance agitated for the removal of the death penalty for habitual traffic offenders. They would be opposed, but they would win."
Larry Niven: The Gift From Earth
Well there are some serious ramifications that are without historical precedent. For example, without menopause it may perhaps become the norm for women to wait until retirement to have kids. It may in fact be the case that couples will work for 40 years, have a 25-30 year retirement where they raise a cohort of children, and then re-enter the work force for a new career. Certainly families are going to start representing smaller and smaller percentages of the population as birth rates decline while people get older and older without dying. The social ramifications alone will be huge, which was more along the lines of what I was talking about.
Can you link to a longer analysis of yours regarding this?
I simply feel overwhelmed when people discuss AI. To me intelligence is a deeply anthropomorphic category, which includes subcategories like having a good sense of humor. Reducing it to optimization, without even sentience or conversational ability with self-consciousness... my brain throws out the stop sign at this point already, and it is not even AI: it is the preliminary study of human intelligence that already dehumanizes, deanthropomorphizes the idea of intelligence and makes it sound like a simple, brute-force algorithm. Like Solomonoff induction, another thing my brain completely freezes over: how can you have truth and clever solutions without really thinking, just by throwing a huge number of random ideas in and seeing what survives testing? Would it all be so quantitative? Can you reduce the wonderful qualities of the human mind to quantities?
Intelligence to what purpose?
Nobody's saying AI will be human without humor, joy, etc. The point is AI will be dangerous, because it'll have those aspects of intelligence that make us powerful, without those that make us nice. Like, that's basically the point of worrying about UFAI.
But is it possible to have power without all the rest?
Certainly. Why not?
Computers can already outperform you in a wide variety of tasks. Moreover, today, with the rise of machine learning, we can train computers to do pretty high-level things, like object recognition or sentiment analysis (and sometimes outperform humans at these tasks). Isn't that power?
As for Solomonoff induction... what do you think your brain is doing when you are thinking? Some kind of optimized search in hypothesis space, where you consider only a very, very small set of hypotheses (compared to the entire space), hopefully good enough ones. Solomonoff induction, by contrast, checks all of them, every single hypothesis, and finds the best.
Solomonoff induction is so much thinking that it is incomputable.
Since we don't have that much raw computing power (and never will), the hypothesis search must be heavily optimized: throwing out unpromising directions of search, searching in regions with a high probability of success, using prior knowledge to narrow the search. That's what your brain is doing, and that's what machines will do. That's not "simple and brute-force", because simple brute-force algorithms are either impractically slow or not computable at all.
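The contrast between exhaustive and prior-guided search can be made concrete with a toy example. Here the "hypothesis space" is just integer-coefficient lines y = a*x + b (a deliberately tiny stand-in; Solomonoff induction ranges over all computable programs, which is what makes it uncomputable). The brute-force searcher scores every hypothesis; the guided searcher uses prior knowledge about the data to narrow the region it checks:

```python
import itertools

# Observations generated by y = 2*x + 3.
data = [(0, 3), (1, 5), (2, 7)]

def fit_error(a, b):
    """Sum of squared errors of hypothesis y = a*x + b on the data."""
    return sum((a * x + b - y) ** 2 for x, y in data)

# Brute force: score every hypothesis in the (toy) space.
# 101 * 101 = 10201 hypotheses checked.
brute = min(itertools.product(range(-50, 51), repeat=2),
            key=lambda ab: fit_error(*ab))

# Guided search: use prior knowledge (the slope should be near the
# observed difference between consecutive y values) to narrow the
# region searched. Only 5 * 11 = 55 hypotheses checked.
slope_guess = data[1][1] - data[0][1]  # = 2
guided = min(((a, b)
              for a in range(slope_guess - 2, slope_guess + 3)
              for b in range(-5, 6)),
             key=lambda ab: fit_error(*ab))

print(brute, guided)  # both recover (2, 3)
```

Both searchers find the same best hypothesis; the guided one just refuses to look where the prior says the answer can't be. That narrowing, not the scoring, is where the "thinking" happens.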
Eagles, too: they can fly and I cannot. The question is whether the currently foreseeable computerizable tasks are closer to flying or to intelligence. Which in turn depends on how high and how "magic" we consider intelligence to be.
Ugh, using Aristotelian logic? So it is not random hypotheses, but based on causality and logic.
I think, using your terminology, thinking is not the searching; it is finding the logical relationships so that not a lot of the space must be searched.
OK, that makes sense. Perhaps we can agree that logic and causality and actual reasoning are all about narrowing the hypothesis space to search. This is intelligence, not the search.
Yes. Absolutely. When that happens inside a human being's head, we generally call them 'mass murderers'. Even I only cooperate with society because there is a net long term gain in doing so; if that were no longer the case, I honestly don't know what I would do. Awesome, that's something new to think about. Thanks.
That's probably irrelevant, because mass murderers don't have power without all the rest. They are likely to have sentience and conversational ability with self-consciousness, at least.
Not sure. Suspect nobody knows, but seems possible?
I think the most instructive post on this is actually Three Worlds Collide, for making a strong case for the arbitrary nature of our own "universal" values.