XiXiDu comments on Downvote stalkers: Driving members away from the LessWrong community? - Less Wrong Discussion
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Comments (128)
He receives a massive number of likes there, no matter what he writes. My guess is that he needs that kind of feedback, and he doesn't get it here anymore. Recently he requested that a certain topic should not be mentioned on the HPMOR subreddit, or otherwise he would go elsewhere. On Facebook he can easily ban people who mention something he doesn't like.
Given that you directly caused a fair portion of the thing that is causing him pain (i.e., spreading FUD about him, his orgs, etc.), this is like a win for you, right?
Why don't you leave armchair Internet psychoanalysis to experts?
I'm not sure how to respond to this comment, given that it contains no actual statements, just rhetorical questions, but the intended message seems to be "F you for daring to cause Eliezer pain, by criticizing him and the organization he founded."
If that's the intended message, I submit that when someone is a public figure, who writes and speaks about controversial subjects and is the founder of an org that's fairly aggressive about asking people for money, they really shouldn't be insulated from criticism on the basis of their feelings.
You could have simply not responded.
It wasn't, no. It was a reminder to everyone else of XiXi's general MO, and the benefit he gets from convincing others that EY is a megalomaniac, using any means necessary.
Circa 2005 I had a link to MIRI (then called the Singularity Institute) on my homepage. By 2009 I was even advertising LessWrong.
I am on record as saying that I believe most of the sequences to consist of true and sane material. I am on record as saying that I believe LessWrong to be the most rational community.
But in 2010, due to a certain incident that may not be mentioned here, I noticed that there are some extreme tendencies and beliefs that might easily outweigh all the positive qualities. I also noticed that a certain subset of people seems to have a very weird attitude when it comes to criticism pertaining to Yudkowsky, MIRI, or LW.
I've posted a lot of arguments that were never meant to decisively refute Yudkowsky or MIRI, but to show that many of the extraordinary claims can be weakened. The important point here is that I did not even have to do this, as the burden of evidence is not on me to disprove those claims, but on the people who make them. They need to prove that their claims are robust and not just speculations on possible bad outcomes.
You keep saying this and things like it, and not providing any evidence whatsoever when asked, directly or indirectly.
A win would be if certain people became a little less confident about the extraordinary claims he makes, and more skeptical of the mindset that CFAR spreads.
A win would be if he became more focused on exploration rather than exploitation, on increasing the robustness of his claims, rather than on taking actions in accordance with his claims.
A world in which I don't criticize MIRI is a world where they ask for money in order to research whether artificial intelligence is an existential risk, rather than asking for money to research a specific solution in order to save an intergalactic civilization.
A world in which I don't criticize Yudkowsky is a world in which he does not make claims such as that if you don’t sign up your kids for cryonics then you are a lousy parent.
A world in which I don't criticize CFAR/LW is a world in which they teach people to be extremely skeptical of back-of-the-envelope calculations, a world in which they tell people to strongly discount claims that cannot be readily tested.
I speculate that Yudkowsky has narcissistic tendencies. Call it armchair psychoanalysis if you like, but I think there is enough evidence to warrant such speculations.
I call it an ignoble personal attack which has no place on this forum.
Sorry. It wasn't meant as an attack, just something that came to my mind reading the comment by Chris Hallquist.
My initial reply was based on the following comment by Yudkowsky:
And regarding narcissism, the definition is: "an inflated sense of one's own importance and a deep need for admiration."
See e.g. this conversation between Ben Goertzel and Eliezer Yudkowsky (note that MIRI was formerly known as SIAI):
Also see e.g. this comment by Yudkowsky:
...and from his post...
And this kind of attitude started early. See for example what he wrote in his early "biography":
Also see this video:
That's the dictionary definition. When throwing around accusations of mental pathology, though, it behooves one not to rely on pattern-matching to one-sentence definitions; it overestimates the prevalence of problems, suggests the wrong approaches to them, and tends to be considered rude.
Having a lot of ambition and an overly optimistic view of intelligence in general and one's own intelligence in particular doesn't make you a narcissist, or every fifteen-year-old nerd in the world would be a narcissist.
(That said, I'm not too impressed with Eliezer's reasons for moving to Facebook.)
I feel that a similar accusation could be used against anyone who feels that more is possible and, instead of whining, tries to win.
I am not an expert on narcissism (though I could be expert at it, heh), but seems to me that a typical narcissistic person would feel they deserve admiration without doing anything awesome. They probably wouldn't be able to work hard, for years. (But as I said, I am not an expert; there could be multiple types of narcissism.)
Thinking that one person is going to save the world, and you're him, qualifies as "an inflated sense of one's own importance", IMO.
First mistake: believing that one person will be saving the world. Second mistake: believing that there is likely only one person who can do it, and that he's that person.
To put the first quotation into some context, Eliezer argued that his combination of high SAT scores and spending a lot of effort in studying AI puts him in a unique position that can make a "difference between cracking the problem of intelligence in five years and cracking it in twenty-five". (Which could make a huge difference, if it saves Earth from destruction by nanotechnology, presumably coming during that interval...)
Of course, knowing that it was written in 2000, the five-years estimate was obviously wrong. And there is a Sequence about it, which explains that Friendly AI is more complicated than just any AI. (Which doesn't prove that the five-years estimate would be correct for any AI.)
Most people very seriously studying AI probably have high SATs too. High IQs. High lots of things. And some likely have other unique qualities and advantages that Eliezer doesn't.
Unique in some qualities doesn't mean uniquely capable of the task in some timeline.
My main objection is that until it's done, I don't think people are very justified in claims to know what it will take to get done, and therefore unjustified in claiming some particular person is best able to do it, even if he is best suited to pursue one particular approach to the problem.
Hence, I conclude he is overestimating his importance, per the definition. Not that I see it as some heinous crime. He's overconfident. So what? It seems to be an ingredient of high achievement. Better to be overconfident epistemologically than underconfident instrumentally.
Private overconfidence is harmless. Public overconfidence is how cults start.
Well, I'm sorry but when you dig up quotes of your opponent to demonstrate purported flaws in his character, it is a personal attack. I didn't expect to encounter this sort of thing in LessWrong. Given the number of upvotes your comment received, I can understand why Eliezer prefers Facebook.
Yudkowsky tells other people to get laid. He is asking the community to downvote certain people. He is calling people permanent idiots.
He is a forum moderator. He asks people for money. He wants to create the core of the future machine dictator that is supposed to rule the universe.
Given the above, I believe that remarks about his personality are warranted, and not attacks, if they are backed up by evidence (which I provided in other comments above).
But note that in my initial comment, which got this discussion started, I merely offered a guess as to why Yudkowsky might now prefer Facebook over LessWrong. Then a comment forced me to escalate this by providing further justification for that guess. Your comments further forced me to explain myself. Which resulted in a whole thread about Yudkowsky's personality.
Just curious: what else do you consider the big problems of CFAR (other than being associated with MIRI)?