“5-year-old in a hot 20-year-old’s body.”
40ish startup founder in the rationalist sphere, because he had a close connection to Peter Thiel. At dinner the man bragged that Yudkowsky had modeled a core HPMOR professor on him.
To me, two of the stories look like they are about the same person, and that person has been banned from multiple rationalist spaces, a fact the journalist apparently didn't consider important enough to mention.
Yeah, this seems very likely to be about Michael Vassar. Also, HPMOR spoiler:
I also think the "bragging" here is quite awkward, since having literal Voldemort modeled on you is generally not a compliment. And I wouldn't assume the "bragging" straightforwardly occurred as described.
FWIW, I'm a female AI alignment researcher and I never experienced anything even remotely adjacent to sexual misconduct in this community. (To be fair, it might be because I'm not young and attractive; more likely the Bloomberg article is just extremely biased.)
Damning allegations, but I expect this forum to respond with minimization and denial.
One quoted section is about Jessica Taylor's post on LW, which was controversial but taken seriously. (I read a draft immediately before it was posted and encouraged her to post it on LW.) Is that minimization or denial?
Out of the other quoted sections (I'm not going to click through), allegations are only made against one named person: Brent Dill. We took that seriously at the time, and I later banned him from LessWrong. Is that minimization or denial?
To be clear, I didn't ban him directly for the allegations, but for related patterns of argumentation and misbehavior. I think the risks of online spaces are different from the risks of in-person spaces; like the original Oxford English Dictionary, I think Less Wrong the website should accept letters from murderers in asylums, even if those people shouldn't be allowed to walk the streets. I think it's good for in-person events and organizations to do their part to keep their local communities welcoming and safe, while it isn't the place of the whole internet to try to adjudicate those issues; we don't have enough context to litigate them in a fair and wise w...
A lot of the defenses here seem to be relying on the fact that one of the accused individuals was banned from several rationalist communities a long time ago. While this definitely should have been included in the article, I think the overall impression they are giving is misleading.
In 2020, the individual was invited to give a talk for an unofficial SSC online meetup (Scott Alexander was not involved, and does ban the guy from his events). The talk was announced on LessWrong with zero pushback, and went ahead.
Here is a comment from Anna Salamon two years ago, discussing him and stating that his ban from meetups should be lifted:
...I hereby apologize for the role I played in X's ostracism from the community, which AFAICT was both unjust and harmful to both the community and X. There's more to say here, and I don't yet know how to say it well. But the shortest version is that in the years leading up to my original comment X was criticizing me and many in the rationality and EA communities intensely, and, despite our alleged desire to aspire to rationality, I and I think many others did not like having our political foundations criticized/eroded, nor did I and I think various o
I personally think the current relationship the community has to Michael feels about right in terms of distance.
I also want to be very clear that I have not investigated the accusations against Michael and don't currently trust them hugely for a bunch of reasons, though they seem credible enough that I would totally investigate them if I thought that Michael would pose a risk to more people in the community if the accusations were true.
As it is, given the current level of distance, I don't see it as hugely my responsibility, or the rationality community's, to investigate them, though if I had more time and were less crunched, I might.
Several things can be true simultaneously:
To be clear, I'm not at all confident that all of the empirical claims above are true. But it seems that people are using the earlier points as an excuse to ignore the later ones.
I think you are suggesting that I am committing the fallacy of privileging the hypothesis...
No, I am accusing you of falling for a naked political trap. Internet accusations of pedophilia by DNC staffers are not made in good faith, and in fact the content of the accusation (Dems tend to be pedophiles) is selected to be maximally f(hard to disprove, disturbing). If the DNC took those reports seriously and started to "allocate resources toward the problem", it would first be a waste of resources, but second (and more importantly) it would lend credibility to the initial accusations no matter what their internal investigation found or what safeguards they put in place. There's no particular reason to believe the DNC contains a higher rate of sexual predators than e.g. a chess club, so the review itself is unwarranted and is an example of selective requirements.
In the DNC's case, no one actually expects them to be stupid enough to litigate the claim in public by going over every time someone connected to the DNC touched a child and debating whether or not it's a fair example. I think that's a plausible outcome for rationalists, though, who are not as famously sensible as DNC staffers.
Damning allegations, but I expect this forum to respond with minimization and denial.
Minimization and denial is appropriate when you're being slandered.
I think abuse issues in rationalist communities are worth discussing, but I don't think people who have been excluded from the community for years are a very productive starting point for such a discussion.
The minimization and denial among these comments is horrifying.
I am a female AI researcher. I come onto this forum for Neel Nanda's interpretability research, which has recently been fire. I've experienced abuse in these communities, which makes the reaction here all the more painful.
I don't want to come onto this forum anymore.
This is how women get driven out of AI.
It is appropriate to minimize things which are in fact minimal. The majority of these issues have been litigated (metaphorically) before. The fact that they are being brought up over and over again in media articles does not ipso facto mean that the incident has not been adequately dealt with. You can make the argument that these incidents are part of a larger culture problem, but you have to actually make the argument. We're all Bayesians here, so look at the base rates.
The one piece of new information which seems potentially important is the part where Sonia Joseph says, "he followed her home and insisted on staying over." I would like to see that incident looked into a bit more.
Strong upvote. As another female AI researcher: yeah, it's bad here, as it is everywhere to some degree.
To other commenters, especially ones hesitant to agree that there have been problems due to structural issues: claiming otherwise doesn't make this situation look better. The local network of human connections can only look good to the wider world by being precise about what problems have occurred and what actual knowledge and mechanisms can prevent them. You're not going to retain your looking-good points with the public by groveling about it, nor by claiming there's no issue; you'll retain them by actually considering the problem each time it comes up, discussing the previous discussions, etc. (though, of course, efficiently, according to taste; not everyone has to write huge braindumps like I often find myself wanting to). Nobody can tell you how to be a good person; just be one. Put your zines in the slot in the door and we'll copy them and print them out. But don't worry about making a fuss apologizing; make a fuss explaining a verifiable understanding.
Some local fragments of the social network are somewhat well protected by local emotional habits; but many...
It seems from your link like CFAR has taken responsibility, taken corrective action, and states how they’ll do everything in their power to avoid a similar abuse incident in the future.
I think in general the way to deal with abuse situations within an organization is to identify which authority should be taking appropriate disciplinary action regarding the abuser’s role and privileges. A failure to act there, like CFAR’s admitted process failure that they later corrected, would be concerning if we thought it was still happening.
If every abuse is being properly disciplined by the relevant organization, and the rate of abuse isn’t high compared to the base rate in the non-rationalist population, then the current situation isn’t a crisis - even if some instances of abuse unfortunately involve the perpetrator referencing rationality or EA concepts.
I think the healthy and compassionate response to this article would be to focus on addressing the harms victims have experienced. So I find myself disappointed by much of the voting and comment responses here.
I agree that the Bloomberg article doesn't acknowledge that most of the harms it lists were perpetrated by people who have already mostly been kicked out of the community, and that it uses some unfair framings. But I think the bigger issue is the harms experienced by women that may not have been addressed: unreported cases, and insu...
I read it, wish I hadn't. It's the usual thing with very large amounts of smart-sounding words and paragraphs, and a very small amount of thought being used to generate them.
This seems like a bad rule of thumb. If your social circle consists largely of people who have chosen to remain within the community, ignoring information from "outsiders" seems like a bad strategy for understanding issues with the community.
Damning allegations, but I expect this forum to respond with minimization and denial.
This is so spectacularly bad faith that it makes me think the reason you posted this is pretty purely malicious.
Out of all of the LessWrong and 'rationalist' "communities" that have existed, in how many did any of the alleged bad acts occur? One? Two?
Out of all of the LessWrong users and 'rationalists', how many have been accused of these alleged bad acts? Mostly one or two?
My having observed extremely similar dynamics about, e.g. sexual harassment, in se...
I read the first half and kind of zoned out. I wish that the author had shown any examples of communities lacking such problems, to contrast EA against.
How do you expect journalism to work? The author is trying to contribute one specific story, in detail. Readers have other experiences to compare and draw from. If this were an academic piece, I might be more sympathetic.
I feel confused by this argument.
The core thesis of the post seems to rely on the level of abuse in this community being substantially higher than in other communities (the last sentence seems to make that pretty explicit). I think if you want to compellingly argue for your thesis you should provide the best evidence you have for that thesis. Journalism commonly being full of fallacious reasoning doesn't mean that it's good or forgivable for journalism to reason fallaciously.
I do think journalists from time to time summarizing and distilling concrete data is good, but in that case people still clearly benefit if the data is presented in a relatively unbiased way that doesn't heavily distort the underlying truth or omit crucial pieces of information that the journalist very likely knew but that didn't fit their narrative. I think journalists not doing that is condemnable, and the resulting articles are rarely worth reading.
I don't think the core thesis is "the level of abuse in this community is substantially higher than in others". Even if we (very generously) just assumed that the level of abuse in this community was lower than that in most places, these incidents would still be very important to bring up and address.
When an abuse of power arises, the organisation or community in which it arises has roughly two possible approaches: clamping down on it or covering it up. The purpose of the first is to solve the problem; the purpose of the second is to maintain the reputation of the organisation. (How many of those Catholic Church child abuse stories were covered up out of worry about the reputational damage to the church?) By focusing on the relative abuse level, it seems like you are seeing these stories primarily as an attack on the reputation of your tribe ("A Blue abused someone? No he didn't, it's Green propaganda!"). Does it matter whether the number of children abused in the Catholic Church was higher than the number abused outside it?
If that is the case, then there is nothing wrong with that emotional response. If you feel a sense of community with a group and you yourself have...
Yeah, I might want to write a post that tries to actually outline the history of abuse that I am aware of, without doing weird rhetorical tricks or omitting information. I've recently been on a bit of a "let's just put everything out there in public" spree, and I would definitely much prefer for new people to be able to get an accurate sense of the risk of abuse and harm, which, to be clear, is definitely not zero and feels substantial enough that people should care about it.
I do think the primary reason why people haven't written up stuff in the past is exactly because they are worried their statements will get ripped out of context and used as ammunition in hit pieces like this, so I actually think articles like this make the problem worse, not better, though I am not confident of this, and the chain of indirect effects is reasonably long here.
A bit of searching brings me to https://elephantinthelab.org/sexual-harassment-in-academia/ :
Is Sexual Harassment in the Academy a Problem?
Yes. Research on sexual harassment in the academy suggests that it remains a prevalent problem. In a 2003 study examining incidences of sexual harassment in the workplace across private, public, academic, and military industries, Ilies et al. (2003) found academia to have the second highest rates of harassment, second only to the military. More recently, a report by the National Academies of Sciences, Engineering, and Medicine (NASEM) summarized the persistent problem of sexual harassment in academia with regard to faculty-student harassment, as well as faculty-faculty harassment. To find more evidence of this issue, one can also turn to Twitter, as Times Higher Education highlighted in their 2019 blog.
Another paper suggests:
In 2019, the Association of American Universities surveyed 33 prominent research universities and found 13% of all students experienced a form of sexual assault and 41.8% experienced sexual harassment (Cantor et al., 2020).
Mainstream academia is not free from sexual abuse.
It is. But if someone is saying "this group of people is notably bad" then it's worth asking whether they're actually worse than other broadly similar groups of people or not.
I think the article, at least to judge from the parts of it posted here, is arguing that rationalists and/or EAs are unusually bad. See e.g. the final paragraph about paperclip-maximizers.
Whether it matters what other broadly similar groups do depends on what you're concerned with and why.
If you're, say, a staff member at an EA organization, then presumably you are trying to do the best you could plausibly do, and in that case the only significance of those other groups would be that if you have some idea how hard they are trying to do the best they can, it may give you some idea of what you can realistically hope to achieve. ("Group X has such-and-such a rate of sexual misconduct incidents, but I know they aren't really trying hard; we've got to do much better than that." "Group Y has such-and-such a rate of sexual misconduct incidents, and I know that the people in charge are making heroic efforts; we probably can't do better.")
So for people in that situation, I think your point of view is just right. But:
If you're someone wondering whether you should avoid associating with rationalists or EAs for fear of being sexually harassed or assaulted, then you probably have some idea of how reluctant you are to associate with other groups (academics, Silicon Valley software engineers, ...) for similar reasons. If it turns out that rationalists or EAs are pretty much like t...
That article horrified me, but I was hoping the reactions would show empathy and a commitment to concrete improvement.
The opposite happened: the defensive, and at times dismissive or demanding, comments made it worse. It was the responses here and on the Effective Altruism forum that made me reassess EA-related groups as likely unsafe to work for.
This sounds like a systemic problem related to the way this community is structured, and the community response seems aimed not at fixing the problem but at justifying why it isn't getting fixed, abusing rationality to frame abuse as normal and inevitable.
At dinner the man bragged that Yudkowsky had modeled a core HPMOR professor on him.
Like... an actual evil amoral BBEG wizard? Is this something true rationalists now brag about?
Just because someone uses the rationality toolset doesn't make them a role model :(
It didn't have to be revealed. That Quirrell was Voldemort was obvious almost from the first chapter introducing him (e.g., already taken for granted in the earliest top-level discussion page in 2010), to the extent that fan debate over 'Quirrellmort' was mostly "he's so obviously Voldemort, just like in canon, but surely Yudkowsky would never make it that easy, so could he possibly be someone or something else? An impostor? Some sort of diary-like upload? A merger of Quirrell/Voldemort? Harry from the future?" Bragging about being the inspiration for a character so nakedly amoral, at best, that the defense in fan debates is "he's too evil to be Voldemort!" is not a great look no matter when in the series he was doing that bragging.
Try the non-paywalled link here.
Damning allegations, but I expect this forum to respond with minimization and denial.
A few quotes: