Try the non-paywalled link here.
Damning allegations, but I expect this forum to respond with minimization and denial.
A few quotes:
At the same time, she started to pick up weird vibes. One rationalist man introduced her to another as “perfect ratbait”—rat as in rationalist. She heard stories of sexual misconduct involving male leaders in the scene, but when she asked around, her peers waved the allegations off as minor character flaws unimportant when measured against the threat of an AI apocalypse. Eventually, she began dating an AI researcher in the community. She alleges that he committed sexual misconduct against her, and she filed a report with the San Francisco police. (Like many women in her position, she asked that the man not be named, to shield herself from possible retaliation.) Her allegations polarized the community, she says, and people questioned her mental health as a way to discredit her. Eventually she moved to Canada, where she’s continuing her work in AI and trying to foster a healthier research environment.
Of the subgroups in this scene, effective altruism had by far the most mainstream cachet and billionaire donors behind it, so that shift meant real money and acceptance. In 2016, Holden Karnofsky, then the co-chief executive officer of Open Philanthropy, an EA nonprofit funded by Facebook co-founder Dustin Moskovitz, wrote a blog post explaining his new zeal to prevent AI doomsday. In the following years, Open Philanthropy’s grants for longtermist causes rose from $2 million in 2015 to more than $100 million in 2021.
Open Philanthropy gave $7.7 million to MIRI in 2019, and Buterin gave $5 million worth of cash and crypto. But other individual donors were soon dwarfed by Bankman-Fried, a longtime EA who created the crypto trading platform FTX and became a billionaire in 2021. Before Bankman-Fried’s fortune evaporated last year, he’d convened a group of leading EAs to run his $100-million-a-year Future Fund for longtermist causes.
Even leading EAs have doubts about the shift toward AI. Larissa Hesketh-Rowe, chief operating officer at Leverage Research and the former CEO of the Centre for Effective Altruism, says she was never clear how someone could tell their work was making AI safer. When high-status people in the community said AI risk was a vital research area, others deferred, she says. “No one thinks it explicitly, but you’ll be drawn to agree with the people who, if you agree with them, you’ll be in the cool kids group,” she says. “If you didn’t get it, you weren’t smart enough, or you weren’t good enough.” Hesketh-Rowe, who left her job in 2019, has since become disillusioned with EA and believes the community is engaged in a kind of herd mentality.
In extreme pockets of the rationality community, AI researchers believed their apocalypse-related stress was contributing to psychotic breaks. MIRI employee Jessica Taylor had a job that sometimes involved “imagining extreme AI torture scenarios,” as she described it in a post on LessWrong—the worst possible suffering AI might be able to inflict on people. At work, she says, she and a small team of researchers believed “we might make God, but we might mess up and destroy everything.” In 2017 she was hospitalized for three weeks with delusions that she was “intrinsically evil” and “had destroyed significant parts of the world with my demonic powers,” she wrote in her post. Although she acknowledged taking psychedelics for therapeutic reasons, she also attributed the delusions to her job’s blurring of nightmare scenarios and real life. “In an ordinary patient, having fantasies about being the devil is considered megalomania,” she wrote. “Here the idea naturally followed from my day-to-day social environment and was central to my psychotic breakdown.”
Taylor’s experience wasn’t an isolated incident. It encapsulates the cultural motifs of some rationalists, who often gathered around MIRI or CFAR employees, lived together, and obsessively pushed the edges of social norms, truth and even conscious thought. They referred to outsiders as normies and NPCs, or non-player characters, as in the tertiary townsfolk in a video game who have only a couple things to say and don’t feature in the plot. At house parties, they spent time “debugging” each other, engaging in a confrontational style of interrogation that would supposedly yield more rational thoughts. Sometimes, to probe further, they experimented with psychedelics and tried “jailbreaking” their minds, to crack open their consciousness and make them more influential, or “agentic.” Several people in Taylor’s sphere had similar psychotic episodes. One died by suicide in 2018 and another in 2021.
Within the group, there was an unspoken sense of being the chosen people smart enough to see the truth and save the world, of being “cosmically significant,” says Qiaochu Yuan, a former rationalist.
Yuan started hanging out with the rationalists in 2013 as a math Ph.D. candidate at the University of California at Berkeley. Once he started sincerely entertaining the idea that AI could wipe out humanity in 20 years, he dropped out of school, abandoned the idea of retirement planning, and drifted away from old friends who weren’t dedicating their every waking moment to averting global annihilation. “You can really manipulate people into doing all sorts of crazy stuff if you can convince them that this is how you can help prevent the end of the world,” he says. “Once you get into that frame, it really distorts your ability to care about anything else.”
That inability to care was most apparent when it came to the alleged mistreatment of women in the community, as opportunists used the prospect of impending doom to excuse vile acts of abuse. Within the subculture of rationalists, EAs and AI safety researchers, sexual harassment and abuse are distressingly common, according to interviews with eight women at all levels of the community. Many young, ambitious women described a similar trajectory: They were initially drawn in by the ideas, then became immersed in the social scene. Often that meant attending parties at EA or rationalist group houses or getting added to jargon-filled Facebook Messenger chat groups with hundreds of like-minded people.
The eight women say casual misogyny threaded through the scene. On the low end, Bryk, the rationalist-adjacent writer, says a prominent rationalist once told her condescendingly that she was a “5-year-old in a hot 20-year-old’s body.” Relationships with much older men were common, as was polyamory. Neither is inherently harmful, but several women say those norms became tools to help influential older men get more partners. Keerthana Gopalakrishnan, an AI researcher at Google Brain in her late 20s, attended EA meetups where she was hit on by partnered men who lectured her on how monogamy was outdated and nonmonogamy more evolved. “If you’re a reasonably attractive woman entering an EA community, you get a ton of sexual requests to join polycules, often from poly and partnered men” who are sometimes in positions of influence or are directly funding the movement, she wrote on an EA forum about her experiences. Her post was strongly downvoted, and she eventually removed it.
The community’s guiding precepts could be used to justify this kind of behavior. Many within it argued that rationality led to superior conclusions about the world and rendered the moral codes of NPCs obsolete. Sonia Joseph, the woman who moved to the Bay Area to pursue a career in AI, was encouraged when she was 22 to have dinner with a 40ish startup founder in the rationalist sphere, because he had a close connection to Peter Thiel. At dinner the man bragged that Yudkowsky had modeled a core HPMOR professor on him. Joseph says he also argued that it was normal for a 12-year-old girl to have sexual relationships with adult men and that such relationships were a noble way of transferring knowledge to a younger generation. Then, she says, he followed her home and insisted on staying over. She says he slept on the floor of her living room and that she felt unsafe until he left in the morning.
On the extreme end, five women, some of whom spoke on condition of anonymity because they fear retribution, say men in the community committed sexual assault or misconduct against them. In the aftermath, they say, they often had to deal with professional repercussions along with the emotional and social ones. The social scene overlapped heavily with the AI industry in the Bay Area, including founders, executives, investors and researchers. Women who reported sexual abuse, either to the police or community mediators, say they were branded as trouble and ostracized while the men were protected.
In 2018 two people accused Brent Dill, a rationalist who volunteered and worked for CFAR, of abusing them while they were in relationships with him. They were both 19, and he was about twice their age. Both partners said he used drugs and emotional manipulation to pressure them into extreme BDSM scenarios that went far beyond their comfort level. In response to the allegations, a CFAR committee circulated a summary of an investigation it conducted into earlier claims against Dill, which largely exculpated him. “He is aligned with CFAR’s goals and strategy and should be seen as an ally,” the committee wrote, calling him “an important community hub and driver” who “embodies a rare kind of agency and a sense of heroic responsibility.” (After an outcry, CFAR apologized for its “terribly inadequate” response, disbanded the committee and banned Dill from its events. Dill didn’t respond to requests for comment.)
Rochelle Shen, a startup founder who used to run a rationalist-adjacent group house, heard the same justification from a woman in the community who mediated a sexual misconduct allegation. The mediator repeatedly told Shen to keep the possible repercussions for the man in mind. “You don’t want to ruin his career,” Shen recalls her saying. “You want to think about the consequences for the community.”
One woman in the community, who asked not to be identified for fear of reprisals, says she was sexually abused by a prominent AI researcher. After she confronted him, she says, she had job offers rescinded and conference speaking gigs canceled and was disinvited from AI events. She says others in the community told her allegations of misconduct harmed the advancement of AI safety, and one person suggested an agentic option would be to kill herself.
For some of the women who allege abuse within the community, the most devastating part is the disillusionment. Angela Pang, a 28-year-old who got to know rationalists through posts on Quora, remembers the joy she felt when she discovered a community that thought about the world the same way she did. She’d been experimenting with a vegan diet to reduce animal suffering, and she quickly connected with effective altruism’s ideas about optimization. She says she was assaulted by someone in the community who at first acknowledged having done wrong but later denied it. That backpedaling left her feeling doubly violated. “Everyone believed me, but them believing it wasn’t enough,” she says. “You need people who care a lot about abuse.” Pang grew up in a violent household; she says she once witnessed an incident of domestic violence involving her family in the grocery store. Onlookers stared but continued their shopping. This, she says, felt much the same.
The paper clip maximizer, as it’s called, is a potent meme about the pitfalls of maniacal fixation.
Every AI safety researcher knows about the paper clip maximizer. Few seem to grasp the ways this subculture is mimicking that tunnel vision. As AI becomes more powerful, the stakes will only feel higher to those obsessed with their self-assigned quest to keep it under rein. The collateral damage that’s already occurred won’t matter. They’ll be thinking only of their own kind of paper clip: saving the world.
Whether it matters what other broadly similar groups do depends on what you're concerned with and why.
If you're, say, a staff member at an EA organization, then presumably you are trying to do the best you plausibly can, and in that case the only significance of those other groups is that knowing how hard they are trying gives you some sense of what you can realistically hope to achieve. ("Group X has such-and-such a rate of sexual misconduct incidents, but I know they aren't really trying hard; we've got to do much better than that." "Group Y has such-and-such a rate of sexual misconduct incidents, and I know that the people in charge are making heroic efforts; we probably can't do better.")
So for people in that situation, I think your point of view is just right. But:
If you're someone wondering whether you should avoid associating with rationalists or EAs for fear of being sexually harassed or assaulted, then you probably have some idea of how reluctant you are to associate with other groups (academics, Silicon Valley software engineers, ...) for similar reasons. If it turns out that rationalists or EAs are pretty much like those, then you should be about as scared of rationalists as you are of them, regardless of whether rationalists should or could have done better.
If you're a Less Wrong reader wondering whether these are Awful People that you've been associating with and you should be questioning your judgement in thinking otherwise, then again you probably have some idea of how Awful some other similar groups are. If it turns out that rationalists are pretty much like academics or software engineers, then you should feel about as bad for failing to shun them as you would for failing to shun academics or software engineers.
If you're a random person reading a Bloomberg News article, and wondering whether you should start thinking of "rationalist" and "effective altruist" as warning signs in the same way as you might think of some other terms that I won't specify for fear of irrelevant controversy, then once again you should be calibrating your outrage against how you feel about other groups.
For the avoidance of doubt, I should say that I don't know how the rate of sexual misconduct among rationalists / EAs / Silicon Valley rationalists in particular / ... compares with the rate in other groups, nor do I have a very good idea of how high it is in other similar groups. It could be that the rate among rationalists is exceptionally high (as the Bloomberg News article is clearly trying to make us think). It could be that it's comparable to the rate among, say, Silicon Valley software engineers and that that rate is horrifyingly high (as plenty of other news articles would have us think). It could be that actually rationalists aren't much different from any other group with a lot of geeky men in it, and that groups with a lot of geeky men in them are much less bad than journalists would have us believe. That last one is the way my prejudices lean ... but they would, wouldn't they? So I wouldn't put much weight on them.
[EDITED to add:] Oh, another specific situation one could be in that's relevant here: If you are contemplating Reasons Why Rationalists Are So Bad (cf. the final paragraph quoted in the OP here, which offers an explanation for that), it is highly relevant whether rationalists are in fact unusually bad. If rationalists or EAs are just like whatever population they're mostly drawn from, then it doesn't make sense to look for explanations of their badness in rationalist/EA-specific causes like alleged tunnel vision about AI.
[EDITED again to add:] To whatever extent the EA community and/or the rationalist community claims to be better than others, of course it is fair to hold them to a higher standard, and to take any failure to meet it as evidence against that claim. (Suppose it turns out that the rate of child sex abuse among Roman Catholic clergy is exactly the same as that in some reasonably chosen comparison group. Then you probably shouldn't see Roman Catholic clergy as super-bad, but you should take that as evidence against any claim that the Roman Catholic Church is the earthly manifestation of a divine being who is the source of all goodness and moral value, or that its clergy are particularly good people to look to for moral advice.) How far either EAs or rationalists can reasonably be held to be making such a claim seems like a complicated question.