Thanks - appreciate the upvote and encouragement to discuss. I'll take this opportunity to point out some observations about rationalist communities:
The response from Habryka points out several factual inaccuracies, but I don't see anything that directly refutes the core issue the article raises. I recognize that engaging with the substance of the allegations might be awkward and difficult, and wouldn't constitute "winning" in the rationalist sense.
My experience and observations of the rationalist community resonate completely with this section:
Daniel HoSang, a professor of American studies at Yale University and a part of the Anti-Eugenics Collective at Yale, said: “The ties between a sector of Silicon Valley investors, effective altruism and a kind of neo-eugenics are subtle but unmistakable. They converge around a belief that nearly everything in society can be reduced to markets and all people can be regarded as bundles of human capital.”
HoSang added: “From there, they anoint themselves the elite managers of these forces, investing in the ‘winners’ as they see fit.”
“The presence of Stephen Hsu here is particularly alarming,” HoSang concluded. “He’s often been a bridge between fairly explicit racist and antisemitic people like Ron Unz, Steven Sailer and Stefan Molyneux and more mainstream figures in tech, investment and scientific research, especially around human genetics.”
Beyond the predictable focus on dismissing the article on the grounds that the reporter acted in "bad faith," I'd be curious to see whether there is any framing whatsoever that would facilitate, or even encourage, some community-wide introspection.
At least one of them has explicitly indicated that they left because of AI safety concerns, and this thread seems to insinuate similar concern - Ilya Sutskever's conspicuous silence has become a meme, and Altman recently expressed uncertainty about Ilya's employment status. There still hasn't been any explanation for the boardroom drama last year.
If it were indeed run-of-the-mill office politics and all was well, then something to the effect of "our departures were unrelated; don't be so anxious about the world ending; we didn't see anything alarming at OpenAI" would obviously help a lot of people and also be a huge vote of confidence for OpenAI.
It seems more likely that there is some (vague?) concern but it's been overridden by tremendous legal/financial/peer motivations.
I've been thinking about these allegations often in the context of Altman's firing circus a few months ago. I've known multiple people who suffered early childhood abuse/sexual trauma, and even dated one for a few tumultuous years a decade ago. I had a perfectly normal, happy childhood myself, and eventually came to learn that the disconnect between who they were most of the time versus in times of high stress was tremendously unintuitive (and initially intriguing) for me. It also seemed to facilitate a certain meticulousness in duplicity and compartmentalization: presenting the required image and confidently saying whatever needed to be said, which often yielded great success in many situations.
Elon Musk, as another example, has been quite public about his difficult childhood - and how it might have helped him professionally, and there is ample corroboration for this. There are also definite allusions to some psycho-sexual aspects.
I cannot help but see patterns of Extreme Disconnection with Sam, and consequently with OpenAI. There seems to be a clear division between people who are on his side and people who aren't. He was quite literally fired for not being candid with OpenAI's board, and his initial reaction was completely contradictory to the tone and messaging of "benefit for all mankind." The (mostly) seamless transition from a relentlessly vocalized emphasis on the "open," benevolent non-profit with an all-powerful board to whatever OpenAI is now; the selective silence of the board, and especially Ilya Sutskever, presumably in the face of legal and financial muscle-flexing; Geoffrey Irving's tweet - all of it seems to speak to this idea of a world in which many well-meaning, intelligent people who have never been in actual conflict with him, and who have massive aligned incentives, would readily believe him to be a certain kind of "good" person X, and would never extrapolate him to be a kind of "bad" person Y - not accounting for the unconscious-level disconnection that undergirds this.
I guess I'm wondering whether I'm being unreasonably concerned about this with regard to the "future of humanity," or just projecting my own biases and experiences.
I see a response to my reply above saying "This seems to misunderstand the thing that it argues against." I wasn't arguing against anything specific - this was my attempt to understand why rationalists repeatedly fall into this pattern, but I must have missed something.
I spent a few difficult hours today reading through the discussion on the Manifest allegations on the EA forum and Twitter (figured it's an appropriate way to spend Juneteenth) and my thoughts have converged to this tweet by Shakeel.
I'm done with reading or posting on LW (like I mentioned, I've had past in-person experience in this realm), but I'm leaving this suggestion here for any person of color or anyone who is firmly opposed to racism trying to disambiguate the extent of racism in the rationalist community - RUN!