None of these seem to reflect poorly on EY unless you would expect him to be able to predict that a journalist would write an incoherent, almost maximally inaccurate description of an event where he criticized an idea for being implausible and then banned its discussion for being off-topic/pointlessly disruptive to something like two people, or that his clearly written rationale for not releasing the transcripts of the AI box experiments would be interpreted as a recruiting tool for the only cult that requires no contributions to be a part of, doesn't promise its members salvation or supernatural powers, has no formal hierarchy, and is based on a central part of economics.
How did you conclude from Nate Soares saying that the tools to create AGI likely already exist that he wanted people to believe he knew how to construct one?
Why were none of these examples mentioned in the original discussion thread and comment section from which a lot of the quoted sections come?
Because he asked me to figure it out in a way that implied he already had a solution; the assignment wouldn't make sense if it were to locate a non-workable AGI design (as many AI researchers have done throughout the history of the field); that wouldn't at all prove that the pieces to make AGI are already out there. Also, there wouldn't be much reason to think that his sharing a non-workable AGI design with me would be dangerous.
I believe my previous post was low on detail partially due to traumatic conditioning making these things hard to write about.
What string of posts about behavior are you referring to?
The only remotely similar things I know of concern the management of Leverage Research (which doesn't seem related to rationalism at all outside of geographical proximity), and those only ever seem to have been discussed on LW in terms of criticism.
The only other example is one semi-recent thread where the author inferred coordinated malicious intent on the part of MIRI and the existence of self-described demons on extremely shaky grounds, none of which involves any “weird, abusive, and cultish behavior among some community leader rationalists”.
None of the arguments in this post seem to actually indict anything about MIRI or CFAR. The first claim, that CFAR/MIRI somehow motivated 4 suicides, provides no evidence that CFAR is unique in this regard or conducive to this kind of outcome, and it seems like a bizarre framing of events: stories about someone committing suicide out of suspicion over the post office's nefarious agenda generally aren't seen as an issue on the part of the postal service.
Additionally, the focus on Roko's Basilisk-esque "info hazards" as a part of ...
To my knowledge this claim seems to be almost entirely fabricated, as the only text in the original thread that is even vaguely reminiscent of it is a statement from Roko that “One might think that the possibility of CEV punishing people couldn’t possibly be taken seriously enough by anyone to actually motivate them. But in fact one person at SIAI was severely worried by this, to the point of having terrible nightmares, though ve wishes to remain anonymous.” That passage, besides describing the experience of a grand total of one person, only refers to anxiety resulting from an idea related to the premises of the thought experiment, not the thought experiment itself.
None of the points on this list about which things are bad, good, effective, etc. have any corroborating evidence that the things they address are actually real or that the solutions they describe are effective.
Moreover, as someone who knows next to nothing about the subject matter, all of the supposed positives on this list seem to paint an actively negative picture of the Twitter postrat community: a group of people with special needs for not acknowledging reality who only ever communicate using political dog whistles to whine about things on the inter...