All of bn22's Comments + Replies

None of the points on this list about what is bad, good, effective, etc. is backed by any corroborating evidence that the problems they address are generally real or that the solutions they describe are effective.

Moreover, as someone who knows next to nothing about the subject matter, all of the supposed positives on this list seem to paint an actively negative picture of the Twitter postrat community as a group of people with special needs for not acknowledging reality who only ever communicate using political dog whistles to whine about things on the internet […]

None of these seem to reflect on EY, unless you would expect him to be able to predict that a journalist would write an incoherent, almost maximally inaccurate description of an event in which he criticized an idea for being implausible and then banned its discussion for being off-topic/pointlessly disruptive to something like two people, or that his clearly written rationale for not releasing the transcripts of the AI-box experiments would be interpreted as a recruiting tool for the only cult that requires no contributions to be a part of, doesn't promise its members salvation or supernatural powers, has no formal hierarchy, and is based on a central part of economics.

8Yitz
I would not expect EY to have predicted that himself, given his background. If, however, he had either studied PR deeply or consulted with a domain expert before posting, then I would have fully expected that result to be predicted with some significant likelihood. Remember, optimally good rationalists should win, and be able to anticipate social dynamics. In this case EY fell into a social trap he didn't even know existed, so again, I do not blame him personally, but that does not negate the fact that he's historically not been very good at anticipating that sort of thing, due to lack of training/experience/intuition in that field. I'm fairly confident that, at least regarding the Roko's Basilisk disaster, I would have been able to predict something close to what actually happened if I had seen his comment before he posted it. (This would have been primarily due to pattern matching between the post and known instances of the Streisand effect, as well as some amount of hard-to-formally-explain intuition that EY's wording would invoke strong negative emotions in some groups, even if he hadn't taken any action. Studying "ratio'd" tweets can help give you a sense for this, if you want to practice that admittedly very niche skill.) I'm not saying this to imply that I'm a better rationalist than EY (I'm not), merely that EY, and the rationalist movement generally, hasn't focused on honing the skillset necessary to excel at PR, which has sometimes been to our collective detriment.

How did you conclude from Nate Soares saying that the tools to create AGI likely already exist that he wanted people to believe he knew how to construct one?

Why were none of these examples mentioned in the original discussion thread and comment section from which many of the quoted passages come?

3Thomas Kehrenberg
If someone told me to come up with an AGI design and that I already knew the parts, then I would strongly suspect that person was trying to get me to pull a Dantzig to find the solution, i.e. solve a hard problem precisely because I believed it was solvable. (Me thinking that would of course make it not really work.)
  1. Because he asked me to figure it out in a way that implied he already had a solution; the assignment wouldn't make sense if the goal were to produce a non-workable AGI design (as many AI researchers have done throughout the field's history), since that wouldn't at all prove that the pieces to make AGI are already out there. Also, there wouldn't be much reason to think that his sharing a non-workable AGI design with me would be dangerous.

  2. I believe my previous post was low on detail partially due to traumatic conditioning making these things hard to write about. […]

What string of posts about behavior are you referring to?

The only remotely similar things I know of concern the management of Leverage Research (which doesn't seem related to rationalism at all beyond geographical proximity), which only ever seems to have been discussed on LW in the form of criticism.

The only other example is one semi-recent thread where the author inferred coordinated malicious intent on MIRI's part and the existence of self-described demons from extremely shaky grounds of reasoning, none of which involve any "weird, abusive, and cultish behavior among some community leader rationalists".

-6frontier64
0ChristianKl
Given that there's no public explanation of why the word demon is used, and the potential infohazards involved in talking about that, there's little way from the outside to judge the grounds on which the word is used. There was research into paranormal phenomena that led to that point, and that research should be considered inherently risky and definitely under the label "weird". Whether or not the initiating research project is worthwhile is debatable, given that this kind of research can lead to interesting insights, but it's weird/risky.

People already implicitly consider your example to be acceptable, given that patients in vegetative states are held in conditions of isolation that would be considered torture if they were counterfactually conscious, and many people support being allowed to kill/euthanize such patients in cases like Terri Schiavo's.

None of the arguments in this post seem as if they actually indict anything about MIRI or CFAR. The first claim, that CFAR/MIRI somehow motivated four suicides, provides no evidence that CFAR is unique in this regard or conducive to this kind of outcome, and it seems like a bizarre framing of events, considering that stories about things like someone committing suicide out of suspicion over the post office's nefarious agenda generally aren't seen as an issue on the part of the postal service.

Additionally, the focus on Roko's Basilisk-esque "infohazards" as a part of […]

To my knowledge, this claim seems to be almost entirely fabricated, as the only text in the original thread even vaguely reminiscent of it is a claim from Roko that "One might think that the possibility of CEV punishing people couldn't possibly be taken seriously enough by anyone to actually motivate them. But in fact one person at SIAI was severely worried by this, to the point of having terrible nightmares, though ve wishes to remain anonymous." That claim, besides describing the experience of a grand total of one person, refers only to anxiety resulting from an idea related to the premises of the thought experiment, not from the thought experiment itself.