It seems to me that over the past 1 or 2 months a lot of rationalists (N > 7.5) in Berkeley/Oakland have been acting pretty bizarrely (based on my observation of their online behavior). There are three main areas of concern:

  1. An increase in expressed interest in Buddhism/spiritualism/meditation/mindfulness

  2. AI Safety researchers (and associates) talking about really stupid topics

  3. Increased hypomanic behavior and quackery among some people

I don't want to be more specific with #2 and #3 (no identifying information), but I think many people have seen at least #1 occurring.

I don't actually have any issues with #1 by itself (I am interested in these sorts of things), but I'm wondering why it's spreading more over the past 1 or 2 months.

Anyway, #2 and #3 are somewhat concerning, and I'm wondering if there's any overlap with #1.

Sorry for being vague and not providing examples of #2 and #3.

13 comments
[-]ö90

The vagueness of your observations makes your question completely meaningless.

I am wondering if there are other people on LW who have noticed these things. I expect most people reading the OP to either not know what I am talking about, or to have a guess at what I am referring to, but disagree that these things are concerning.

[-]Elo80

I am one of the targets of this description. AMA!

But more seriously: thank you for the concern; we (the people in this cluster) appreciate it. We are fine over here in the "temporarily extra weird looking" place. We are researching and investigating a rather specific experience over here. If we find the good things, we will bring them back. If not, it will look like a phase.

Personally, it feels like there is no going back, and it's worth more to keep going forward, even if it confuses a few people, than to go back. I can't officially speak for anyone else, other than to say: I'm not mad. Things are better despite occasionally looking worse. I'll get back to "you" (generally) with progress soon.

Also worth noting that: A. I am not "your" problem. B. I didn't necessarily consent to your concerns. C. I didn't necessarily sign up to placate your concerns either. Your being upset isn't necessarily my problem, but I'm doing a kindness here by responding. I recognise this very bluntly these days. (emotions, El, asking, giving, needs, etc.)

I (and we, I am guessing) would appreciate your trust in our ability to progress (as opposed to doubt). Be skeptical but curious (as opposed to skeptical and dismissive).

Also worth noting that: A. I am not “your” problem. B. I didn’t necessarily consent to your concerns. C. I didn’t necessarily sign up to placate your concerns either. Your being upset isn’t necessarily my problem, but I’m doing a kindness here by responding. I recognise this very bluntly these days. (emotions, El, asking, giving, needs, etc.)

I am not OP, but:

Your response seems to assume that the OP’s concerns are about you. With respect, this does not seem like a justified assumption, given the nature (pseudonymous and public, instead of direct and personal) of the OP’s expression of concern.

An analogy to illustrate my point:

Carol: Lately, a bunch of folks in our community have been going around and keying people’s cars. Sometimes they also spray-paint obscene words on the car.

Dave: I am one of the people you’re talking about, and I am not your problem! I didn’t consent to your concern. You being upset over me keying and spray-painting people’s cars is not my problem, and I am not responsible for placating your concern about this.

An odd response, on Dave’s part, no? Certainly, there is a sense in which Carol, out of concern for Dave, is alarmed that Dave would behave in this way; and she may think that Dave would be better off if he discontinued this behavior. But that can hardly be called the primary motivation behind Carol’s expression of concern.

The main problem that concerns Carol is the effects that Dave’s actions have other than on Dave himself.

[-]Elo10

With respect: aside from being part of the Bay Area geography, this post is about me, as a person, exploring Buddhism and doing weird woo and mystical things.

Equally, your response is an example of what you seem to be complaining about. The difference is that I responded to the central post; you are dragging me into a distraction.

What do you mean with "stupid topics"? Are you talking about topics like the simulation hypothesis?

Simulation hypothesis stuff is on-topic for AI safety. By "stupid topics" I mean certain low-quality, off-topic subjects, on the level of analyzing an NFL match. I think some people know which topics I am talking about (though probably most people reading my OP don't).

I don't see why AI safety researchers should only talk about topics that are on-topic. It's fine for humans to talk about lots of things, and a norm that people in our community shouldn't discuss off-topic subjects like the analysis of NFL matches feels culty.

I'm curious who the half is and why. Is it that they are half a rationalist? Half (the time?) in Berkeley? (If it is not half the time then where is the other half?)

Also. The N should be equal to the cardinality of the entire set of rationalists you interacted with, not just of those who are going insane; so, if you have been interacting with seven and a half rationalists in total, how many of those are diving into the woo? Or if you have been interacting with dozens of rationalists, how many times more people were they than 7.5?

If you think someone is hypomanic, or in the process of falling for quackery, or otherwise having something that looks like it is or could develop into a mental health issue, I'd recommend letting them know directly, or bringing it up in a 1:1 conversation with a mutual friend who might have some visibility into the situation. Publicly saying "I see this happening in unspecified anonymous people" is just going to make people nervous and defensive, without enabling any discussion or action.

“[L]etting them know directly” doesn’t scale, at all; is difficult (in several different ways); requires social capital; is damaging to said social capital; requires a personal relationship with a person; and, critically, doesn’t address root causes or do much to prevent the spread of memetic infection. It is a total non-starter as a solution.

If you don't want to have the conversations yourself and don't know who to talk to about it, consider reaching out to one of the members of the REACH Panel.

Buddhism and Eastern mysticism have been gaining in popularity lately, so more people in the rationalist space are likely to adopt them too (it's also the West Coast). Also, my initial thought is that Buddhism, spirituality, etc. are not exactly against rationalist ideas. I see no concern.