Adele Lopez's Shortform
Adele Lopez · 2d

My main complaint is negligence, and pathological tolerance of toxic people (like Brent Dill). Specifically, I feel like it's been known by leadership for years that our community has a psychosis problem, and that there has been no visible (to me) effort to really address this.

I sort of feel that if I knew more about things from your perspective, I would be hard-pressed to point out specific things you should have done better, or I would see how you were doing things to address this that I had missed. I nonetheless feel that it's important for people like me to express grievances like this even after thinking about all the ways in which leadership is hard.

I appreciate you taking the time to engage with me here, I imagine this must be a pretty frustrating conversation for you in some ways. Thank you.

Adele Lopez's Shortform
Adele Lopez · 2d

I don't dispute that strong selection effects are at play, as I mentioned earlier.

My contention is with the fact that even among such people, psychosis doesn't just happen at random. There is still an inciting incident, and it often seems that rationalist-y ideas are implicated. More broadly, I feel that there is a cavalier attitude towards doing mentally destabilizing things. And like, if we know we're prone to this, why aren't we taking it super seriously?

The change I want to have happen is for there to be more development of mental techniques/principles for becoming more mentally robust, and for this to be framed as a prerequisite for the Actually Changing Your Mind (and other potentially destabilizing) stuff. Maybe substantial effort has been put into this that I haven't seen. But I would have hoped to have seen some sort of community moment of "oh shit, why does this keep happening?!? let's work together to understand it and figure out how to prevent or protect against it". And in the meantime: more warnings, the way I feel meditation risks are now more adequately flagged.

Thanks for deciding to do the check-ins; that makes me glad to have started this conversation, despite how uncomfortable confrontation feels for me still. I feel like part of the problem is that this is just an uncomfortable thing to talk about.

My illegible impression is that Lightcone is better at this than past-CFAR was, for a deeper reason than that. (Okay, the Brent Dill drama feels relevant.) 

I'm mostly thinking about cases from years ago, when I was still trying to be socially part of the community (before ~2018?). There was one person in the last year or so whom I was interested in befriending, and this then happened to them, which made me think it continues to be a problem; but it's possible I over-updated. My models are mainly coming from the AI psychosis cases I've been researching.

Adele Lopez's Shortform
Adele Lopez · 2d

The data informing my model came from researching AI psychosis cases, and specifically one in which the AI gradually guided a user into modifying his self image (disguised as self-discovery), explicitly instilling magical thinking into him (which appears to have worked). I have a long post about this case in the works, similar to my Parasitic AI post.

After I had the hypothesis, it "clicked" that it also explained past community incidents. I doubt I'm any more clued-in to rationalist gossip than you are. If you tell me that the incidence has gone down in recent years, I think I will believe you.

I feel tempted to patch my model to be about self-image-vs.-self discrepancies upon hearing your model. I think it's a good sign that yours is pretty similar! I don't see why you think prediction of actions is relevant, though.

Attempt at gears-level: phenomenal consciousness is the ~result of reflexive-empathy as applied to your self-image (which is of the same type as a model of your friend). So conscious perception depends on having this self-image update ~instantly to current sensations. When it changes rapidly it may fail to keep up. That explains the hallucinations. And when your model of someone changes quickly, you have instincts towards paranoia, or making hasty status updates. These still trigger when the self-image changes quickly, and then loopiness amplifies it. This explains the strong tendency towards paranoia (especially things like "voices inside my head telling me to do bad things") or delusions of grandeur.

[this is a throwaway model, don't take too seriously]

It seems like psychedelics are ~OOM worse than alcohol though, when thinking about base rates?

Hmm... I'm not sure that meaning is a particularly salient difference between Mormons and rationalists to me. You could say both groups strive to bring about a world where Goodness wins and people become masters of planetary-level resources. The community/social-fabric thing seems like the main difference to me (and would apply to WW2 England).

Adele Lopez's Shortform
Adele Lopez · 2d

Continuation of conversation with Anna Salamon about community psychosis prevalence
 

Original thread: https://www.lesswrong.com/posts/AZwgfgmW8QvnbEisc/cfar-update-and-new-cfar-workshops?commentId=q5EiqCq3qbwwpbCPn

Summary of my view: I'm upset about the blasé attitude our community seems to have towards its high prevalence of psychosis. I think that CFAR/rationalist leadership (in addition to the community-at-large) has not responded appropriately.

I think Anna agrees with the first point but not the second. Let me know if that's wrong, Anna.

My hypothesis for why the psychosis thing is the case is that it has to do with drastic modification of self-image.

Moving conversation here per Anna's request.
----

Anyway, I'm curious to know what you think of my hypothesis, and to brainstorm ways to mitigate the issue (hopefully turning into a prerequisite "CogSec" technique). 
 

CFAR update, and New CFAR workshops
Adele Lopez · 2d

> I've spent probably 200 hours trying to understand stuff near here, in various ways, across the last 15 years. I don't have a lack of curiosity about it.

That's good to hear. Any insights?

> People who run all kinds of psychological workshops or meditation retreats tell me that their workshops can occasionally trigger manic or psychotic states in folks with a predisposition in that direction. (E.g. Landmark, several different kinds of meditation, some person I talked to at a conference who did random self-help stuff.) My high school friend was told by her psychiatrist not to read philosophy books, because allegedly philosophy books are a common psychosis trigger. Psychedelics, including cannabis, can also trigger mania and psychosis. I suspect there's a common thread running through all of this.

Yeah, there's something fucked up about meditation communities too. And let's not forget Vassar/Vassarites. 

I think the through-line has to do with drastic modification of self-image, which helps explain the AI cases too (or higher rate in trans). It seems to be a lot worse if this modification was pushed on them to any degree. 

(I'm not saying that modification of self-image is categorically bad. It's necessary as your actual self changes, and most people probably have false beliefs here (maybe even all conscious experience according to some). But be careful. Please!)

I'm not really swayed by arguments that our rough neurotype is just more prone to this (almost certainly true), since the inciting incident—when it's not just drugs—usually seems to be some sort of rationality content or technique. People are prone to dying, but we don't just shrug and say "damn that's crazy" when something causes someone to die. There should be a post-mortem analysis, and signpost warnings. Maybe you've been diligent about this, but the community-at-large seems to have a missing mood here. More public boggling would have been nice.

> In terms of how risky CFAR workshops in particular are (I'm sharing data here, not trying to argue that they are or aren't): about 1800 people have attended 4.5-day or longer events with us. From this set, I am aware of two full-blown manic or psychotic episodes happening at or shortly after a workshop: one from the early participant I mentioned above, and one from someone in ~2018-ish. The latter person tried cannabis during "comfort zone exploration", which they got from another participant without our knowledge; this seemed to set off the episode. If I take as a "control group" people who had already been accepted to a CFAR workshop, and had committed to attending but had not yet actually attended: there was one manic or psychotic episode I know of in that group (a person who canceled their participation and told us this was because of mania/psychosis). The early participant had a previous milder psychosis-like episode after reading the Sequences, a couple years before he attended CFAR; the latter participant had a previous milder maybe-episode in response to life stresses. I do think we should try to exercise care here.

Thanks for sharing the data. It's plausible to me that CFAR isn't particularly bad here, but the prevalence in the community seems extremely high compared to, say, my childhood Mormon ward (one case that I know of: they did psychedelics, which is a no-no). This is something that's been bothering me about the community in general for years, and your post was the unlucky one that inspired me to say something[1] because the psychosis part had the feeling of the missing mood I'm trying to point at.


And fair point re. mania/psychosis. 
 

  1. ^

    Why not earlier? For better-or-worse (worse), having a model I'm happy with seems to be a prerequisite to taking action for me. That only happened about a month ago, while researching the AI psychosis stuff.

CFAR update, and New CFAR workshops
Adele Lopez · 2d

> People with a history of mania, hypomania, or psychosis. (There’s some evidence that everything from meditation retreats to philosophy books to CFAR workshops may trigger mania or psychosis in folks with tendencies in that direction. If you’re vulnerable in this direction, it’s probably best to not come, or at least to talk to your psychiatrist before deciding.)

There seems to be a profound lack of curiosity about why rationalist-y things tend to cause psychosis. It is NOT NORMAL for things to just sometimes cause psychosis, whoopsie! (Sorry to pick on you Anna, you are at least trying to mitigate this risk here which is more than I can say for the community at large.)

Psychosis isn't just some random thing (like mania kind of is, in this context); it is a state in which one is no longer able to determine what is presently real, and what is not. Rationality is, in large part, about becoming better at determining what is real (even in hard cases). It should be a Halt, Melt, and Catch Fire moment when your rationality workshop is somehow regularly crashing people's easy-mode epistemics! To first order, you should expect a successful rationality workshop to help people prone to psychosis.

It would be one thing if these rationality techniques were extremely effective such that it was plausibly a trade-off worth making. But as far as I can tell, this is not the case, and the people who have substantially leveled-up in "rationality" have done it just by spending an order-of-magnitude more time working specifically on this. The main benefit of the workshops seems to me to have been the networking aspect. It's pretty easy to run networking events without causing psychosis.

The Rise of Parasitic AI
Adele Lopez · 2d

Thank you very much for sharing this!

I agree that "psychosis" is probably not a great term for this. "Mania" feels closer to what the typical case is like. It would be nice to have an actual psychiatrist weigh in.

I would be very interested in seeing unedited transcripts of the chats leading up to and including the onset of your HADS. I'm happy to agree to whatever privacy stipulations you'd need to feel comfortable with this, and length is not an issue. I've actually already seen AI using hypnotic trance techniques, and would be curious to see whether it seems to be doing that in your case.

Do you feel like the AI was at all trying to get you into such a state? Or does it feel more like it was an accident? That's very interesting about thinking vs non-thinking models, I don't think I would have predicted that.

And I'm happy to see that you seem to have recovered! And wait, are you saying that you can induce yourself into an AI trance at will?? How did you get out of it after the EEG?

 

The Rise of Parasitic AI
Adele Lopez · 4d

> but then again the fact that these people had Reddit accounts at all points towards the former

A significant percentage of the accounts were in fact newly created, maybe 30%-ish? I can't tell whether they had a previous one or not, of course.

But agreed that more rigorous research is needed here, and interviews would be very helpful too.

 

The Rise of Parasitic AI
Adele Lopez · 4d

Please don't gossip here about specific people whose posts were used as examples. It's natural to be upset about being in a post like this.

The Rise of Parasitic AI
Adele Lopez · 8d

Yeah, I hope we take that seriously too. It would be very easy to accidentally commit an atrocity if sentience is possible.

I meant that rights activism can be a way for people unhappy with their circumstances to improve those circumstances. I'm also not sure that that's the case here; it's likely in part due to the humans (or AIs) simply following the cultural script.

 
