Comments

Really great post! 

It’s unclear how much human psychology can inform our understanding of AI motivations and relevant interventions, but it does seem relevant that spitefulness correlates highly (Moshagen et al., 2018, Table 8, N = 1,261) with several other “dark traits”, especially psychopathy (r = .74), sadism (r = .59), and Machiavellianism (r = .59).

(Moshagen et al. (2018) therefore suggest that “[...] dark traits are specific manifestations of a general, basic dispositional behavioral tendency [...] to maximize one’s individual utility— disregarding, accepting, or malevolently provoking disutility for others—, accompanied by beliefs that serve as justifications.”)

Plausibly there are (for instance, evolutionary) reasons why these traits correlate so strongly with each other, and perhaps better understanding them could inform interventions to reduce spite and other dark traits (cf. Lukas' comment).

If this is correct, we might suspect that AIs that exhibit spiteful preferences/behavior will also tend to exhibit other dark traits (and vice versa!), which may be action-guiding. (For example, interventions that make AIs less likely to be psychopathic, sadistic, Machiavellian, etc. would also make them less spiteful, at least in expectation.)

Great post, thanks for writing! 

Most of this matches my experience pretty well. I think I had my best ideas (and others seem to agree) during phases when I was unusually low on guilt- and obligation-driven EA/impact-focused motivation and was just playfully exploring ideas for fun and out of curiosity.

One problem with letting your research and ideas be guided by impact-focused thinking is that you train your mind to ask itself, after entertaining an idea for a few seconds, "well, is that actually impactful?". And almost all of the time, the answer is "well, probably not", which makes you disinclined to explore the neighboring idea space any further.

However, even really useful ideas and research angles usually start out unpromising, full of hurdles and problems, and in need of a lot of refinement. If you allow yourself to just explore idea space for fun, you might overcome these problems and stumble on something truly promising. But in an "obsessing about maximizing impact" mindset, you would give up too soon because spending hours or even days without any apparent impact feels too terrible to keep going.

Thanks for this post; I thought it was useful.

> I needed a writing buddy to pick up the momentum to actually write it

I'd be interested in knowing more about how this worked in practice (no worries if you don't feel like elaborating or don't have the time!).

> I think mostly I expect us to continue to overestimate the sanity and integrity of most of the world, then get fucked over like we got fucked over by OpenAI or FTX. I think there are ways to relating to the rest of the world that would be much better, but a naive update in the direction of "just trust other people more" would likely make things worse.
>
> [...]
> Again, I think the question you are raising is crucial, and I have giant warning flags about a bunch of the things that are going on (the foremost one is that it sure really is a time to reflect on your relation to the world when a very prominent member of your community just stole 8 billion dollars of innocent people's money and committed the largest fraud since Enron), [...]

I very much agree with the sentiment of the second paragraph. 

Regarding the first paragraph, my own take is that (many) EAs and rationalists might be wise to trust themselves and their allies less.[1]

The main update I'd make from the FTX fiasco (and other events I'll describe below) is that perhaps many or most EAs and rationalists aren't very good at character judgment. They probably trust other EAs and rationalists too readily because they are part of the same tribe, and automatically assume that agreeing with noble ideas in the abstract translates to noble behavior in practice.

(To clarify, you personally seem to be good at character judgment, so this message is not directed at you. I base that mostly on the comments of yours I read about the SBF situation; big kudos for those, btw!)

It seems like a non-trivial fraction of the people who joined the EA and rationalist communities very early turned out to be of questionable character, and this went unnoticed for years by large parts of the community. I have in mind people like Anissimov, Helm, Dill, SBF, Geoff Anders, and arguably Vassar, and these are just the known ones. Most of them were not just part of the movement; they were allowed to occupy highly influential positions. I don't know the base rate for such people in other movements (it's plausibly even higher), but as a whole our movements don't seem to be fantastic at spotting sketchy people quickly. (FWIW, my personal experiences with a sketchy early EA (not on the above list) inspired this post.)

My own takeaway is that perhaps EAs and rationalists aren't that much better than the outside world in terms of integrity, and, given that we probably have to coordinate with some people to get anything done, I'm now more willing to coordinate with "outsiders" than I was, say, eight years ago.

 

  1. ^

    Though I would be hesitant to spread this message; the kinds of people who should trust themselves and their character judgment less are more likely the ones who will not take this message to heart, and vice versa.

This is mentioned in the introduction. 

I'm biased, of course, but it seems fine to write a post like this. (Similarly, it's fine for CFAR staff members to write a post about CFAR techniques. In fact, I'd prefer that precisely these people write such posts, because they have the relevant expertise.)

Would you like us to add a more prominent disclaimer somewhere? (We worried that this might look like advertising.)

> A quick look through https://www.goodtherapy.org/learn-about-therapy/types/compassion-focused-therapy gives an impression of yet another mix of CBT, DBT and ACT, nothing revolutionary or especially new, though maybe I missed something.

In my experience, ~nothing in this area is downright revolutionary. Most therapies are heavily influenced by previous concepts and techniques. (Personally, I'd still say that CFT brings something new to the table.)

I guess what matters is whether it works for you or not.

> Is this assertion borne out by twin studies? Or is believing it a test for CFT suitability only?

To some extent. Most human traits have a genetic component, including (Big Five) personality traits, depressive tendencies, anxiety disorders, conduct disorders, personality disorders, and so on (e.g., Polderman et al., 2015). This is also true for (self-)destructive tendencies like malevolent personality traits (citing my own summary of some studies here because I'm lazy, sorry).

(Also agree with Kaj's warning about misinterpreting heritability.)

More generally speaking, I'd say this belief is born out of an understanding of evolutionary psychology and history. Basically all of our motivations and fears have an evolutionary basis. We fear death because the ancestors who didn't were eaten by lions. We fear being ostracized and care about being respected because in the Environment of Evolutionary Adaptedness our survival and reproductive success depended on our social status. It's therefore to be expected that most humans, at some point or another, worry about death or health problems, or feel emotions like jealousy or envy. Such feelings don't have to be rooted in some trauma or early life experience, though they are usually exacerbated by such experiences. In most cases, it's not realistic to eliminate these emotions entirely. Nor does having them mean that one is an "abnormal" or "defective" person who experienced irreversible harm inflicted by another human at some point in one's development. (Just to be clear, as mentioned in the main text, no one believes that life experiences don't matter. Of course they matter a great deal!)

But yeah, if you are skeptical of the above, it's a good reason to not seek a CFT therapist. 

> From studying and using all of the above my conclusion is that IFS offers the most tractable approach to this issue of competing 'parts'. And in many ways the most powerful.

In our experience, different people respond to different therapies. I know several people for whom, say, CFT worked better than IFS. Glad to hear that IFS worked for you!

> When you read about modern therapies, they all borrow from one another in a way that did not occur say 50 years ago where there were very entrenched schools of thought.

Yes, that's definitely the case. My sense is that many people overestimate how revolutionary various therapies are because their founders downplay how many concepts and techniques they took from other modalities. (Though this can be advantageous because the "hype" increases motivation and probably fuels various self-fulfilling prophecies.)

For what it's worth, I read/skimmed all of the listed IDA explanations and found this post to be the best explanation of IDA and Debate (and how they relate to each other). So thanks a lot for writing this! 

Thanks a lot for this post (and the whole sequence), Kaj! I found it very helpful already. 
 
Below is a question I first wanted to ask you via PM, but others might also benefit from an elaboration on this.

You describe the second step of the erasure sequence as follows (emphasis mine): 

>Activating, at the same time, the contradictory belief and having the experience of simultaneously believing in two different things which cannot both be true.

When I try this myself, I feel like I cannot actually experience two things simultaneously. There seems to be at least half a second or so between trying to hold the target schema in consciousness and focusing my attention on disconfirming knowledge or experiences. 

(Generally, I'd guess it's not actually possible to hold two distinct things in consciousness simultaneously, at least that's what I heard various meditation teachers (and perhaps also neuroscientists) claim; you might have even mentioned this in this sequence yourself, if I remember correctly. Relatedly, I heard the claim that multitasking actually involves rapid cycling of one's attention between various tasks, even though it feels from the inside like one is doing several things simultaneously.)

So should I try to minimize the duration between holding the target schema and the disconfirming knowledge in consciousness (potentially aiming to literally feel as though I experience both at once), or is it enough to just keep cycling back and forth between the two every few seconds? (And if the latter, what about, say, every 30 seconds?)

One issue I suspect I have is that there is a tradeoff between how vividly I can experience the target schema and how rapidly I cycle back to the disconfirming knowledge.

Or maybe I'm doing something wrong here? Admittedly, I haven't tried this for more than a minute or so before immediately proceeding to spending 5 minutes on formulating this question. :)
