All of Rika's Comments + Replies

Rika · 21

Nice - this is a really solid framework for a useful pattern that I've found myself using.

So, this seems to be based heavily on Focusing - and one of the central tenets of Focusing is to allow a feeling to express itself in its own terms before trying to box it into a specific narrative. Personally, I've found this to be very helpful, and also the hardest aspect of Focusing.
When a negative emotion comes up, it's incredibly hard to avoid the instinct to declare "this emotion is wrong, I'm going to avoid it" or "this emotion is right, I'm going to dwell...

Raymond Koopmanschap · 1
This concern seems legitimate to me. There is often important information in negative feelings, which I mostly explore with Focusing or Internal Family Systems. When my brain thinks there is still information in a feeling but I nevertheless apply the HEAL method, it can feel as if I am convincing myself or, as you said, repressing the emotion. That does indeed seem somewhat tricky to me.

However, I also think that there is not always useful information in negative feelings. I have often had times where I had already explored a negative feeling, or knew where it came from, but it still came back. Before hearing of HEAL and reading Rick Hanson's material, I thought that you always had to fully experience and feel a negative feeling in order to overcome it. I no longer think this is the case; instead, I replace the feeling with thoughts or behaviors that are more helpful, and incorporate into those behaviors the useful bits of what the negative feeling was trying to achieve.
Rika · 10

People don't have images of AI apocalypse

Worse yet, and probably more common, is having an image of an AI apocalypse that came from irrational or distorted sources.

Having a very clear image of an obviously fictional AI apocalypse, which your mind very easily jumps to whenever you hear people talking about X-risks, is often far more thought-limiting than having no preconceived image at all.

This was the main hurdle to my believing in AI doom - I didn't have any coherent argument against it, and I found the doomy arguments pretty convincing. But the conc...

Answer by Rika · 10

If someone thinks that violence against AI labs is bad, then they will treat it as taboo precisely because they think it is bad and don't want violent ideas to spread.
There are a lot of interesting discussions to be had about why one believes this category of violence to be bad, and it is quite easy to argue against these perspectives in a fairly neutral-sounding, non-stressful way if you know how to phrase yourself well.
Many (although not all) people are fairly open to this.

 

If someone thinks that violence against AI labs is good, then they pr...

Rika · 10

Yep, this feels right to me! I think we agree on pretty much everything about this.

My main concern is that your post as-is could be misinterpreted as being along the lines of "Don't try to influence groups - only try to influence individuals manually, one at a time". It'd take a pretty extreme misinterpreter to take this to the full extent, but it could still be a negative influence on people's ability to deal with groups of people in effective ways.

Perhaps a good way of putting this is:

  • Mob & Bailey scenario: I am talking with X social group, which can...
Rika · 30

This is a great point, and very nicely made - but I do think it avoids the question of why people end up in these styles of argument in the first place.
I think there would be more value in discussing how to deal with Mob & Bailey situations once they arise, rather than how to stop them from arising.

You point out, correctly, that Mob & Bailey situations tend to occur when one is overly anthropomorphising a group of people, treating that group as though it were an individual person.
The real problem is that there are situations where it ...

Screwtape · 3
I think I disagree about the prevalence of situations where it's really useful to act like a group is an individual person, but I'm not sure that's your claim exactly. It's possible we're in agreement.

Step one of this essay is to crystallize the idea of the Mob, this crowd that can look united but is actually different once you look closer. The conversation between Amy, Bob, and Bella is a caricature, but I have seen conversations that resembled it. Sometimes it feels like Twitter is designed to create them.

Once the idea of the Mob & Bailey is in your toolbelt, then yeah, dealing with them once they've started is a useful topic (though on the small scale, you can catch yourself midway through an argument and go "Okay, hang on, I'm going to specifically address Bella for a moment here-") and it can segue into how organizations are structured. I claim the platonic Mob & Bailey is seen when there isn't a clear structure, or where there are lots of people aligned but outside the formal structure, like a political party or a religious group. If I need to convince the United Nations to do something, then maybe I start by drawing up their org chart (both formal and informal), but like, I don't think investigating the social structure of Deists as a group is going to be helpful.

Which, again, we might just agree on. If I want to talk the U.S. Military (a group) into doing something, then I might start by talking to the Secretary of Defense (a person), or a middle-manager in charge of trainings (a person), or an inventory manager (a person). The reductio ad absurdum version of talking to the U.S. Military (a group) might be standing on the front lawn of the Pentagon with a megaphone. That's unlikely to get me what I want. I've got some ideas on how to deal with talking to things like the U.S. Military (not great ideas, but ideas), but the megaphone thing just doesn't work.
Rika · 20

Interesting idea, but it seems risky.
Would life be the only, or for that matter even the primary, complex system that such an AI would avoid interfering with?

Further, it seems likely that a curiosity-based AI would intentionally create or seek out complexity, which could be risky.
Think of how kids love to say "I want to go to the moon!" or "I want to go to every country in the world!" I mean, I do too, and I'm an adult. Surely a curiosity-based AI would attempt to go to fairly extreme lengths for the sake of satiating its own curiosity, at the expense of othe...

MSRayne · 2
Note, to be entirely clear: I'm not saying that this is anywhere near sufficient to align an AGI completely. Mostly it's just a mechanism for decreasing the chance of totally catastrophic misalignment, and encouraging the AI to be just really, really destructive instead. I don't think curiosity alone is enough to prevent it from wreaking havoc, but I think it would lead to fitting the technical definition of alignment, which is that at least one human remains alive.