Rika21

Nice - this is a really useful framework for a pattern I've found myself using.

So, this seems to be based heavily on Focusing - and one of the central tenets of Focusing is to allow a feeling to express itself on its own terms before trying to box it into a specific narrative. Personally, I've found this to be very helpful - and also the hardest aspect of Focusing.
When a negative emotion comes up, it's incredibly hard to resist the instinct to declare "this emotion is wrong, I'm going to avoid it" or "this emotion is right, I'm going to dwell on it".

This seems to conflict somewhat with HEAL, which imposes a non-trivial degree of prescribed, rigid narrative - particularly in the Linking step.
A narrative like "The positive experience should be kept in the foreground, and the negative experience should be held in the background" is often true - but trying to rush a feeling into that particular narrative, before you've really addressed it on its own terms, can be risky:
it can lead to repression, or to never addressing the nuanced drivers behind where that feeling is coming from.

Because of this, I think I'd avoid using HEAL on topics where I haven't fully untangled my feelings yet - or at least, I'd be very careful to double-check any feelings of dissonance or discomfort that come up during the process.

I'm interested in whether this concern seems legitimate. You seem to have used HEAL more, and more consciously, than I have - so I'd love to hear your perspective on it.

Rika10

People don't have images of AI apocalypse

Worse yet, and probably more common, is having an image of an AI apocalypse that came from irrational or distorted sources.

Having a very clear image of an obviously fictional AI apocalypse, which your mind very easily jumps to whenever you hear people talking about X-risks, is often far more thought-limiting than having no preconceived image at all.

This was the main hurdle I had to believing in AI doom - I didn't have any coherent argument against it, and I found the doomy arguments pretty convincing. But the conclusion just sounded silly.
I'd fall back on talking points like "Well, in the 1800s, people who believed in sci-fi narratives like you do thought that electricity would resurrect the dead, and that we'd be punished for playing god. You shouldn't take these paranoias so seriously."

(This is why I, and several other people I know, intentionally avoid evoking sci-fi-associated imagery when talking about AI)

Answer by Rika10

If someone thinks that violence against AI labs is bad, then they will make it a taboo because they think it is bad, and they don't want violent ideas to spread. 
There are a lot of interesting discussions to be had on why one believes this category of violence to be bad, and you can argue against these perspectives in a fairly neutral-sounding, non-stressful way, quite easily, if you know how to phrase yourself well. 
A lot of (although not all) people are fairly open to this.


If someone thinks that violence against AI labs is good, then they probably really wouldn't want you talking about it on a publicly accessible, fairly well-known website. It's a very bad strategy from most pro-violence perspectives.


Regardless of anyone's perspective on this topic, I'm going to quite strongly suggest that you probably shouldn't discuss it here - there are very few angles from which this could plausibly be a good thing for any rationalism-associated person or movement. Or, at the least, put a lot of thought into how you talk about it. Optics are a real and valuable thing, as annoying as that is.
Even certain styles of anti-violence discussion can come across as optically weird if you phrase yourself in certain ways.

Rika10

Yep this feels right to me! I think we agree on pretty much everything about this.

My main concern is that your post, as-is, could be misinterpreted as saying something like "Don't try to influence groups - only try to influence individuals manually, one at a time". It would take a pretty extreme misreading to take this to its full extent, but it could still be a negative influence on people's ability to deal with groups of people in effective ways.

Perhaps a good way of putting this is:

  • Mob & Bailey scenario: I am talking with X social group, which can consistently be modelled as a person-esque agent
  • Potential misinterpretation of your post: I am talking with individuals one at a time, and modelling these discussions as being part of a broader social structure is bad
  • A model I'd propose: I am talking with X social group, which can be modelled as a machine comprised of people, with components serving varying functions
Rika30

This is a great point, and very nicely made - but I do think it sidesteps the question of why people end up in these styles of argument in the first place.
I think there would be more value in discussing how to deal with Mob & Bailey situations once they arise, rather than how to stop them from arising.

You point out, correctly, that Mob & Bailey situations tend to occur when one is overly anthropomorphising a group of people, as though that group were an individual person.
The real problem is that there are situations where it really is useful to act that way, for pragmatic reasons.

At least in my experience, Mob & Bailey arguments tend to happen in fairly broad discussions about group behaviours - perhaps about ideologies, or social practices between groups. 
These are situations where it is very useful and important to be able to address an entire group's collective behaviour, more so than addressing how individuals act and rationalise their decisions.

In these cases, what we're really trying to discuss is the mechanics of how groups are organised, rather than any one individual's beliefs. 
If we're arguing with a Tautology Club which breaks university rules, then talking with the club president does make sense. If we're arguing with a military, then talking with the Secretary of Defense isn't a bad place to start - but perhaps investigating middle-management positions would be more practical. 

I'd love to read further ideas more along the lines of "How to deal with Mob & Bailey situations", because I think the results can be quite different depending on which structure of social group you're arguing with.

Rika20

Interesting idea, but it seems risky.
Would life be the only, or for that matter, even the primary, complex system that such an AI would avoid interfering with?

Further, it seems likely that a curiosity-based AI might intentionally create or seek out complexity, which could be risky.
Think of how kids love to say "I want to go to the moon!" or "I want to go to every country in the world!" - I mean, I do too, and I'm an adult. Surely a curiosity-based AI would push to fairly extreme limits to satiate its own curiosity, at the expense of other values.

Maybe such an AGI could have like... an allowance? "Never spend more than 1% of your resources on a single project" or something? But I have absolutely no idea how you could define a consistent idea of a "single project".