Strange7 comments on How can I reduce existential risk from AI? - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Comments (92)
No, in general I've trained myself to operate, as much as possible, with an incredibly lean dopamine mixture. To hell with feeling good; I want to be able to push on no matter how bad I feel.
(As it turns out, I have limits, but I've mostly trained myself to push through those limits with shame and willpower rather than with reward mechanisms, to the point that reward mechanisms generally don't even really work on me anymore - at least, not to the level other people expect them to.)
A lot of this was a direct decision, at a very young age, to never exploit primate dominance rituals or competitive zero-sum exchanges to get ahead. It's been horrific, but... the best metaphor I can give is from a story called "The Ones Who Walk Away from Omelas".
Essentially, you have a utopia that is powered by the horrific torture and suffering of a single innocent child. At a certain age, everyone in the culture has the workings of the utopia explained to them and is given two choices: commit fully to making the utopia worth the cost of that kid's suffering, or walk away from utopia and brave the harsh world outside.
I tried to take a third path, and say "fuck it. Let the kid go and strap me in."
So in a sense, I suppose I tried to replace normal feel-good routines with a sort of smug moral superiority, but then I trained myself to see my own behavior as smug moral superiority so I wouldn't feel good about it. So, yeah.
Problem being that Omelas doesn't just require that /somebody/ be suffering; if it did, they'd probably take turns or something. It's some quality of that one kid.
Which is part of where the metaphor breaks down. In our world, our relative prosperity and status don't require that some specific, dehumanized 'Other' be exploited to maintain our own privilege - they merely require that someone be identified as 'Other', that some kind of class distinction be created; then natural human instincts take over and ensure that marginal power differentials are amplified into a horrific and hypocritical class structure. (Sure, it's a lot better than it was before, but that doesn't make it "good" by any stretch of the imagination.)
I have no interest in earning money by exploiting the emotional tendencies of those less intelligent than me, so Ialdabaoth-sub-1990 drew a hard line around jobs (or tasks or alliances or friendships) that aid people who do such things.
More generally, Brent-sub-1981 came up with a devastating heuristic: "any time I experience a social situation where humans are cruel to me, I will perform a detailed analysis of the thought processes and behaviors that led to that social situation, and I will exclude myself from performing those processes and behaviors, even if they are advantageous to me."
It's the kernel to my "code of honor", and at this point it's virtually non-negotiable.
It is not, however, particularly good at "winning".