
DanielLC comments on Sympathetic Minds - Less Wrong

25 Post author: Eliezer_Yudkowsky 19 January 2009 09:31AM


Comment author: DanielLC 11 February 2013 11:32:55PM 2 points

I would consider a mind that is almost powerful enough to overpower humanity "powerful". I meant something closer to human-level.

Comment author: JulianMorrison 12 February 2013 10:46:11PM 0 points

Now learn the Portia trick, and don't be so sure that you can judge power in a mind that doesn't share our evolutionary history.

Also watch the Alien movies, because those aren't bad models of what a maximizer would be like if it were somewhere between animalistic and closely subhuman. Xenomorphs are basically xenomorph-maximizers. In the fourth movie, the scientists try to cut a deal. The xenomorph queen plays along - until she doesn't. She's always, always plotting. Not evil, just purposeful, with purposes that are inimical to ours. (I know, generalizing from fictional evidence - this isn't evidence, it's a model to give you an emotional grasp.)

Comment author: DanielLC 13 February 2013 06:52:46AM 0 points

> Now learn the Portia trick, and don't be so sure that you can judge power in a mind that doesn't share our evolutionary history.

Okay. What's scary is that it might be powerful.

> The xenomorph queen plays along - until she doesn't.

And how well does she do? How well would she have done had she cooperated from the beginning?

I haven't watched the movies. I suppose it's possible that the humans would just never be willing to cooperate with xenomorphs on a large scale, but I doubt that.

Comment author: AlexanderRM 02 October 2015 10:54:43PM 0 points

The thing is, in evolutionary terms, humans were human-maximizers. To use a more direct example, a lot of empires throughout history have been empire-maximizers. Now, a true maximizer would probably turn on allies (or neutrals) faster than a human, a human tribe, or a human state would, although I think part of what constrained this in human evolution is (1) that it's difficult to constantly check whether it's worth it to betray your allies, and (2) that it's risky to try when you're only just barely past the point where you think it's worth it. Also, there are the other humans and other nations around, which might or might not apply in interstellar politics.

...although I've just reminded myself that this discussion is largely pointless anyway, since the chance of encountering aliens close enough to play politics with is really tiny, and so is the chance of inventing an AI we could play politics with. The closest things we have a significant chance of encountering are a first-strike-wins situation, or a MAD situation (which I define as "a first strike would win, but the other side can see it coming and retaliate"), both of which change the dynamics drastically. (I suppose the discussion is still valid in first-strike-wins, except that in that situation the other side will never tell you their opinion on morality, and you're unlikely to know with certainty that the other side is an optimizer without them telling you.)