
JulianMorrison comments on Sympathetic Minds - Less Wrong

Post author: Eliezer_Yudkowsky 19 January 2009 09:31AM





Comment author: JulianMorrison 19 January 2009 02:22:09PM 2 points

I don't think a merely unsympathetic alien need be amoral or dishonest - they might have worked out a system of selfish ethics or a clan honor/obligation system. They'd need something to stop their society atomizing. They'd be nasty and merciless and exploitative, but it's possible you could shake appendages on a deal and trust them to fulfill it.

What would make a maximizer scary is that its prime directive completely bans sympathy or honor in the general case. If it's nice, it's lying. If you think you have a deal, it's lying. It might be lying well enough to build a valid sympathetic mind as a false face - it isn't reinforced even by its own pain. If you meet a maximizer, open fire in lieu of "hello".

Comment author: DanielLC 07 February 2013 04:00:09AM 4 points

What makes a maximizer scary is that it's also powerful. A paperclip maximizer that couldn't overpower humans would work with humans. We would both benefit.

Of course, it would still probably be a bit creepy, but it's not going to be any less beneficial than a human trading partner.

Comment author: JulianMorrison 11 February 2013 10:22:06PM 4 points

Not unless you like working with an utterly driven monomaniac perfect psychopath. It would always, always be "cannot overpower humans yet". One slip, and it would turn on you without missing a beat. No deal. Open fire.
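[Editorial aside: the disagreement above - DanielLC's "a weak maximizer would trade" versus JulianMorrison's "it's always 'cannot overpower humans yet'" - can be put as a toy expected-value calculation. All numbers and names below are invented for illustration; this is a sketch of the argument, not a claim about real agents.]

```python
# Toy model: a maximizer keeps a deal exactly as long as defecting has a
# lower expected payoff than continued trade. The moment the balance tips
# (e.g. its odds of winning a takeover rise), it turns - "one slip".

def best_move(p_win, takeover_value, trade_value, penalty):
    # Expected payoff of turning on the humans now:
    # win with probability p_win, otherwise suffer the penalty of losing.
    defect = p_win * takeover_value + (1 - p_win) * penalty
    # ...versus the steady payoff of honoring the deal.
    return "defect" if defect > trade_value else "cooperate"

# A weak maximizer cooperates (DanielLC's point)...
print(best_move(p_win=0.1, takeover_value=100, trade_value=20, penalty=-50))
# -> cooperate
# ...but the same agent, once strong, defects without missing a beat
# (JulianMorrison's point).
print(best_move(p_win=0.9, takeover_value=100, trade_value=20, penalty=-50))
# -> defect
```

Note the cooperation is purely instrumental: nothing in the function values the deal itself, which is why the cooperation evaporates as soon as the parameters shift.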

Comment author: DanielLC 11 February 2013 11:32:55PM 2 points

I would consider something almost powerful enough to overpower humanity "powerful". I meant something closer to human-level.

Comment author: JulianMorrison 12 February 2013 10:46:11PM 0 points

Now learn the Portia trick, and don't be so sure that you can judge power in a mind that doesn't share our evolutionary history.

Also watch the Alien movies, because those aren't bad models of what a maximizer would be like if it was somewhere between animalistic and closely subhuman. Xenomorphs are basically xenomorph-maximizers. In the fourth movie, the scientists try to cut a deal. The xenomorph queen plays along - until she doesn't. She's always, always plotting. Not evil, just purposeful with purposes that are inimical to ours. (I know, generalizing from fictional evidence - this isn't evidence, it's a model to give you an emotional grasp.)

Comment author: DanielLC 13 February 2013 06:52:46AM 0 points

> Now learn the Portia trick, and don't be so sure that you can judge power in a mind that doesn't share our evolutionary history.

Okay. What's scary is that it might be powerful.

> The xenomorph queen plays along - until she doesn't.

And how well does she do? How well would she have done had she cooperated from the beginning?

I haven't watched the movies. I suppose it's possible that the humans would just never be willing to cooperate with xenomorphs on a large scale, but I doubt that.

Comment author: AlexanderRM 02 October 2015 10:54:43PM 0 points

The thing is, in evolutionary terms, humans were human-maximizers. To use a more direct example, a lot of empires throughout history have been empire-maximizers. Now, a true maximizer would probably turn on allies (or neutrals) faster than a human or a human tribe or human state would - although I think part of the constraints on that with human evolution are 1. it being difficult to constantly check whether it's worth it to betray your allies, and 2. it being risky to try when you're just barely past the point where you think it's worth it. Also there are the other humans/other nations around, which might or might not apply in interstellar politics.

...although I've just reminded myself that this discussion is largely pointless anyway, since the chance of encountering aliens close enough to play politics with is really tiny, and so is the chance of inventing an AI we could play politics with. The closest things we have a significant chance of encountering are a first-strike-wins situation, or a MAD situation (which I define as "first strike would win but the other side can see it coming and retaliate"), both of which change the dynamics drastically. (I suppose it's valid in first-strike-wins, except in that situation the other side will never tell you their opinion on morality, and you're unlikely to know with certainty that the other side is an optimizer without them telling you.)

Comment author: ialdabaoth 25 March 2013 01:33:52AM 2 points

> If you meet a maximizer, open fire in lieu of "hello".

Which is why a "Friendly" AI needs to be a meta-maximizer, rather than a mere first-order maximizer. In order for an AI to be "friendly", it needs to recognize a set of beings whose utility functions it wishes to maximize, as the inputs to its own utility function.
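[Editorial aside: the first-order vs. meta-maximizer distinction above can be sketched in a few lines. The function names and toy utilities below are invented for illustration; real utility functions and real value aggregation are vastly harder than a sum.]

```python
# A first-order maximizer scores world states by its own fixed objective;
# a "meta-maximizer" in ialdabaoth's sense takes the utility functions of
# a set of beings as inputs to its own utility function.

def first_order_utility(state):
    # e.g. a paperclip maximizer: cares about exactly one quantity
    return state["paperclips"]

def meta_utility(state, beings):
    # the agent's own utility is built from the utilities of the beings
    # it wishes to help (here, naively, their sum)
    return sum(being(state) for being in beings)

# Toy stand-ins for human utility functions.
humans = [
    lambda s: s["food"],
    lambda s: s["leisure"],
]

state = {"paperclips": 10, "food": 3, "leisure": 2}
print(first_order_utility(state))      # -> 10
print(meta_utility(state, humans))     # -> 5
```

The point of the structure: the meta-maximizer's goal shifts when the beings' utilities shift, whereas the first-order maximizer's goal is frozen, which is what makes the latter's cooperation only ever instrumental.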

Comment author: Nisan 25 March 2013 03:11:12AM 0 points

"If you meet an optimizer on the road, kill it"?