ialdabaoth comments on Sympathetic Minds - Less Wrong

Post author: Eliezer_Yudkowsky 19 January 2009 09:31AM



Comment author: JulianMorrison 19 January 2009 02:22:09PM 2 points

I don't think a merely unsympathetic alien need be amoral or dishonest - they might have worked out a system of selfish ethics or a clan honor/obligation system. They'd need something to stop their society from atomizing. They'd be nasty and merciless and exploitative, but it's possible you could shake appendages on a deal and trust them to fulfill it.

What would make a maximizer scary is that its prime directive completely bans sympathy or honor in the general case. If it's nice, it's lying. If you think you have a deal, it's lying. It might even be lying well enough to build a valid sympathetic mind as a false face - one that isn't reinforced even by its own pain. If you meet a maximizer, open fire in lieu of "hello".

Comment author: ialdabaoth 25 March 2013 01:33:52AM 2 points

> If you meet a maximizer, open fire in lieu of "hello".

Which is why a "Friendly" AI needs to be a meta-maximizer rather than a mere first-order maximizer. In order for an AI to be "friendly", it needs to recognize a set of beings whose utility functions it wishes to maximize, and take those utility functions as the inputs to its own utility function.
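The distinction ialdabaoth is drawing can be sketched in code. This is a minimal, purely illustrative Python sketch (all names are hypothetical, not from any real system): a first-order maximizer optimizes a fixed utility function over outcomes, while a meta-maximizer's utility is built out of the utility functions of the agents it recognizes.

```python
# Illustrative sketch, assuming a toy world of discrete outcomes.
# All names here are hypothetical examples, not a real AI architecture.

from typing import Callable, Dict, List

Outcome = str
UtilityFn = Callable[[Outcome], float]

def first_order_maximizer(utility: UtilityFn, outcomes: List[Outcome]) -> Outcome:
    """Picks the outcome that maximizes its own fixed utility function."""
    return max(outcomes, key=utility)

def meta_maximizer(agent_utilities: Dict[str, UtilityFn],
                   outcomes: List[Outcome]) -> Outcome:
    """Picks the outcome that maximizes an aggregate (here, a plain sum)
    of the recognized agents' utilities - its utility function takes
    *their* utility functions as inputs rather than being fixed in advance."""
    def aggregate(o: Outcome) -> float:
        return sum(u(o) for u in agent_utilities.values())
    return max(outcomes, key=aggregate)

outcomes = ["tile_universe_with_paperclips", "leave_humans_flourishing"]
paperclipper: UtilityFn = lambda o: 1.0 if o == "tile_universe_with_paperclips" else 0.0
human: UtilityFn = lambda o: 1.0 if o == "leave_humans_flourishing" else 0.0

# A first-order maximizer ignores everyone else's preferences:
print(first_order_maximizer(paperclipper, outcomes))  # tile_universe_with_paperclips

# A meta-maximizer that recognizes humans defers to their preferences:
print(meta_maximizer({"human": human}, outcomes))  # leave_humans_flourishing
```

The choice of a plain sum as the aggregate is itself a stand-in: how to combine the recognized beings' utility functions (and which beings to recognize) is exactly the hard part of the "Friendliness" problem the comment gestures at.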