KatjaGrace comments on Superintelligence 21: Value learning - Less Wrong

Post author: KatjaGrace 03 February 2015 02:01AM


Comment author: KatjaGrace 03 February 2015 02:08:17AM 1 point

In Bostrom's Hail Mary approach, why is it easier to get an AI to care about another AI's values than about another civilization's values? (p198)

Comment author: Sebastian_Hagen 03 February 2015 08:42:27PM 2 points

Powerful AIs are probably far more aware of their long-term goals, and far better able to formalize them, than a heterogeneous civilization is. Deriving a comprehensive morality for post-humanity is really hard; indeed, CEV is designed to avoid the need for humans to do that themselves. Doing it for an arbitrary alien civilization would likely be no simpler.

With powerful AIs, by contrast, you can simply ask them which values they would like implemented and, as Bostrom proposes, probably get a good answer.