KatjaGrace comments on Superintelligence 21: Value learning - Less Wrong

7 Post author: KatjaGrace 03 February 2015 02:01AM




Comment author: KatjaGrace 03 February 2015 02:11:38AM 2 points

Do you think the Hail Mary approach could produce much value?

Comment author: Sebastian_Hagen 03 February 2015 08:55:43PM 2 points

Perhaps. But it is a desperate move, both in terms of predictability and in terms of the mind crime that would likely result from its implementation, since the conceptually easiest and most accurate ways to model other civilizations would involve fully simulating the minds of their members.

If we had to do it, I would be much more interested in aiming it at slightly modified versions of humanity rather than at utterly alien civilizations. If everyone in our civilization had taken AI safety more seriously, and we could have coordinated to wait a few hundred years to work out the issues before building an AI, what kind of AI would our civilization have produced? I suspect the major difficulty with this approach is formalizing "if everyone in our civilization had taken AI safety more seriously" precisely enough to aim an HM-implementing AI at those possibilities in particular.