rlsj comments on Superintelligence Reading Group - Section 1: Past Developments and Present Capabilities - Less Wrong

Post author: KatjaGrace 16 September 2014 01:00AM




Comment author: [deleted] 03 October 2014 11:56:16PM 1 point

If you've solved the FAI problem, the device will change the world into what's right, not what you personally want. But of course, we should probably have a term of art for an AGI that will honestly follow the intentions of its human creator/operator whether or not those correspond to what's broadly ethical.

Comment author: cameroncowan 19 October 2014 04:36:21AM 0 points

We need some kind of central ethical code and there are many principles that are transcultural enough to follow. However, how do we teach a machine to make judgment calls?

Comment author: Sebastian_Hagen 04 October 2014 04:03:43PM 0 points

A lot of the technical issues are the same in both cases, and the solutions could be re-used. You need the AI to be capable of recursive self-improvement without compromising its goal system, to avoid the wireheading problem, etc. Even a lot of the workable content-level solutions (e.g. a mechanism to extract morality from a set of human minds) would probably be the same.

Where the problems differ, it's mostly in that the society-level FAI case is harder: there are additional subproblems, like interpersonal disagreements, to deal with. So I strongly suspect that if you have a society-level FAI solution, you could very easily hack it into a one-specific-human FAI solution. But I could be wrong about that, and you're right that my original use of terminology was sloppy.