Stuart_Armstrong comments on An overall schema for the friendly AI problems: self-referential convergence criteria - Less Wrong

Post author: Stuart_Armstrong 13 July 2015 03:34PM




Comment author: Stuart_Armstrong 21 July 2015 01:49:43PM 0 points

What you are trying to do is import positive features from the convergence of human groups (e.g. the fact that more options are likely to have been considered, or that productive discussion is likely to have happened) into the convergence of AI groups, without spelling those features out precisely. Unless we have a clear handle on what, among humans, causes these positive features, we have no real reason to expect them to arise in AI groups as well.

Comment author: TheAncientGeek 21 July 2015 04:49:25PM * 0 points

The two concrete examples you gave weren't what I had in mind. I was addressing the problem of an AI "losing" values during extrapolation, and that looks like a real reason to me. If you want to prevent an AI from undergoing value drift during extrapolation, keep an unextrapolated one as a reference. Two is minimally a group.

There may well be other advantages to doing rationality and ethics in groups, and yes, that needs research, and no, that isn't a showstopper.