JGWeissman comments on Building toward a Friendly AI team - Less Wrong

24 Post author: lukeprog 06 June 2012 06:57PM




Comment author: Vladimir_Nesov 06 June 2012 09:39:15PM * 2 points

should have a background in human psychology, as this is highly relevant to figuring out the Friendly utility function

My current opinion is that it's completely irrelevant. The typical tools developed around the study of human psychology are vastly less accurate than necessary to do the job. Background in mathematics, physics or machine learning seems potentially much more relevant, specifically for the problem of figuring out human goals and not just for other AI-related problems.

Comment author: JGWeissman 06 June 2012 09:49:27PM 0 points

I agree that the "typical tools developed around the study of human psychology are vastly less accurate than necessary to do the job", but it still seems like figuring out what humans value is a problem of human psychology. I don't see how theoretical physics has anything to do with it.

Comment author: Vladimir_Nesov 06 June 2012 09:59:51PM * 2 points

Whether it's a "problem of human psychology" is a question of assigning an area-of-study label to the problem. The area-of-study characteristic doesn't seem to particularly help with finding methods appropriate for solving the problem in this case. So I propose to focus on the other characteristics of the problem, namely the necessary rigor in an acceptable solution and the potential difficulty of the concepts necessary to formulate the solution (in the study of a real-world phenomenon). These characteristics match mathematics and physics best (probably more mathematics than physics).

Comment author: JGWeissman 06 June 2012 10:11:21PM 1 point

I would expect all FAI team members to have strong math skills in addition to whatever other backgrounds they may have, and I expect them to approach the psychological aspects of the problem with greater rigor than is typical of mainstream psychology; their math backgrounds will contribute to this. But I think mainstream psychology would still be of some use to them, even if only to provide concepts to be explored more rigorously.

Comment author: prashantsohani 09 June 2012 05:50:08PM * 0 points

the potential difficulty of the concepts necessary to formulate the solution

As I see it, there may be considerable conceptual difficulty in formulating even the exact problem statement. For instance, given that we want a 'friendly' AI, our problem statement depends very much on our notion of friendliness; hence the necessity of including psychology.

Going further, considering that SI aims to minimize AI risk, we need to be clear on what AI behavior is said to constitute a 'risk'. If I remember correctly, the AI in the movie "I, Robot" concludes that killing the human race is the only way to save the planet. Defining risk in such a scenario is a very delicate problem.