lessdazed comments on Q&A with new Executive Director of Singularity Institute

26 points | Post author: lukeprog | 07 November 2011 04:58AM

Comment author: lessdazed | 07 November 2011 06:22:10PM | 5 points

Why is there so much focus on the potential benefits to humanity of an FAI, as against our present situation?

An FAI (Friendly AI) becomes a singleton and prevents a paperclip maximizer from arising. Anyone who doesn't think a UAI (Unfriendly AI) in a box is dangerous will readily grant that a sufficiently intelligent UAI could cure cancer and the like.

If a person is concerned about UAI, they are more or less sold on the need for Friendliness.

If a person is not concerned about UAI, they will not think the potential benefits of an FAI are greater than those of a UAI in a box, or a UAI developed through reinforcement learning, and so on, so there is no need to discuss the benefits to humanity of a superintelligence.