
eli_sennesh comments on UFAI cannot be the Great Filter - Less Wrong Discussion

35 points · Post author: Thrasymachus · 22 December 2012 11:26AM


Comments (90)


Comment author: [deleted] 30 May 2014 09:34:58AM -1 points [-]

A UFAI that doesn't go around eating stars to make paperclips is probably already someone's attempted FAI. Bringing arbitrarily large amounts of mass-energy and negentropy under one's control is a Basic AI Drive, so you have to program the utility function to actually penalize it.

Comment author: falenas108 30 May 2014 01:55:38PM -1 points [-]

Only if the AI has goals that both require additional energy and lack a small, bounded success condition.

For example, if a UFAI has a goal that requires humans to exist, but it is not allowed to create more humans (or cause their creation), then once all humans are already dead it won't do anything.
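The distinction drawn here, between open-ended resource acquisition and a goal with a small, bounded success condition, can be sketched with toy utility functions. This is purely illustrative and not from the thread; the function names and the target value are assumptions:

```python
# Toy sketch: an unbounded utility always rewards acquiring more
# resources, while a bounded one saturates once a target is met.
# "paperclips" stands in for any abstract resource count.

def unbounded_utility(paperclips: int) -> int:
    """More is always better: every extra unit raises utility."""
    return paperclips

def bounded_utility(paperclips: int, target: int = 100) -> int:
    """Utility saturates at `target`; beyond it, extra units add nothing."""
    return min(paperclips, target)

# An optimizer with the unbounded utility always gains by grabbing more:
assert unbounded_utility(10**9) > unbounded_utility(100)

# With the bounded utility, once the target is met there is no marginal
# incentive to bring further mass-energy under its control:
assert bounded_utility(10**9) == bounded_utility(100)
```

In this toy framing, a Basic-AI-Drive-style resource grab only pays off in the unbounded case; the bounded agent has nothing to gain past its success condition.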