
Vladimir_Nesov comments on The Fermi paradox as evidence against the likelihood of unfriendly AI

5 Post author: chaosmage 01 August 2013 06:46PM




Comment author: Vladimir_Nesov 02 August 2013 12:38:08PM 1 point

(Terminological nitpick: You can't usually solve problems by using different definitions.)

> sort of stuff it's supposed to be Friendly to

Goals are not up for grabs. FAI follows your goals. If you change something, the result differs from your goals, with consequences that are worse according to your goals. So you shouldn't decide at the object level what counts as "Friendly". See also Complex Value Systems are Required to Realize Valuable Futures.