Vladimir_Nesov comments on The Fermi paradox as evidence against the likelyhood of unfriendly AI - Less Wrong Discussion
You are viewing a comment permalink. View the original post to see all comments and the full post content.
(Terminological nitpick: You can't usually solve problems by using different definitions.)
Goals are not up for grabs. An FAI follows your goals. If you change those goals, the result diverges from your actual goals, with consequences that are worse by your own lights. So you shouldn't decide on the object level what counts as "Friendly". See also Complex Value Systems are Required to Realize Valuable Futures.