Kawoomba comments on UFAI cannot be the Great Filter - Less Wrong Discussion

35 Post author: Thrasymachus 22 December 2012 11:26AM

Comment author: Kawoomba 22 December 2012 07:05:16PM *  1 point [-]

You are correct; I cannot.

I did, however, find plenty of refutations of precisely that argument, from the SL4 mailing list to various blogs. Relatedly, Robin Hanson wrote this two years ago:

Let us call an AI unambitious if its values have no use for the rest of the universe. Then if the great filter is the main reason to think existential risks are likely, we should worry much more about unambitious unfriendly AI than just an unfriendly AI. Since designing an ambitious AI seems lots easier than designing a friendly one, maybe ambition should be the AI designer's first priority.

I suppose that, having seen some of those refutations, I overestimated the importance of the argument being refuted:

I assumed that for an argument to merit public refutation, a certain number of people must believe it. If there are any such people, I couldn't identify them.

Maybe the association occurs from "uFAI" being so closely related to "x-risk", and "x-risk" being so closely related to "the Great Filter". No transitivity this time.

Comment author: CarlShulman 22 December 2012 07:46:15PM 6 points [-]

Maybe the association occurs from "uFAI" being so closely related to "x-risk", and "x-risk" being so closely related to "the Great Filter". No transitivity this time.

I think this may cause confusion for some casual observers, so it's worth reiterating the refutation, but it's also worth noting that no one has seriously pressed the refuted argument.