Kawoomba comments on UFAI cannot be the Great Filter - Less Wrong Discussion

Post author: Thrasymachus 22 December 2012 11:26AM 35 points


Comment author: Kawoomba 22 December 2012 11:36:22AM 2 points

Good post, good explanation. I agree. I saw the recent comment on OB that probably prompted you to write this post; I fleetingly thought of posting it myself before akrasia kicked in. So, thanks.

A throwaway parenthesized remark from RH that nevertheless should be of major importance, because it lowers the credence we should assign to the argument that "UFAI is a good great filter candidate, and a great filter is a good explanation for the Fermi paradox; ergo we should raise our belief in the verisimilitude of UFAI occurring."

Comment author: CarlShulman 22 December 2012 04:20:15PM 6 points

"because it lowers the credence we should assign to the argument that "UFAI is a good great filter candidate, and a great filter is a good explanation for the Fermi paradox, ergo we should raise our belief in the the verisimilitude of UFAI occurring.""

Can you identify some people who ever held or promoted this view? I don't know of any writers who have actually made this argument. It's pretty absurd on its face, basically saying that instead of there being super-convergence among biological civilizations not to colonize the galaxy, there is super-convergence among autonomous robotic civilizations not to colonize.

Comment author: Kawoomba 22 December 2012 07:05:16PM *  1 point

You are correct; I cannot.

I did, however, find plenty of refutations of precisely that argument, from the SL4 mailing list to various blogs. Relatedly, Robin Hanson wrote this two years ago:

Let us call an AI unambitious if its values have no use for the rest of the universe. Then if the great filter is the main reason to think existential risks are likely, we should worry much more about unambitious unfriendly AI than just an unfriendly AI. Since designing an ambitious AI seems lots easier than designing a friendly one, maybe ambition should be the AI designer's first priority.

I suppose that, having seen some of those refutations, I overestimated the importance of the argument being refuted:

I assumed that to merit public refutation, an argument must have a certain number of believers. But if there are any, I couldn't identify them.

Maybe the association arises from "uFAI" being so closely related to "x-risk", and "x-risk" being so closely related to "the Great Filter". No transitivity this time.

Comment author: CarlShulman 22 December 2012 07:46:15PM 6 points

Maybe the association arises from "uFAI" being so closely related to "x-risk", and "x-risk" being so closely related to "the Great Filter". No transitivity this time.

I think this may cause confusion for some casual observers, so it's worth reiterating the refutation, but it's also worth noting that no one has seriously pressed the refuted argument.

Comment author: timtyler 26 December 2012 12:15:11AM *  0 points

There are certainly some who think machine intelligence may account for the Fermi paradox. For instance, here's George Dvorsky on the topic. Also, the Wikipedia article on the Fermi paradox lists "a badly programmed super-intelligence" as a possible cause.

Comment author: CarlShulman 26 December 2012 02:37:44AM 0 points

Thanks for the links, Tim. Yes, it certainly gets included in exhaustive laundry lists of Fermi Paradox explanations (Dvorsky has covered many proposed Fermi Paradox solutions, including very dubious ones). The Fermi Paradox wiki page also includes the following weird explanation:

technological singularity... Theoretical civilizations of this sort may have advanced drastically enough to render communication impossible. The intelligences of a post-singularity civilization might require more information exchange than is possible through interstellar communication, for example.

Comment author: timtyler 22 December 2012 02:52:30PM *  5 points

A throwaway parenthesized remark from RH that nevertheless should be of major importance [...]

Hang on, we've known this for years, right? This is not new information.