
Benja comments on The Fermi paradox as evidence against the likelyhood of unfriendly AI - Less Wrong Discussion

5 Post author: chaosmage 01 August 2013 06:46PM




Comment author: Benja 01 August 2013 10:41:21PM 6 points

I disagree. Compared to UFAIs, FAIs must by definition have a more limited range of options. Why would the difference be negligible?

Even if that were true (which I don't see: like FAIs, UFAIs will have goals they are trying to maximize, and their options will be limited to those not in conflict with those goals): why on Earth would this difference take the form of "given millions of years, you can't colonize the galaxy"? And moreover, why would it reliably have taken this form for every single civilization that has arisen in the past? We'd certainly expect an FAI built by humanity to go to the stars!

Comment author: chaosmage 02 August 2013 12:04:08AM 1 point

I'm not saying that it can't, I'm saying it surely would. I just think it is much easier, and therefore much more probable, for a simple self-replicating, cancer-like self-maximizer to claim many resources than for an AI subject to continued pre-superintelligent interference to do the same.

Overall, I believe it is more likely we're indeed alone, because most of the places in that vast space of possible mind architectures that Eliezer wrote about would eventually have to lead to galaxy-wide expansion.

Comment author: ESRogs 06 August 2013 06:39:17AM 0 points

Overall, I believe it is more likely we're indeed alone, because most of the places in that vast space of possible mind architectures that Eliezer wrote about would eventually have to lead to galaxy-wide expansion.

This seems like a perfectly reasonable claim. But the claim that the Fermi paradox argues more strongly against the existence of nearby UFAIs than FAIs doesn't seem well-supported. If there are nearby FAIs you have the problem of theodicy.

I should note, though, that I'm not sure what you mean by the pre-superintelligent interference part, so I may be missing something.