tim comments on The Fermi paradox as evidence against the likelihood of unfriendly AI - Less Wrong Discussion

Post author: chaosmage 01 August 2013 06:46PM

Comments (53)

Comment author: tim 01 August 2013 08:26:07PM 2 points

Why are you assuming that we would be more likely to notice an unfriendly SI than a friendly SI? If anything, it seems that an intelligence we would consider friendly is more likely to cause us to observe life than one maximizing something completely orthogonal to our values.

(I don't buy the argument that an unfriendly SI would propagate throughout the universe to a greater degree than a friendly SI. Fully maximizing happiness/consciousness/etc also requires colonizing the galaxy.)

Comment author: chaosmage 01 August 2013 11:21:53PM 1 point

Regardless of what it optimizes, it needs raw materials, at the very least for its own self-improvement, and it can see them lying around everywhere.

We haven't noticed anyone, friendly or unfriendly. We don't know if any friendly ones noticed us, but we know that no unfriendly ones did.