HungryHobo comments on Steelmaning AI risk critiques - Less Wrong Discussion

26 Post author: Stuart_Armstrong 23 July 2015 10:01AM


Comment author: HungryHobo 28 July 2015 12:06:10PM 1 point [-]

It doesn't matter how safe you are about AI if there are a million other civilizations in the universe and some non-trivial portion of them aren't being as careful as they should be.

A UFAI is unlikely to stop at the home planet of the civilization that creates it. Rather, you'd expect such a thing to keep converting the remainder of the universe into computronium to store the integer for its fitness function, or some similar doomsday scenario.

AI doesn't work as a filter because it's the kind of disaster that's likely to keep spreading: we'd expect to see large parts of the sky going dark as stars get turned into pictures of smiling faces or computronium.

Which either argues for AI risk not being so risky, or for an early filter producing few civilisations in the first place.

Comment author: turchin 28 July 2015 11:35:03PM 1 point [-]

That is why I am against premature SETI. But also, if AI nanobots spread at near light speed, you can't see black spots in the sky: the expansion front arrives almost as soon as the light that would reveal it.