turchin comments on AI as a resolution to the Fermi Paradox. - Less Wrong

1 Post author: Raiden 02 March 2016 08:45PM




Comment author: turchin 02 March 2016 10:08:48PM 1 point [-]

In my second point I meant the original people who created the AI. Not all of them will be killed during its creation or during the AI's halt. Many will survive, and from our point of view they will be rather strong posthumans. Just one instance of them is enough to start an intelligence wave.

Another option is that the AI may create nanobots capable of self-replicating in space, but not of star travel. They would nevertheless jump randomly from one comet to another and, in about 1 billion years, colonise the whole Galaxy. We could search for such relics in space. They may be rather benign from a risk standpoint, much like mechanical plants.
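The "~1 billion years" figure is plausible as a back-of-envelope estimate: if the expansion front advances at roughly cometary speeds, crossing the Galaxy takes a few hundred million years. A minimal sketch, where the galactic radius and hop speed are my assumed inputs, not numbers from the comment:

```python
# Back-of-envelope check: nanobots hopping comet-to-comet advance a
# colonisation front at roughly cometary speed. All inputs below are
# assumed round numbers for illustration.

LIGHT_YEAR_KM = 9.46e12      # kilometres per light year
GALAXY_RADIUS_LY = 50_000    # rough galactic radius, light years
FRONT_SPEED_KM_S = 30        # assumed hop speed, comet-like, km/s
SECONDS_PER_YEAR = 3.15e7

# Front speed in light years per year, then time to cross the Galaxy.
speed_ly_per_year = FRONT_SPEED_KM_S * SECONDS_PER_YEAR / LIGHT_YEAR_KM
crossing_time_years = GALAXY_RADIUS_LY / speed_ly_per_year

print(f"crossing time ≈ {crossing_time_years:.1e} years")
```

With these inputs the result is on the order of 5e8 years, consistent with the comment's rough "1 billion years"; slower hops or a random-walk (rather than radial) expansion would lengthen it.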

Comment author: turchin 02 March 2016 10:18:28PM *  0 points [-]

Another option is that the only way an AI could survive halt risks is either to become crazy or to use a very strange optimisation method of problem solving. In that case it may already be here, but we could not recognise it, because its behavior would be absurd from any rational point of view. I came to this idea when I explored whether UFOs might be an alien AI with a broken goal system. (I estimate this to be less than 1 per cent likely, because both premises are improbable: that UFOs are something real, and that an alien AI exists but is crazy.) I wrote about it in my controversial manuscript "Unknown unknowns as existential risks", p. 90.

https://www.scribd.com/doc/18221425/Unknown-unknowns-as-existential-risk-was-UFO-as-Global-Risk