Nominull comments on The mind-killer - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Comments (151)
That's a very strange perspective. Other threats are "good" in that they are stupid: they won't find you if you colonize space or live on an isolated island, and you might survive through a lucky combination of genes, or figure out a way to actively outsmart them. Stupid existential risks won't methodically exterminate every human, so there is a chance for recovery. Unfriendly AI, on the other hand, won't go away, and you can't hide from it on another planet. (Indifference works this way too: it's the application of power indifferent to humankind that is methodical, e.g. a Paperclip AI.)
Consider: humanity is an intelligence, one not particularly friendly to, say, the fieldmouse. Fieldmice are not yet extinct.
I think it is worth considering the number of species, to which humanity was largely indifferent, that are now extinct as a result of humanity optimizing for other criteria.
Humans satisfice, and not very well at that compared to what an AGI could do. If we effectively optimized for... almost any goal not referring to fieldmice... fieldmice would be extinct.
Humanity is weak.
Humanity is pretty damn impressive from a fieldmouse's perspective, I dare say!
Yet humanity cannot create technology on the level of a fieldmouse.
Fieldmice (outside of Douglas Adams's fiction) aren't any particular threat to us in the way we might be to an Unfriendly AI. They're not likely to program another us to fight us for resources.
If fieldmice were in danger of extinction we'd probably move to protect them, not that that would necessarily help them.