Why do we imagine our actions could have consequences for more than a few million years into the future?
Unless what we believe about evolution is wrong, or UFAI is unlikely, or we are very, very lucky, we should assume there are already a large number of unfriendly AIs in the universe, and probably in our galaxy, and that they will assimilate us within a few million years.
Therefore, justifications like this one, for harming people on Earth today in the name of protecting the entire universe for all time from future UFAI, should be rejected. Our default assumption should be that the offspring of Earth will at best have a short happy life.
ADDED: If you observe, as many have, that Earth has not yet been assimilated, you can draw one of these conclusions:
- The odds of intelligent life developing on a planet are so precisely balanced against the number of suitable planets in our galaxy that, after billions of years, there is exactly one such instance. This is an extremely low-probability coincidence. The anthropic argument does not justify it as easily as it justifies observing one low-probability emergence of intelligent life.
- The progression (intelligent life → AI → expansion and assimilation) is unlikely.
Surely, for a Bayesian, the more reasonable conclusion is number 2! Conclusion 1 has priors we can estimate numerically, and they come out astronomically small. Conclusion 2 has priors we know very little about, so we cannot claim they are small.
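To make the comparison concrete, here is a toy posterior-odds calculation in Python. Every number in it (the priors on each conclusion and the likelihood of the "no assimilation" observation under each) is a purely illustrative assumption of mine, not an estimate from the argument above; the sketch only shows that when the observation fits both hypotheses equally well, the posterior is driven entirely by the priors.

```python
# Toy Bayesian comparison of the two conclusions above.
# All numbers are illustrative assumptions, not real estimates.

# Hypothesis 1 ("lucky planet"): intelligent life is so rare that Earth is
# the only instance in the galaxy after billions of years.
# Hypothesis 2: the progression (intelligent life -> AI -> expansion and
# assimilation) is unlikely.

prior_h1 = 1e-9   # assumed prior: an exact one-per-galaxy balance is a fine-tuned coincidence
prior_h2 = 1e-2   # assumed prior: we know little, so we cannot push this very low

# Likelihood of observing "Earth has not been assimilated" under each hypothesis.
likelihood_h1 = 1.0   # no one else exists, so no assimilation is guaranteed
likelihood_h2 = 1.0   # expansion/assimilation does not happen, so also guaranteed

# Posterior odds (H2 : H1) by Bayes' rule. Since the observation does not
# discriminate between the hypotheses, the odds reduce to the prior odds.
posterior_odds = (prior_h2 * likelihood_h2) / (prior_h1 * likelihood_h1)
print(f"Posterior odds favoring conclusion 2 over conclusion 1: {posterior_odds:.1e}")
```

Under these assumed numbers the odds favor conclusion 2 by about seven orders of magnitude. The specific figures do not matter; the point is that conclusion 1 requires a prior we can bound numerically and find tiny, while conclusion 2's prior cannot be shown to be small.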
To say, "I am so confident in my beliefs about what a superintelligent AI will do, that I consider it more likely that I live on an astronomically lucky planet, than that those beliefs are wrong", is something I might come up with if asked to draw a caricature of irrationality.
As I understand the usage elsewhere on this site, a Friendly AI created by nonhumans ought to embody the terminal values of the creating race, just as we talk about FAIs created by humans embodying the terminal values of humans.
And presumably the same reasoning that concludes that a self-optimizing AI, unless created taking exquisite care to ensure Friendliness, won't actually be compatible with human values (due to the Vastness of mindspace and so forth), also concludes that a powerful alien race, having necessarily been created without taking such care, will similarly not be compatible with human values.
This line of reasoning seems to conclude that drawing a distinction between alien UFAIs and alien FAIs (and, for that matter, alien NIs) is moot in this context -- they are all a threat to humanity.
Which, yes, leads to exactly the same default assumption you cite.
There is some hope that: 1) the absence of any paperclippers out there is evidence that the idea of AI going FOOM is bogus; 2) our FAI will make a deal with other FAIs.
I agree that friendliness is subjective. The CEV of humanity will look like paperclipping to most minds, and will even be disregarded by some human minds.