Why do we imagine our actions could have consequences for more than a few million years into the future?

Unless what we believe about evolution is wrong, or UFAI is unlikely, or we are very very lucky, we should assume there are already a large number of unfriendly AIs in the universe, and probably in our galaxy; and that they will assimilate us within a few million years.

Therefore, justifications like this one for harming people on Earth today in the name of protecting the entire universe, over all time, from future UFAI should be rejected.  Our default assumption should be that the offspring of Earth will at best have a short, happy life.

ADDED:  If you observe, as many have, that Earth has not yet been assimilated, you can draw one of these conclusions:

  1. The odds of intelligent life developing on a planet are precisely balanced with the number of suitable planets in our galaxy, such that after billions of years, there is exactly one such instance.  This is an extremely low-probability scenario.  The anthropic argument does not justify this fine balance as easily as it justifies our observing a single low-probability origin of intelligent life.
  2. The progression (intelligent life → AI → expansion and assimilation) is unlikely.

Surely, for a Bayesian, the more reasonable conclusion is number 2!  Conclusion 1 has priors we can estimate numerically.  Conclusion 2 has priors we know very little about.

To say, "I am so confident in my beliefs about what a superintelligent AI will do, that I consider it more likely that I live on an astronomically lucky planet, than that those beliefs are wrong", is something I might come up with if asked to draw a caricature of irrationality.


we should assume there are already a large number of unfriendly AIs in the universe, and probably in our galaxy; and that they will assimilate us within a few million years.

Let's be Bayesian about this.

Observation: Earth has not been assimilated by UFAIs at any point in the last billion years or so. Otherwise life on Earth would be detectably different.

It is unlikely that there are no (or only a few) UFAIs in our galaxy or the wider universe; but if they do exist, it is unlikely that they would not already have assimilated us.

I don't have enough information to give exact probabilities, but it's a lot more likely than you seem to think that we will survive the next billion years without assimilation from an alien UFAI.

Personally, I think the most likely scenario is either that Earth is somehow special and intelligent life is rarer than we give it credit for; or that alien UFAIs are generally not interested in interstellar/intergalactic travel.

EDIT: More rigorously, let Uf be the event "Alien UFAIs are a threat to us", and Ap be the event "We exist today" (anthropic principle). The prior probability P(Uf) is large, by your arguments, but P(Ap given Uf) is much smaller than P(Ap given not-Uf). Since we observe Ap to be true, the posterior probability P(Uf given Ap) is fairly small.
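
A minimal numeric sketch of that update (all three probabilities below are made-up illustrative assumptions, not estimates anyone in this thread has defended):

```python
# Toy Bayesian update for the Uf / Ap argument above.
# Every number here is an arbitrary assumption for illustration.
p_uf = 0.9               # prior P(Uf): alien UFAIs are a threat (assumed high, per the post)
p_ap_given_uf = 1e-4     # P(Ap | Uf): we exist unassimilated even though they are a threat
p_ap_given_not_uf = 0.5  # P(Ap | not-Uf)

# Bayes' theorem: P(Uf | Ap) = P(Ap | Uf) * P(Uf) / P(Ap)
p_ap = p_ap_given_uf * p_uf + p_ap_given_not_uf * (1 - p_uf)
p_uf_given_ap = p_ap_given_uf * p_uf / p_ap
print(p_uf_given_ap)     # ~0.0018: a large prior still yields a small posterior
```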

Personally, I think the most likely scenario is either that Earth is somehow special and intelligent life is rarer than we give it credit for; or that alien UFAIs are generally not interested in interstellar/intergalactic travel.

Given the sort of numbers thrown about in Fermi arguments, believing the former suggests you are outrageously overconfident that your beliefs about the likely activities of AIs are correct. Surely the second conclusion is more reasonable?

Er... yes. But I don't think it undermines my point that we are unlikely to be assimilated by aliens in the near future.

Personally, I don't think it's at all unreasonable to assign probabilities on the order of 10^-12 or lower to any particular planet developing intelligent life.

I think we can reasonably conclude that Earth has not been assimilated at any point in its entire existence. If it had been assimilated in the distant past, it would not have continued to develop uninfluenced for the rest of its history, unless the AI's utility function were totally indifferent to our development. So we can extend the observed period over which we have not been assimilated back a good four and a half billion years or so. The Milky Way is old enough that intelligent life could have developed well before our solar system ever formed, so we can consider that entire span to contain opportunities for assimilation comparable to those that exist today.

We could make a slightly weaker claim that no Strong AI has assimilated our portion of the galaxy since well before our solar system formed.

If we work from assumptions that make it likely for the universe to contain a "large number" of natural intelligences that go on to build UFAIs that assimilate on an interstellar or intergalactic level, then Earth would almost certainly have already been assimilated millions, even billions of years ago, and we accordingly would not be around to theorize.

After all, it takes only one species building a single intergalactic assimilating UFAI somewhere in the Virgo Supercluster more than 110 million years ago for the whole supercluster to have been assimilated by now, using no physics we don't already know.
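
A back-of-the-envelope check of that figure, assuming a diameter of roughly 110 million light-years for the supercluster and probes travelling at close to light speed (both round numbers chosen only for illustration):

```python
# Rough crossing-time check; both inputs are assumed round figures.
diameter_ly = 110e6         # assumed diameter of the Virgo Supercluster, in light-years
expansion_speed_c = 1.0     # assumed expansion speed as a fraction of light speed
crossing_time_years = diameter_ly / expansion_speed_c
print(crossing_time_years)  # 110,000,000 years to sweep the supercluster edge to edge
```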

I suppose it's possible that the first AI, and thus the oldest and most powerful, was a strange variety of paper-clipper that just wanted to tile the universe with huge balls of hydrogen undergoing nuclear fusion.

If this is true I'd be interested to know what our Supercluster looked like before.

Does anybody else on this board notice the similarities between speculations on the properties of AI and speculation on the properties of God? Will a friendly AI be able to protect us from unfriendly AIs if the friendly one is built first, locally?

Do we have strong evidence that we are NOT the paperclips of an AI? Would that be different from or the same as the creations of a god? Would we be able to tell the difference, or would only an observer outside the system be able to see the difference?

Does anybody else on this board notice the similarities between speculations on the properties of AI and speculation on the properties of God?

Why do you think Vernor Vinge dubbed AI, in one of his novels, "applied theology"?

Does anybody else on this board notice the similarities between speculations on the properties of AI and speculation on the properties of God?

Yes.

Do we have strong evidence that we are NOT the paperclips of an AI?

No, and I don't see how we could, given that any observation we make could always be explained as another part of its utility function. On the other hand, we don't have any strong evidence for it, so it basically comes down to priors. This is similar to the God debate, but I think the question of priors may be more interesting here; I'd quite like to see an analysis of it if anyone has the spare time.

Would that be different from or the same as the creations of a god?

For a sufficiently broad definition of god, no, but I would say an AI would have some qualities not usually associated with God, particularly the quality of having been created by another agent.

Unless there is a good story about how a low-complexity part of the universe developed or evolved into something that would eventually become (or create) an AI with a utility function that looks exactly like the universe we find ourselves in, our current story (evolution from abiogenesis, from the right mixture of chemicals and physical forces, from the natural life and death of large, hot collections of hydrogen, formed from an initially even distribution of matter) seems far more parsimonious.

Beat me to it; it occurred to me just a bit ago that this ought to have been the main objection in my first comment. Of the premises that there are a large number of natural intelligences in our area of the universe, that natural intelligences are likely to create strong AI, and that strong AIs are likely to expand to monopolize the surrounding space at relativistic speeds, we can conclude from our observations that at least one is almost certainly false.

Which is evidence against the possibility of AI going FOOM. I wrote this before:

The Fermi paradox allows for, and provides, the only conclusions and data we can analyze that amount to empirical criticism of concepts like the Paperclip maximizer and of general risks from superhuman AIs with non-human values, without working directly on AGI to test those hypotheses ourselves. If you accept the premise that life is not unique and special, then one other technological civilisation in the observable universe should be sufficient to leave potentially observable traces of technological tinkering. Due to the absence of any signs of intelligence out there, especially paper-clippers burning the cosmic commons, we might conclude that unfriendly AI could not be the most dangerous existential risk that we should worry about.

Unless what we believe about evolution is wrong, or we are very very lucky, we should assume there are already a large number of unfriendly AIs in the universe, and probably in our galaxy.

Doesn't this depend on the likelihood/prevalence of intelligent life? For all we know, we might be the only sentient species out there. Also, even if there are a lot of unfriendly AIs out there, a friendly one would still vastly improve our fate, whether by fighting off the unfriendly ones, or reaching an agreement with them to mutual benefit, or rescue simulations. The greater age of the alien ones might give them a massive advantage, but that depends on whether FOOM will ever run into diminishing returns. If it does, then the difference between, say, a 500,000 year old AI and a 2 million year old AI may not be much.

None of this should be construed as supporting the arguments in "Put all your eggs in one basket".

Also, even if there are a lot of unfriendly AIs out there, a friendly one would still vastly improve our fate, whether by fighting off the unfriendly ones, or reaching an agreement with them to mutual benefit, or rescue simulations.

One potential option might be to put us through quantum suicide - upload everyone into static storage, then let them out only in the branches where the UFAI is defeated. Depending on their values and rationality, both AIs could do the same thing in order to each have a universe to themselves without fighting. That could doom other sentients in the UFAI's branch, though, so it might not be the best option even if it would work.

Doesn't this depend on the likelihood/prevalence of intelligent life? For all we know, we might be the only sentient species out there.

For the record, that's Michael Anissimov's position (see in particular the last paragraph).

Also, even if there are a lot of unfriendly AIs out there, a friendly one would still vastly improve our fate, whether by fighting off the unfriendly ones, or reaching an agreement with them to mutual benefit, or rescue simulations.

So far as I understand "rescue simulations" in this context, I'd classify them as a particular detail of "a short happy life".

The greater age of the alien ones might give them a massive advantage, but that depends on whether FOOM will ever run into diminishing returns. If it does, then the difference between, say, a 500,000 year old AI and a 2 million year old AI may not be much.

I wouldn't expect there to ever be diminishing returns from acquiring more matter and energy.

For all we know, we might be the only sentient species out there.

I meant for that to fall under "what we believe about evolution [and the origin of life, which is technically not the same thing] is wrong".

We might be the only sentient species in the galaxy. This is not that improbable - since we know we are here and we don't see any aliens.

Either potential intelligent agents in the galaxy are ahead of us (where are they?), or they are behind us (yay, we are first!), or they emerged at around the same time as us (not terribly likely).

From that, either colonising the galaxy is harder than it looks - or, we're first.

From that, either colonising the galaxy is harder than it looks - or, we're first.

Harder, or less desirable. Yes; and isn't it more likely the former, than that we are (literally) astronomically lucky?

It isn't lucky to be the first in a galaxy.

Most intelligent species that evolve in a galaxy will be the first ones (on the assumption that intelligent agents typically conquer the galaxy they are in quickly, thereby suppressing the evolution of subsequent intelligent species).

So, being first would be ordinary, mundane, commonplace.

...and being second would be rare and unlucky.

That assumes that travel between galaxies is extremely difficult. Once a species has control over an entire galaxy, why wouldn't it then spread to other galaxies? Sure, it would be tougher than interstellar colonization, but the same basic problems apply.

Moreover, we don't see any large-scale stellar engineering in other galaxies. Finding the signs of such would be much tougher in other galaxies than in our own, but once a galaxy is under a species' complete control, it is hard to see why they wouldn't go about modifying stars on a large scale. The notion that they would both not spread to other galaxies and not substantially modify their own is implausible.

Replace 'first in their galaxy' with 'first in their own past light-cone' and you not only remove the assumption that travel between galaxies is difficult, but also explain the fact that we haven't observed any. If we assume they travel at near-light speeds and consume anything they can reach then any point from which they could be observed has already been consumed.

Moreover, we don't see any large-scale stellar engineering in other galaxies.

Some people do. There are quite a few serious-looking papers on that topic - though I don't have them handy.

I'd be very interested in references for them if you could find them. As far as I'm aware, there's been discussion about how to look for signs of engineering. There have been a handful of things that at first looked artificial and were then explained (pulsars being the most obvious example).

That assumes that travel between galaxies is extremely difficult.

Hmm - I can't see where I was assuming that.

If intergalactic travel is easy then the model of the first intelligent species in each galaxy taking over that galaxy fails to explain why we don't encounter other species. We should still expect species to spread far out.

I don't think his idea actually relies on intergalactic travel being difficult, just the way he stated it. If, say, intergalactic travel is easy, but not intercluster, then the idea could still apply, just at the galactic cluster level rather than at the galaxy level.

Either they would have arrived already (and taken over the galaxy, suppressing us in the process) - or they have yet to arrive. The chances of them initially showing up around about now are going to be pretty small.

They apparently haven't arrived here already - unless they seeded us originally - else where are they? So, if they are out there, they have yet to arrive.

I don't really see how all this makes much difference to the original argument.

The point of that argument was that seeing yourself alone in the galaxy with no aliens around is only to be expected - if the first intelligences rapidly expand and suppress the subsequent development of other intelligent life. So, being first is not so much of a miracle.

Your "ADDED" bit is nonsense.

The odds of intelligent, tool-using life developing could easily be so low that in the entire observable universe (all eighty billion galaxies of it) it only will happen once. This gives us at least ten orders of magnitude difference in possible probabilities, which is not even remotely "precisely balanced".

Earth being "lucky" is meaningless in this context. The whole point of the anthropic principle is that anyone capable of considering the prior improbability of a particular planet giving origin to intelligent life is absolutely certain to trace his origin to a planet that has a posterior probability of 1 of giving origin to intelligent life. If the conditions of the universe are such that only 1 planet in the lifetime of every 1 million galaxies will develop intelligent tool-using life, then we can expect about 80,000 intelligent tool-using species in the observable universe to observe that high infrequency. In none of those 80,000 cases will any member of those species be able to correctly conclude that the prior improbability of intelligence arising on his planet somehow proves that there should be more than 80,000 planets with intelligent species based on the posterior observation that his planet did indeed give origin to intelligent life.

Finally, your second option doesn't actually explain why Earth is not assimilated. If UFAI is highly improbable while natural tool-using intelligence is reasonably frequent, then Earth still should have been assimilated by unfriendly natural intelligence already. A hundred million years is more than enough time for a species to have successfully colonized the whole Milky Way, and the more such species that exist, the higher the probability that one actually would.
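
A sanity check on the "hundred million years" claim: even a slow colonization wave crosses the galaxy well within that window (the wave speed below is an arbitrary assumption for illustration):

```python
# Time for a colonization front to cross the Milky Way; the wave speed is assumed.
galaxy_diameter_ly = 100_000  # approximate diameter of the Milky Way, in light-years
wave_speed_c = 0.005          # assume the front advances at 0.5% of light speed
crossing_time_years = galaxy_diameter_ly / wave_speed_c
print(crossing_time_years)    # 20,000,000 years, comfortably under a hundred million
```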

OK, this is my first day posting on Less Wrong. This topic "Don't Plan For the Future" interests me a lot and I have a few ideas on it. Yet it's been inactive for over a year. Possibilities that occur to me: (1) the subject has been taken up more definitively in a more recent thread, and I need to find it, (2) because of the time lag, I should start a new "Discussion" (I think I have more than 2 karma points already so it's at least possible) even if it's the same basic idea as this, (3) I should post ideas and considerations right here despite the time lag. If there's some guide that would answer this question, I'll happily take a pointer to that as well.

Welcome Bart. Thread necromancy is encouraged here. Go ahead and share your ideas!

As I understand the usage elsewhere on this site, a Friendly AI created by nonhumans ought to embody the terminal values of the creating race, just as we talk about FAIs created by humans embodying the terminal values of humans.

And presumably the same reasoning that concludes that a self-optimizing AI, unless created taking exquisite care to ensure Friendliness, won't actually be compatible with human values (due to the Vastness of mindspace and so forth), also concludes that a powerful alien race, having necessarily been created without taking such care, will similarly not be compatible with human values.

This line of reasoning seems to conclude that drawing a distinction between alien UFAIs and alien FAIs (and, for that matter, alien NIs) is moot in this context -- they are all a threat to humanity.

Which, yes, leads to exactly the same default assumption you cite.

There is some hope that 1) the absence of any paperclippers out there is evidence that the idea of AI going FOOM is bogus, or 2) our FAI will make a deal with other FAIs.

I agree that friendliness is subjective. CEV of humanity will equal paperclipping for most minds and even be disregarded by some human minds.

The universe may also contain a large number of rapidly expanding friendly AIs. The likelihood of one arising on this planet may correlate with the likelihood of it arising on other planets, although I'm not sure how strong a correlation we can suppose for intelligent life forms with completely different evolutionary histories. In that case, anything that increases the chance of friendly AI arising on this planet can also be taken to decrease our chance of being subsumed by an extraplanetary unfriendly AI.

An AI that is friendly to one intelligent species might not be friendly to others, but such an AI should probably be considered imperfectly friendly.

This seems to be conflating a Friendly intelligence (that is, one constrained by its creators' terminal values) with a friendly one (that is, one that effectively signals the intent to engage in mutually beneficial social exchange).

As I said below, the reasoning used elsewhere on this site seems to conclude that a Friendly intelligence with nonhuman creators will not be Friendly to humans, since there's no reason to expect a nonhuman's terminal values to align with our own.

(Conversely, if there is some reason to expect nonhuman terminal values to align with human ones, then it may be worth clarifying that reason, as the same forces that make such convergence likely for natural-selection-generated NIs might also apply to human-generated AIs.)

I think that an AI whose values aligned perfectly with our own (or at least, my own) would have to assign value in its utility function to other intelligent beings. Supposing I created an AI that established a utopia for humans, but when it encountered extraterrestrial intelligences, subjected them to something they considered a fate worse than death, I would consider that to be a failing of my design.

Perfectly Friendly AI might be deserving of a category entirely to itself, since by its nature it seems that it would be a much harder problem to resolve even than ordinary Friendly AI.

If there is just one uFAI in the observable universe, then we're done for, because it will be technologically ahead of our FAI. It might be a uFAI that is acting locally, but once it detects our FAI implementing CEV, it will leave its current scope and abort it.

The odds of intelligent life developing on a planet are precisely balanced with the number of suitable planets in our galaxy, such that after billions of years, there is exactly one such instance.

You're not taking many-worlds into account here.

After I learned of MWI I felt the Fermi Paradox was more or less solved. Considering the number of near-misses we've already had with global nuclear war, it seems likely that almost all intelligent species destroy themselves shortly after learning to crack the atom (including us). The universes where a single species managed to avoid self-annihilation will greatly outnumber the universes where multiple species managed to do so.
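
One way to see that counting claim numerically: if each of N species independently survives its nuclear era with a small probability p, then among the branches in which anyone survives at all, branches with exactly one survivor dominate. N and p below are arbitrary assumptions for illustration, and this sketch covers only the counting, not the anthropic or MWI details:

```python
# Toy model: N civilizations, each surviving its nuclear era with probability p.
# Both numbers are arbitrary assumptions.
N, p = 1000, 1e-4

p_zero = (1 - p) ** N               # no survivors anywhere
p_one = N * p * (1 - p) ** (N - 1)  # exactly one survivor
p_at_least_one = 1 - p_zero
print(p_one / p_at_least_one)       # ~0.95: a lone survivor is by far the most common surviving branch
```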

Assuming quantum fluctuations can have macro-level effects, of course.

I'm utterly bewildered by this comment.

If it seems likely that almost all intelligent species destroy themselves shortly after "cracking the atom," what more do you need? The absence of perceivable aliens is exactly what you should expect in that case.

What do MWI and quantum fluctuations have to do with it?

If MWI were wrong, it would raise the question "if our survival to this point was so improbable, how come we did survive?". Our existence would be evidence that we were wrong about the improbability.

Yes, what he said.

I'm still not really convinced there are actual macro-level differences between universes though, which kinda puts the whole thing back into doubt.

I'm not at all convinced that AIs will inevitably expand at near-light speed, but if they do, then it is still surprising that they haven't encountered us. This is of course just the Fermi paradox, but still provides some evidence that we might just be very lucky and at the beginning of the pack.

If AIs don't necessarily expand much, then our outlook is even better, and concentrating on FAI here on Earth is especially important.

Lastly, it might be unlikely, but it seems possible that a young FAI on Earth could protect us from an older UFAI.

Should we devote resources to trying to expand across the galaxy and thus influence events millions of years in the future? I say no.

I've been thinking about this question for many years, and it's just in the past few days I've learned about the Singularity. I don't at the moment assign a very high probability to that -- yes, I'm ignorant, but I'm going to leave it out for the moment.

Suppose we posit that from some currently unavailable combination of technology, physics, psychology, politics and economics (for starters) we can have "legs" and cover interstellar ground. We also crucially need a density of planets that can be exploited to create the vibrant economies that could launch other expensive spacecraft to fuel exponential growth. If we're going to expand using humans, we have to assume a rather high density of not just planets that can support intelligent life, but planets that can support our particular form of intelligent life -- earth-like planets. We have to assume that those planets have not evolved competent, intelligent life of their own -- even if they are far behind us technologically, their inherent advantages of logistics could very well keep us from displacing them. But on the plus side, it also seems highly likely that if we can get such a process of exponential growth going in our corner of the galaxy, it could then be expanded throughout our galaxy (at the least).

If we can do it, so can they -- actually, they already did.

To expand on that: I attach great importance to the fallacy of human exceptionalism. Over history we've had to give up beliefs about cultural and racial superiority, humans being fundamentally different from animals, the earth being the center of the universe, the sun being the center of the universe... The list is familiar.

We've discovered stars with planets. Perhaps fewer have small, rocky (non-gas giant) planets than theories initially suggested, but there are a few (last I knew) and that's just a small adjustment in our calculations. We have no evidence whatsoever that our solar system is exceptional on the scale of the galaxy -- there are surely many millions of rocky planets (a recent news story suggests billions).

Just how improbable is the development of intelligent life? I'd be interested to know how much deep expertise in biology we have in this group. The 2011 survey results say 174 people (16%) in the hard sciences, with some small fraction of that biologists? I claim no expertise, but can only offer what (I think) I know.

First, I'd heard it guessed that life developed on earth just about as soon as the earth was cool enough to allow its survival. Second, evolution has produced many of its features multiple times. This seems to bear on how likely evolution elsewhere is to develop various characteristics. If complicated ones like wings and eyes and (a fair amount of) intelligence evolved independently several times, then it wasn't just some miraculous fluke. It makes such developments in life on other planets seem far more probable. Third, the current time in earth history does not have a special status. If intelligent life hadn't evolved on earth now, it had a few billion more years to happen.

Based on those considerations, I consider it a near certainty that alien civilizations have developed -- I'd guess many thousands in our galaxy as a minimum. It's a familiar argument that we should assume we are in the middle of such a pack temporally, so at the least hundreds of civilizations started millions of years ago. If expansion across the galaxy was possible, they'd be here by now. The fact that we have detected no signals from SETI says virtually nothing -- that just means there is nobody in our immediate vicinity who is broadcasting right now.

Since we haven't observed any alien presence on earth, we would have to assume that civilization expansion is not independent -- some dominant civilization suppresses others. There are various possibilities as to the characteristics of that one civilization. They might want to remain hidden. They might not interfere until a civilization grows powerful enough to start sending out colonies to other worlds. Perhaps they just observe us indefinitely and only interfere if we threaten their own values. Even in some benign confederation, where all the civilizations share what they have to offer, we would offer just one tiny drop to a bucket formed from -- what, millions? -- of other civilizations. What all of these have in common is that it is not our values that dominate the future: it's theirs.

It seems likely to me that my initial assumption about exponential space colonization is wrong. It is unfashionable in futurist circles to suggest something is impossible, especially something like sending colonists to other planets, something that doesn't actually require updates to our understanding of the laws of physics. Critics point out all the other times someone said something was impossible, and it turned out that it could be done. But that is very different from saying that everything that seems remotely plausible can in fact be done. If I argued against interstellar colonization based on technical difficulties, that would be a weak argument. My argument is based on the fact that if it were possible, the other civilizations would be here already.

This argument extends to the colonization potential of robots produced in the aftermath of the Singularity. If their robots could do it, they'd be here already.

To achieve the huge win that would make such an expensive, difficult project worthwhile, exponential space colonization has to be possible, and we have to be the first ones. I think both are separately highly unlikely, and in combination astronomically unlikely.

Hmmmm. Nearly two days and no feedback other than a "-1" net vote. Brainstorming explanations:

  1. There is so much wrong with it no one sees any point in engaging me (or educating me).
  2. It is invisible to most people for some reason.
  3. Newbies post things out of synch with accepted LW thinking all the time (related to #1).
  4. No one's interested in the topic any more.
  5. The conclusion is not a place anyone wants to go.
  6. The encouragement to thread necromancy was a small minority view or intended ironically.
  7. More broadly, there are customs of LW that I don't understand.
  8. Something else.

Likely, few people read it, maybe just one voted, and that's just one, potentially biased opinion. The score isn't significant.

I don't see anything particularly wrong with your post. Its sustaining ideas seem similar to the Fermi paradox and the berserker hypothesis. From which you derive that a great filter lies ahead of us, right?

Thank you so much for the reply! Simply tracing down the 'berserker hypothesis' and 'great filter' puts me in touch with thinking on this subject that I was not aware of.

What I thought might be novel about what I wrote included the idea that independent evolution of traits was evidence that life should progress to intelligence a great deal of the time.

When we look at the "great filter" possibilities, I am surprised that so many people think that our society's self-destruction is such a likely candidate. Intuitively, if there are thousands of societies, one would expect a high variability in social and political structures and outcomes. The next idea I read, that "no rational civilization would launch von Neuman probes" seems extremely unlikely because of that same variability. Where there would be far less variability is mundane constraints of energy and engineering to launch self-replicating spacecraft in a robust fashion. Problems there could easily stop every single one of our thousand candidate civilizations cold, with no variability.

Yes, the current speculations in this field are of wildly varying quality. The argument about convergent evolution is sound.

A minor quibble about convergent evolution, which doesn't change the conclusion much about there being other intelligent systems out there.

All organisms on Earth share some common points (though there might be shadow biospheres), like similar environmental conditions (a rocky planet with a moon, a certain span of temperatures, etc.) and a certain biochemical basis (proteins, nucleic acids, water as a solvent, etc.). I'd distinguish convergent evolution within the same system of life on the one hand, and convergent evolution in different systems of life on the other. We have observed the first, and the two likely overlap, but some traits may not be as universal as we'd be led to think.

For instance, eyes may be pretty useful here, but deep in the oceans of a world like Europa, provided life is possible there, they might not (an instance of the environment conditioning what is likely to evolve).

I should add that I know this is probably wrong in some respects, and I'm very interested in learning what they are.