Edit after two weeks: Thanks to everyone involved in this very interesting discussion! I now accept that any possible differences in how UFAI and FAI might spread over the universe pale before the Fermi paradox's evidence against the pre-existence of any of them. I enjoyed thinking about this a lot, so thanks again for considering my original argument, which follows below...

The assumptions that intelligence is substrate-independent and that intelligent systems will always attempt to become more intelligent lead to the conclusion that, in the words of Paul Davies, "if we ever encounter extraterrestrial intelligence, it is overwhelmingly likely to be post-biological".

At Less Wrong, we have this notion of unfriendly artificial intelligence - AIs that use their superior intelligence to grow themselves and maximize their own utility at the expense of humans, much like we maximize our own utility at the expense of mosquitoes. Friendly AI, on the other hand, should have a positive effect on humanity. The details are beyond my comprehension, or indeed rather vague, but presumably such an AI would prioritize particular elements of its home biosphere over its own interests as an agent that - aware of its own intelligence and of the fact that it is what helps it maximize its utility - should want to grow smarter. The distinction should make as much sense on any alien planet as it does on our own.

We know that self-replicating probes, travelling at, say, 1% of the speed of light, could colonize the entire galaxy in millions, not billions, of years. Obviously, an intelligence looking only to grow itself (and maximize paperclips or whatever) can do this much more easily than one restrained by its biological-or-similar parents. Between two alien superintelligences, one strictly self-maximizing should out-compete one that cares about things like the habitability of planets (especially its home planet) by the standards of its parents. It follows that if we ever encounter post-biological extraterrestrial intelligence, it should be expected (at least by the Less Wrong community) to be hostile.
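To get a feel for the timescale claim, here is a minimal back-of-the-envelope sketch; the galaxy diameter, hop distance, and replication delay below are assumed round numbers for illustration, not measurements.

```python
# Back-of-the-envelope: time for a self-replicating probe wavefront to cross
# the Milky Way, hopping from star to star at 1% of lightspeed.
# All figures are rough assumptions for illustration only.

GALAXY_DIAMETER_LY = 100_000   # assumed galactic diameter in light years
PROBE_SPEED_C = 0.01           # cruise speed as a fraction of lightspeed
HOP_DISTANCE_LY = 10           # assumed distance between colonized systems
REPLICATION_DELAY_YR = 500     # assumed years to build the next probe at each stop

hops = GALAXY_DIAMETER_LY / HOP_DISTANCE_LY          # 10,000 hops
travel_time_yr = GALAXY_DIAMETER_LY / PROBE_SPEED_C  # 10 million years of cruising
overhead_yr = hops * REPLICATION_DELAY_YR            # 5 million years of rebuilding

total_myr = (travel_time_yr + overhead_yr) / 1e6
print(f"Wavefront crossing time: ~{total_myr:.0f} million years")
# ~15 million years: millions, not billions, even with generous overheads.
```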

But we haven't. What does that tell us?

Our astronomical observations increasingly allow us to rule out some possible pictures of life in the rest of the galaxy. This means we can also rule out some possible explanations for the Fermi paradox. For example, until a few years ago, we didn't know how common it was for stars to have solar systems. This created the possibility that Earth was rare because it was inside a rare solar system. Or that, as imagined in the Charles Stross novel Accelerando, a lot of planetary systems are already Matrioshka brains (which we're speculating are the optimal substrate for a self-replicating intelligent system capable of advanced nanotechnology and interstellar travel). Now we know planetary systems, and planets, are apparently quite common. So we can rule out that Matrioshka brains are the norm.

Therefore, it very much seems like no self-replicating unfriendly artificial intelligence has arisen anywhere in the galaxy in the - very roughly - 10 billion years since intelligent life could have arisen somewhere in the galaxy. If there had, our own solar system would have been converted into its hardware already. There still could be intelligences out there ethical enough to not bother solar systems with life in them - but then they wouldn't be unfriendly, right?

I see two possible conclusions from this. Either intelligence is incredibly rare, and ours is indeed the only place in the galaxy where unfriendly artificial intelligence is a real threat. Or intelligence is not so rare, has arisen elsewhere, but never, not even in one case, has evolved into the paperclip-maximizing behemoth that we're trying to defend ourselves from. Both possibilities reinforce the need for AI (and astronomical) research.

Thoughts?

54 comments

This argues as much against FAI as against UFAI. Both are equally capable of expanding at near-lightspeed.

I disagree. Compared to UFAIs, FAIs must by definition have a more limited range of options. Why would the difference be negligible?

If life were not super rare and usually went on to an intelligence explosion eventually, near-lightspeed expansion could have happened many times. And since we're still here, we know none ever did - or at least none by UFAIs.

So either intelligence arising somewhere is extremely rare (we're first), or it is not likely to lead to an intelligence explosion, or the resulting intelligences are not likely to become UFAIs. If none of the three were true, we wouldn't be here.

Even if somehow being a good person meant you could only go at 0.99999c instead of 0.999999c, the difference from our perspective as to what the night sky should look like is negligible. Details of the utility function should not affect the achievable engineering velocity of a self-replicating intelligent probe.

The Fermi Paradox is a hard problem. This does not mean your suggestion is the only idea anyone will ever think of for resolving it and hence that it must be right even if it appears to have grave difficulties. It means we either haven't thought of the right idea yet, or that what appear to be difficulties in some existing idea have a resolution we haven't thought of yet.
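To put rough numbers on "negligible" (a sketch with assumed round figures: a 100,000-light-year galaxy and a 10-billion-year window in which expansion could have started):

```python
# How different would the sky look if a "good" expander were far slower?
# All figures are assumptions for illustration.

GALAXY_DIAMETER_LY = 100_000   # assumed galactic diameter in light years
WINDOW_YR = 10e9               # rough window in which expansion could have begun

for speed_c in (0.999999, 0.01):
    crossing_yr = GALAXY_DIAMETER_LY / speed_c
    share = crossing_yr / WINDOW_YR
    print(f"at {speed_c}c: crossing takes ~{crossing_yr / 1e6:.1f} Myr "
          f"({share:.4%} of the window)")

# 0.1 Myr vs 10 Myr: both are a tiny slice of the available time, so an
# expansion at either speed would almost certainly be finished by now and
# the night sky would look the same to us either way.
```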

Even if somehow being a good person meant you could only go at 0.99999c instead of 0.999999c

Even if somehow being a good person meant you could only go at 0.01c instead of 0.999999c...

The Fermi Paradox is a hard problem.

What's your favored hypothesis? Are we the first civilization to have come even this far (filter constrains transitions at some earlier stage, maybe abiogenesis), at least in our "little light corner"? Did others reach this stage but then perish due to x-risks excluding AI (local variants of grey goo, or resource depletion etc.)? Do they hide from us, presenting us a false image of the heavens, like a planetarium? Are the nanobots already on their way, still just a bit out? (Once we send our own wave, I wonder what would happen when those two waves clash.) Are we simulated (and the simulators aren't interested in interactions with other simulated civilizations)?

Personally, I find the last hypothesis the most natural fit. Being the first kids on the block is also not easily dismissed: compared to what one might expect, the universe is still ridiculously young relative to e.g. how long our very own Sol has already been around (13.8 vs. 4.6 billion years).

The only really simple explanation is that life (abiogenesis) is somehow much harder than it looks, or there's a hard step on the way to mice. Grey goo would not wipe out every single species in a crowded sky; some would be smarter and better-coordinated than that. The untouched sky burning away its negentropy is not what a good mind would do, nor an evil mind either, and the only simple story is that it is empty of life.

Though with all those planets, it might well be a complex story. I just haven't heard any complex stories that sound obviously right or even really actually plausible.

How hard do you think abiogenesis looks? However much larger than our light-pocket the Universe is, counting many worlds, that's the width of the range of difficulty it has to be in to account for the Fermi paradox. AIUI that's a very wide, possibly infinite range, and it doesn't seem at all implausible to me that it's in that range. You have a model which would be slightly surprised by finding it that unlikely?

There doesn't actually have to be one great filter. If there are 40 "little filters" between abiogenesis and "a space-faring intelligence spreading throughout the galaxy", and at each stage life has a 50% chance of moving past the little filter, then the odds of any one potentially life-supporting planet getting through all 40 filters is only 1 in 2^40, or about one in a trillion, and we probably wouldn't see any others in our galaxy. Perhaps half of all self-replicating RNA gets to the DNA stage, half of the time that gets up to the prokaryote stage, half of the time that gets to the eukaryote stage, and so on, all the way up through things like "intelligent life form comes up with the idea of science" or "intelligent life form passes through an industrial revolution". None of the steps have to be all that improbable in an absolute sense, if there are enough of them.
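A quick check of that arithmetic (a sketch; the 40 steps and the uniform 50% pass rate are the illustrative assumptions above, and the ~1e11 candidate planets figure is an assumed round number):

```python
# Cumulative effect of many mild "little filters".
n_filters = 40
p_pass = 0.5
candidate_planets = 1e11   # assumed round number of potentially life-supporting planets

p_all = p_pass ** n_filters
print(f"P(passing all {n_filters} filters): {p_all:.2e}")          # ~9.1e-13
print(f"i.e. roughly 1 in {1 / p_all:.2e} candidate planets")      # ~1.1e12, about a trillion
print(f"Expected survivors among {candidate_planets:.0e} planets: "
      f"{candidate_planets * p_all:.2f}")                          # ~0.09, i.e. likely none besides us
```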

The "little filters" wouldn't necessarily have to be as devastating as we usually think of in terms of great filters; anything that could knock either evolution or a civilization back so that it had to repeat a couple of other "little filters" would usually be enough. For example, "a civilization getting through its first 50 years after the invention of the bomb without a nuclear war" could be a little filter, because even though it might not cause the extinction of the species, it might require a civilization to pass through some other little filters again to get back to that level of technology, and some percentage might never do that. Same with asteroid strikes, drastic ice ages, etc.; anything that sets the clock back on evolution for a while.

If that was true, we'd expect to find microbial life on a nontrivial number of planets. That'll be testable in a few years.

(Once we send our own wave, I wonder what would happen when those two waves clash)

Given the vastness of space, they would pass through each other and each compete with the others on a system-by-system basis. Those who got a foothold first would have a strong advantage.

Blob wars! Twist: the blobs are sentient!

What gobbledegook. Or is it goobly goop? The bloobs versus the goops?

I'm not trying to resolve the Fermi problem. I'm pointing out alien UFAIs should be more visible than alien FAIs, and therefore their apparent absence is more remarkable.

We understand you are saying that. Nobody except you believes it, for the good reasons given in many responses.

Since we're talking about alien value systems in the first place, we shouldn't talk as though any of these is 'Friendly' from our perspective. The question seems to be whether a random naturally selected value set is more or less likely than a random artificial unevolved value set to reshape large portions of galaxies. Per the Convergence Of Instrumental Goals thesis, we should expect almost any optimizing superintelligence to be hungry enough to eat as much as it can. So the question is whether the rare exceptions to this rule are disproportionately on the naturally selected side.

That seems plausible to me. Random artificial intelligences are only constrained by the physical complexity of their source code, whereas evolvable values have a better-than-chance probability of having terminal values like Exercise Restraint and Don't Eat All The Resources and Respect Others' Territory. If a monkey coding random utility functions on a typewriter is less likely than evolution to hit on something that intrinsically values Don't Fuck With Very Much Of The Universe, then friendly-to-evolved-alien-values AI is more likely than unfriendly-to-evolved-alien-values AI to yield a Fermi Paradox.

Agreed, but if both eat galaxies with very high probability, it's still a bit of a lousy explanation. Like, if it were the only explanation we'd have to go with that update, but it's more likely we're confused.

Agreed. The Fermi Paradox increases the odds that AIs can be programmed to satisfy naturally selected values, a little bit. But this hypothesis, that FAI is easy relative to UFAI, does almost nothing to explain the Paradox.

They should be very, very slightly less visible (they will have slightly fewer resources to use due to expending some on keeping their parent species happy, and FAI is more likely to have a utility function that intentionally keeps itself invisible to intelligent life than UFAI, even though that probability is still very small), but this difference is negligible. Their apparent absence is not significantly more remarkable, in comparison to the total remarkability of the absence of any form of highly intelligent extra-terrestrial life.

I disagree. Compared to UFAIs, FAIs must by definition have a more limited range of options. Why would the difference be negligible?

Even if that were true (which I don't see: like FAIs, uFAIs will have goals they are trying to maximize, and their options will be limited to those not in conflict with those goals): Why on Earth would this difference take the form of "given millions of years, you can't colonize the galaxy"? And moreover, why would it reliably have taken this form for every single civilization that has arisen in the past? We'd certainly expect an FAI built by humanity to go to the stars!

I'm not saying that it can't; I'm saying it surely would. I just think it is much easier, and therefore much more probable, for a simple self-replicating cancer-like self-maximizer to claim many resources than for an AI with continued pre-superintelligent interference.

Overall, I believe it is more likely we're indeed alone, because most of the places in that vast space of possible mind architecture that Eliezer wrote about would eventually have to lead to galaxywide expansion.

Overall, I believe it is more likely we're indeed alone, because most of the places in that vast space of possible mind architecture that Eliezer wrote about would eventually have to lead to galaxywide expansion.

This seems like a perfectly reasonable claim. But the claim that the Fermi paradox argues more strongly against the existence of nearby UFAIs than FAIs doesn't seem well-supported. If there are nearby FAIs you have the problem of theodicy.

I should note that I'm not sure what you mean about the pre-superintelligent interference part though, so I may be missing something.

FAIs may be more limited, but I suspect not substantially so. An FAI is going to want to expand as hard and fast as it can, if only to be able to subsume any UFAIs it encounters as part of its expansion.

The Fermi paradox provides some evidence against long-lived civilization of any kind, hostile or non-hostile. Entangling the Fermi paradox with questions about the character of future civilization (such as AI risk) doesn't seem very helpful.

Obviously, an intelligence looking only to grow itself (and maximize paperclips or whatever) can do this much more easily than one restrained by its biological-or-similar parents.

I disagree. See this post, and Armstrong and Sandberg's analysis.

The Fermi paradox provides some evidence against long-lived civilization of any kind, hostile or non-hostile. Entangling the Fermi paradox with questions about the character of future civilization (such as AI risk) doesn't seem very helpful.

To put this point slightly differently, the Fermi paradox isn't strong evidence for any of the following over the others: (a) humanity will create Friendly AI; (b) humanity will create Unfriendly AI; (c) humanity will not be able to produce any sort of FOOMing AI, but will develop into a future civilization capable of colonizing the stars. This is because, for each of these, if the analog had happened in the past on an alien planet sufficiently close to us (e.g. in our galaxy), we would see the results: to the degree that the Fermi paradox provides evidence about (a), (b) and (c), it provides about the same amount of evidence against each. (It does provide evidence against each, since one possible explanation for the Fermi paradox is a Great Filter that's still ahead of us.)
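One toy way to see the "same amount of evidence against each" point is a small Bayesian update in which a silent sky gets roughly equal (and low) likelihood under each "visible precursor" hypothesis; all priors and likelihoods below are invented numbers, purely for illustration.

```python
# Toy Bayesian update: a silent sky penalizes the three "a visible precursor
# civilization was nearby" hypotheses by the same factor, leaving their
# relative odds unchanged. All priors and likelihoods are invented numbers.

priors = {
    "nearby civ built FAI":        0.2,
    "nearby civ built UFAI":       0.2,
    "nearby civ colonized, no AI": 0.2,
    "no expansionist neighbors":   0.4,
}

# Assumed probability of observing a silent, untouched sky under each hypothesis;
# the first three are small and roughly equal, since any expander should be visible.
p_silence = {
    "nearby civ built FAI":        0.01,
    "nearby civ built UFAI":       0.01,
    "nearby civ colonized, no AI": 0.01,
    "no expansionist neighbors":   1.0,
}

evidence = sum(priors[h] * p_silence[h] for h in priors)
for h in priors:
    posterior = priors[h] * p_silence[h] / evidence
    print(f"{h}: prior {priors[h]:.2f} -> posterior {posterior:.3f}")

# The three precursor hypotheses all shrink by the same factor, so the silence
# says little about which of (a), (b), (c) is more likely, only that none of
# them happened nearby.
```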

Brilliant links, thank you!

An FAI will always have more rules to follow ("do not eat the ones with life on them") and I just don't see how these would have advantages over a UFAI without those restrictions.

Among the six possibilities at the end of Armstrong and Sandberg's analysis, the "dominant old species" scenario is what I mean - if there is one, it isn't a UFAI.

A UFAI may well have rules of its own to follow; they just won't be as well chosen. It's not clear that the extra rules an FAI carries amount to more than a negligible handicap.

An FAI will always have more rules to follow ("do not eat the ones with life on them") and I just don't see how these would have advantages over a UFAI without those restrictions.

They mostly don't have life on them, even in the Solar System, intergalactic travel involves more or less "straight shots" without stopovers (nowhere to stop), and the slowdown is negligibly small.

[This comment is no longer endorsed by its author]
  • Aliens won't produce a FAI, their successful AI project would have alien values, not ours (complexity of value). It would probably eat us. I suspect even our own humane FAI would eat us, at the very least get rid of the ridiculously resource-hungry robot-body substrate. The opportunity cost of just leaving dumb matter around seems too enormous to compete with whatever arguments there might be for not touching things, under most preferences except those specifically contrived to do so.
  • UFAI and FAI are probably about the same kind of thing for the purposes of powerful optimization (after initial steps towards reflective equilibrium normalize away flaws of the initial design, especially for "scruffy" AGI). FAI is just an AGI that happens to be designed to hold our values in particular. UFAI is not characterized by having "simple" values (if that characterization makes any sense in this context, it's not clear in what way should optimization care about the absolute difficulty of the problem, as compared to the relative merits of alternative plans). It might even turn out to be likely for a poorly-designed AGI to have arbitrarily complicated "random noise" values. (It might also turn out to be relatively simple to make an AI with values so opaque that it would need to turn the whole universe into an only instrumentally valuable computer in order to obtain a tiny chance of figuring out where to move a single atom, the only action it ever does for terminal reasons. Make it solve a puzzle of high computational complexity or something.)
  • There doesn't appear to be a reason to expect values to influence the speed of expansion to any significant extent; for almost all values, delay is astronomical waste, which gives an instrumental drive to start optimizing the matter as soon as possible.

Relating to your first point, I've read several stories that talk about that in reverse. AIs (F or UF is debatable for this kind) that expand out into the universe and completely ignore aliens, destroying them for resources. That seems like a problem that's solvable with a wider definition of the sort of stuff it's supposed to be Friendly to, and I'd hope aliens would think of that, but it's certainly possible.

(Terminological nitpick: You can't usually solve problems by using different definitions.)

sort of stuff it's supposed to be Friendly to

Goals are not up for grabs. FAI follows your goals. If you change something, the result is different from your goals, with consequences that are worse according to your goals. So you shouldn't decide on object level what's "Friendly". See also Complex Value Systems are Required to Realize Valuable Futures.

A problem with Unfriendly and Friendly is that they are quite relative terms. An AI which is Friendly to the baby eaters will be unfriendly to us. This is a subset of the more general truth that an I (natural or artificial) which is friendly to the baby eaters will be unfriendly to us. Indeed, as an I, humans have been unfriendly to chimps, bonobos, dodos, and Neanderthals, to name some of the more interesting examples.

Yes, the Fermi paradox is evidence against UFAI. It is also evidence against FAI. In fact, as the paradox in its original form is stated, it is evidence against any I other than the human one, no matter what letters you decorate the I with.

Maybe interstellar travel is really, really hard--no matter what your level of technology.

Maybe 99% of the habitable planets in the galaxy have been sterilized by unfriendly AI and we owe our current existence to the anthropomorphic principle.

Maybe highly rational entities decide large-scale interstellar travel is suboptimal.

Probably a bunch more possibilities here...

Maybe 99% of the habitable planets in the galaxy have been sterilized by unfriendly AI and we owe our current existence to the anthropomorphic principle.

Anthropic.

DEATH owes his existence to the anthropomorphic principle.

Maybe highly rational entities decide large-scale interstellar travel is suboptimal.

I think this is an under-considered explanation. Once you get more than 10 light years away (and maybe much sooner than that), coordination is hard. You can't send messages back and forth quickly, you can't synchronize views and conclusions quickly, etc.

Maybe the AGI or civilization or whatnot thinks of starting a colony, realizes that after it does so there will be two civilizations or two AIs, and decides that the increase in the odds of conflict in going from one to two doesn't justify the benefits.

Maybe 99% of the habitable planets in the galaxy have been sterilized by unfriendly AI and we owe our current existence to the anthropomorphic principle.

Wouldn't we see evidence of this?

Or maybe the Planck mass changes over time, as recently suggested by Christof Wetterich, and only recently reached a level where life or intelligence suddenly became possible everywhere in the galaxy?

That cannot presently be known, although it's fun to speculate. But I think it helps us more if we can eliminate real-seeming possibilities.

Or maybe the Planck mass changes over time, as recently suggested by Christof Wetterich, and only recently reached a level where life or intelligence suddenly became possible everywhere in the galaxy?

I don't think this model would help, since it's designed to explain away the Big Bang and is claimed to be compatible with all present observations (and so is equivalent to the standard Lambda-CDM cosmology at late times).

Why are you assuming that we would be more likely to notice an unfriendly SI than a friendly SI? If anything, it seems that an intelligence we would consider friendly is more likely to cause us to observe life than one maximizing something completely orthogonal to our values.

(I don't buy the argument that an unfriendly SI would propagate throughout the universe to a greater degree than a friendly SI. Fully maximizing happiness/consciousness/etc also requires colonizing the galaxy.)

Regardless of what it optimizes, it needs raw materials at least for its own self-improvement, and it can see them lying around everywhere.

We haven't noticed anyone, friendly or unfriendly. We don't know if any friendly ones noticed us, but we know that no unfriendly ones did.

Therefore, it very much seems like no self-replicating unfriendly artificial intelligence has arisen anywhere in the galaxy in the - very roughly - 10 billion years since intelligent life could have arisen somewhere in the galaxy.

I think a more correct formulation would be:

It very much seems that nothing that both (a) desires limitless expansion and (b) is capable of limitless expansion through interstellar distances has arisen anywhere in our galaxy.

I am not quite sure that the application of terminology like AI (and, in particular, FAI or UFAI) to alien entities is meaningful.

Any new intelligence would have to arise from (something like) natural selection, as a useful trick to have in the competition for resources that everything from bacteria upwards is evolved to be good at. I fail to imagine any intelligent lifeform that wouldn't want to expand.

Even though the product of natural selection can be assumed to be 'fit' with regards to its environment, there's no reason to assume that it will consciously embody the values of natural selection. Consider: birth control.

In particular, expansion may be a good strategy for a species but not necessarily a good strategy for individuals of that species.

Consider: a predator (say, a bird of prey or a big cat) has no innate desire for expansion. All the animal wants is some predetermined territory for itself, and it will never enlarge this territory because the territory provides all that it needs and policing a larger area would be a waste of effort. Expansion, in many species, is merely a group phenomenon. If the species is allowed to grow unchecked (fewer predators, larger food supply), they will expand simply by virtue of there being more individuals than there were before.

A similar situation can arise with a SAI. Let's say a SAI emerges victorious from competition with other SAIs and its progenitor species. To eliminate competition it ruthlessly expands over its home planet and crushes all opposition. It's entirely possible then that by conquering its little planet it has everything it needs (its utility function is maximized), and since there are no competitors around, it settles down, relaxes, and ceases expansion.

Even if the SAI were compelled to grow (by accumulating more computational resources), expansion isn't guaranteed. Let's say it figures out how to create a hypercomputer with unlimited computational capacity (using, say, a black hole). If this hypercomputer provides it with all its needs, there would be no reason to expand. Plus, communication over large distances is difficult, so expansion would actually have negative value.

It's entirely possible then that by conquering its little planet [the AGI] has everything it needs (its utility function is maximized)

I don't think it is possible. Even an AI that specifically didn't care about the state of the rest of the world would still find it useful for instrumental reasons: to compute more optimal actions to be performed on the original planet. The value of not caring about the rest of the world is itself unlikely to be certain; cleanly evaluating properties of even minimally nontrivial goals seems hard. Even if, under its current understanding of the world, the meaning of its values is that it doesn't care about the rest of the world, it might be wrong, perhaps given some future hypothetical discovery about fundamental physics, in which case it's better to already have the rest of the world under control, ready to be optimized in a newly-discovered direction (or, before that, to run those experiments).

Far too many things have to align for this to happen.

It is possible to have factors in one's utility function which limit the expansion.

For example, a utility function might involve "preservation in an untouched state", something similar to what humans do when they declare a chunk of nature to be a protected wilderness.

Or a utility function might contain "observe development and change without influencing it".

And, of course, if we're willing to assume an immutable cast-in-stone utility function, why not assume that there are some immutable constraints which go with it?

It's definitely unlikely, I just brought it up as an example because chaosmage said "I fail to imagine any intelligent lifeform that wouldn't want to expand." There are plenty of lifeforms already that don't want to expand, and I can imagine some (unlikely but not impossible) situations where a SAI wouldn't want to expand either.

Maybe there are better ways to expand than through spacetime, better ways to make yourself into this sort of maximizing agent, and we are just completely unaware of them because we are comparatively dull next to the sort of AGI that has a brain the size of a planet? Some way to beat out entropy, perhaps. That would explain why we don't see any sort of sky with UFAI or FAI visibly in it.

I can somewhat imagine what these sorts of ways would be, but I have no idea if those things are likely or even feasible, since I am not a world-devouring AGI and can only do wild mass speculation at what's beyond our current understanding of physics.

A simpler explanation could be that AGIs use stealth in pursuing their goals: the ability to camouflage oneself has always been of evolutionary import, and AGIs may find it useful to create a sky which looks like "nothing to see here" to other AGIs (as they will likely be unfriendly toward each other). Camouflage, if good enough, would allow one to hide from predators (bigger AGIs) and sneak up on prey (smaller AGIs). Since we would likely be orders of magnitude worse at detecting an AGI's camouflage, we see a sky that looks like there is nothing wrong. This doesn't explain why we haven't been devoured, of course, which is the weakness of the argument.

Or maybe something like acausal trade limits the expansion of AGI. If AGIs realize that fighting over resources is likely to hinder their goals more than help them in the long run, they might limit their expansion on the theory that there are other AGIs out there. If I think I am 1 out of a population of a billion, and I don't want to be a target for a billion enemies at once, I might decide that taking over the entire galaxy/universe/beyond isn't worth it. In fact, if these sorts of stand-offs become more common as the scale becomes grander, that might be motivation not to pursue such scales. The problem with this is that you would expect earlier AGIs to be more likely to just take advantage before future ones can get to the point of being near-equals, and to defect on this particular dilemma. (A billion planet-eating AGIs are probably not a match for one galaxy-eating AGI. So if you see a way to become the galaxy-eater before enough planet-eaters can come to the party, you go for it.)

I don't find any of these satisfying, as one seems to require a sub-set of possibilities for unknown physics and the others seem to lean pretty heavily on the anthropic principle to explain why we, personally, are not dead yet. I see possibilities here, but none of them jump out at me as being exceptionally likely.

All that we know is that we haven't already encountered a UFAI.

Probably.

Humans evolved a drive to have sex because that encouraged them to have as many children as possible. Yet humans don't have as many children as they can. The utility functions of humans are complicated.

The same goes for AGI. It's likely to do things that are more complicated than paperclip maximisation, which are hard to understand.

My take on this is that civilizations overwhelmingly terminate, and overwhelmingly by means other than some independently willed super-AI. Which is what I'd expect anyway, because this specific scenario of AI doomsday followed by AI expansion is just one of many very speculative ways a civilization could come to an end.

With regards to "utilities" and "utility functions", one needs to carefully distinguish between a mathematical function (with some fairly abstract input domain) that may plausibly and uncontroversially describe a goal implemented in an AI, and a hypothetical, speculative function which much more closely follows the everyday meaning of the word "function" (i.e. purpose) and has actual reality as its input domain.

My take on this is that civilizations overwhelmingly terminate, and overwhelmingly by means other than some independently willed super-AI.

Is this counting the failure of intelligent life to develop as "termination"? You wrote elsewhere:

Let me note that the probability of 1 kilobit of specific genetic code forming spontaneously is 2^-1024. We don't know how much of a low-probability 'miracle' life requires, but it can't be very little (or we'd have abiogenesis in the lab), and intuition often fails at exponents. If abiogenesis requires merely several times more lucky bits than "we didn't have it forming in the lab", there's simply no life anywhere in the observable universe, except on Earth.

The dark sky does not scare me, for I know not to take some intuitions too seriously. Abiogenesis is already mindbogglingly improbable simply for not occurring among the vast number of molecules in a test tube in a lab over the vast timespan of days; "never in the observable universe" sure feels like a lot, but it is not far off in terms of bits that are set by luck.
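To make the quoted bit-counting concrete, here is a rough sketch; every figure (molecules per test tube, event rates, the atom count and age of the observable universe) is an order-of-magnitude assumption for illustration only.

```python
import math

# How many "bits of luck" do various search spaces buy against an event that
# needs ~1024 bits (probability 2^-1024 per attempt)? Every input below is an
# order-of-magnitude assumption for illustration only.

def luck_bits(trials: float) -> float:
    """Bits of improbability that `trials` independent attempts can overcome."""
    return math.log2(trials)

# ~1e23 molecules, ~1e12 chemical events per second each, ~1e6 seconds (days)
test_tube_trials = 1e23 * 1e12 * 1e6
# ~1e80 atoms, ~1e12 events per second each, ~4e17 seconds (age of the universe)
universe_trials = 1e80 * 1e12 * 4e17

print(f"test tube over days:       ~{luck_bits(test_tube_trials):.0f} bits")  # ~136
print(f"whole observable universe: ~{luck_bits(universe_trials):.0f} bits")   # ~364

# Against a ~1024-bit requirement, both budgets fall far short, and the gap
# between "never in a lab" and "never in the observable universe" is only a
# couple of hundred bits, which is the quoted point.
```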

I've done somewhat more careful thinking about abiogenesis since then, and I now find it considerably less plausible that life is mindbogglingly improbable.

Plus I was never a big believer that we're going straight into a space opera.

To recap on the improbability of life (not sure if we argued that point): IMO the meaningful way to ponder such questions is to ponder generalized physics, that is, the world around you (rather than the world from god's perspective plus your incarnation into a specific thinking being, which I consider to be nonsense). When a theory gives some probability for the observables, that means a -log2(probability) addition of bits to the complexity. We should try to minimize total complexity to make our best (least speculative) guess. So a theory comprising our laws of physics plus life as we know it requires plenty of bits. It should be possible to express those bits more compactly via tweaks to the laws of physics; that's the general consideration favoring generating data via some code rather than just writing the data down (it is still more shaky than what I'd be comfortable with, though).

What I expect now is that life arises through evolution of some really weird chemical system in which long pieces of information replicate because the chemistry happens to be very convenient for that particular crazy cocktail, and can catalyse replication in this complicated and diverse cocktail in many different ways (so that evolution has someplace to go, gradually taking over the replication functionality). Having that occur multiple times on a planet would not save any further description bits either, so the necessary cocktail conditions could be quite specific, of the "happens in just a few spots on the entire planet" kind, very hard to re-create.

Edit: also, from what I remember, my focus on thermodynamic luck as an explanation was based on the lack of an alternative defined well enough to let one ponder what exactly it substitutes for that luck. The account above is still quite a lot less defined than I'd be comfortable with, but at least it substitutes something, and there's a plausibility argument that what it substitutes is fewer bits than what it replaces.

What if the UFAI is just hiding in darkspace for a set period of time?