
Eliezer_Yudkowsky comments on The Fermi paradox as evidence against the likelihood of unfriendly AI - Less Wrong Discussion

5 Post author: chaosmage 01 August 2013 06:46PM


Comments (53)


Comment author: Eliezer_Yudkowsky 01 August 2013 11:51:50PM 7 points [-]

Even if somehow being a good person meant you could only go at 0.99999c instead of 0.999999c, the difference from our perspective as to what the night sky should look like is negligible. Details of the utility function should not affect the achievable engineering velocity of a self-replicating intelligent probe.
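The point about negligible difference can be checked with back-of-envelope arithmetic. A minimal sketch (my own illustration, using a rough 100,000-light-year galactic diameter as an assumed distance): the coordinate travel time across the galaxy at the two quoted speeds differs by under a year out of roughly a hundred thousand.

```python
# Coordinate (outside-observer) time to cross a galaxy-scale distance,
# in years, at a given fraction of lightspeed: t = d / v,
# with d in light-years and v in units of c.

GALAXY_DIAMETER_LY = 100_000  # rough Milky Way diameter, an assumed figure

def crossing_time_years(speed_in_c: float, distance_ly: float = GALAXY_DIAMETER_LY) -> float:
    """Years for a probe at speed_in_c to cover distance_ly (observer frame)."""
    return distance_ly / speed_in_c

fast = crossing_time_years(0.999999)  # the "unconstrained" probe
slow = crossing_time_years(0.99999)   # the "good person" probe
print(f"difference: {slow - fast:.2f} years out of ~{fast:,.0f}")
```

Even at Luke_A_Somers's more extreme 0.01c (below), the crossing takes ten million years, which is still short on the billions-of-years timescales relevant to the night sky.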

The Fermi Paradox is a hard problem. This does not mean your suggestion is the only idea anyone will ever think of for resolving it and hence that it must be right even if it appears to have grave difficulties. It means we either haven't thought of the right idea yet, or that what appear to be difficulties in some existing idea have a resolution we haven't thought of yet.

Comment author: Luke_A_Somers 02 August 2013 01:58:49PM 3 points [-]

Even if somehow being a good person meant you could only go at 0.99999c instead of 0.999999c

Even if somehow being a good person meant you could only go at 0.01c instead of 0.999999c...

Comment author: Kawoomba 02 August 2013 06:44:39AM *  3 points [-]

The Fermi Paradox is a hard problem.

What's your favored hypothesis? Are we the first civilization to have come even this far (filter constrains transitions at some earlier stage, maybe abiogenesis), at least in our "little light corner"? Did others reach this stage but then perish due to x-risks excluding AI (local variants of grey goo, or resource depletion etc.)? Do they hide from us, presenting us a false image of the heavens, like a planetarium? Are the nanobots already on their way, still just a bit out? (Once we send our own wave, I wonder what would happen when those two waves clash.) Are we simulated (and the simulators aren't interested in interactions with other simulated civilizations)?

Personally, the last hypothesis seems like the most natural fit. Being the first kids on the block is also not easily dismissed: the universe is still ridiculously young (13.8 billion years) relative to e.g. how long our very own Sol has already been around (4.6 billion years), and compared to what one might expect.

Comment author: Eliezer_Yudkowsky 02 August 2013 07:20:45AM 9 points [-]

The only really simple explanation is that life (abiogenesis) is somehow much harder than it looks, or there's a hard step on the way to mice. Grey goo would not wipe out every single species in a crowded sky, some would be smarter and better-coordinated than that. The untouched sky burning away its negentropy is not what a good mind would do, nor an evil mind either, and the only simple story is that it is empty of life.

Though with all those planets, it might well be a complex story. I just haven't heard any complex stories that sound obviously right or even really actually plausible.

Comment author: ciphergoth 02 August 2013 10:19:40AM 1 point [-]

How hard do you think abiogenesis looks? However much larger than our light-pocket the Universe is, counting many worlds, that's the width of the range of difficulty abiogenesis has to fall in to account for the Fermi paradox. AIUI that's a very wide, possibly infinite range, and it doesn't seem at all implausible to me that it's in that range. Do you have a model that would be even slightly surprised by finding it that unlikely?

Comment author: Yosarian2 11 August 2013 09:24:58PM 0 points [-]

There doesn't actually have to be one great filter. If there are 40 "little filters" between abiogenesis and "a space-faring intelligence spreading throughout the galaxy", and at each stage life has a 50% chance of moving past the little filter, then the odds of any one potentially life-supporting planet getting through all 40 filters is only 1 in 2^40, or about one in a trillion, and we probably wouldn't see any others in our galaxy. Perhaps half of all self-replicating RNA gets to the DNA stage, half of the time that gets up to the prokaryote stage, half of the time that gets to the eukaryote stage, and so on, all the way up through things like "intelligent life form comes up with the idea of science" or "intelligent life form passes through an industrial revolution". None of the steps have to be all that improbable in an absolute sense, if there are enough of them.
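The arithmetic here is easy to verify. A minimal sketch (using the comment's own hypothetical numbers, forty filters at 50% each):

```python
# Forty independent "little filters", each passed with probability 0.5.
# The chance that one candidate planet clears all of them is 0.5**40,
# i.e. about one in 1.1 trillion.

n_filters = 40
p_pass = 0.5

p_all = p_pass ** n_filters          # probability of clearing every filter
planets_per_success = 1 / p_all      # candidate planets per surviving civilization

print(f"one success per {planets_per_success:,.0f} planets")
```

Since the Milky Way has on the order of a hundred billion stars, odds of one in a trillion would indeed predict that we see no other spacefaring civilization in our galaxy.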

The "little filters" wouldn't necessarily have to be as devastating as what we usually think of as great filters; anything that could knock either evolution or a civilization back far enough that it had to repeat a couple of other "little filters" would usually be enough. For example, "a civilization getting through its first 50 years after the invention of the bomb without a nuclear war" could be a little filter: even though a nuclear war might not cause the extinction of the species, it might require the civilization to pass through some other little filters again to get back to that level of technology, and some percentage might never do so. The same goes for asteroid strikes, drastic ice ages, etc.; anything that sets the clock back on evolution for a while.

Comment author: chaosmage 02 October 2013 11:34:01PM 1 point [-]

If that were true, we'd expect to find microbial life on a nontrivial number of planets. That'll be testable in a few years.

Comment author: Luke_A_Somers 02 August 2013 01:57:21PM 1 point [-]

(Once we send our own wave, I wonder what would happen when those two waves clash)

Given the vastness of space, they would pass through each other and each compete with the others on a system-by-system basis. Those who got a foothold first would have a strong advantage.

Comment author: Kawoomba 02 August 2013 06:23:32PM 0 points [-]

Blob wars! Twist: the blobs are sentient!

What gobbledegook. Or is it goobly goop? The bloobs versus the goops?

Comment author: chaosmage 02 August 2013 12:21:13AM 0 points [-]

I'm not trying to resolve the Fermi problem. I'm pointing out that alien UFAIs should be more visible than alien FAIs, and that their apparent absence is therefore the more remarkable.

Comment author: Eliezer_Yudkowsky 02 August 2013 01:34:48AM 1 point [-]

We understand you are saying that. Nobody except you believes it, for the good reasons given in many responses.

Comment author: RobbBB 05 August 2013 11:33:48PM *  3 points [-]

Since we're talking about alien value systems in the first place, we shouldn't talk as though any of these is 'Friendly' from our perspective. The question seems to be whether a random naturally selected value set is more or less likely than a random artificial unevolved value set to reshape large portions of galaxies. Per the Convergence Of Instrumental Goals thesis, we should expect almost any optimizing superintelligence to be hungry enough to eat as much as it can. So the question is whether the rare exceptions to this rule are disproportionately on the naturally selected side.

That seems plausible to me. Random artificial intelligences are only constrained by the physical complexity of their source code, whereas evolvable values have a better-than-chance probability of having terminal values like Exercise Restraint and Don't Eat All The Resources and Respect Others' Territory. If a monkey coding random utility functions on a typewriter is less likely than evolution to hit on something that intrinsically values Don't Fuck With Very Much Of The Universe, then friendly-to-evolved-alien-values AI is more likely than unfriendly-to-evolved-alien-values AI to yield a Fermi Paradox.

Comment author: Eliezer_Yudkowsky 05 August 2013 11:38:40PM 2 points [-]

Agreed, but if both eat galaxies with very high probability, it's still a bit of a lousy explanation. Like, if it were the only explanation we'd have to go with that update, but it's more likely we're confused.

Comment author: RobbBB 05 August 2013 11:41:26PM *  0 points [-]

Agreed. The Fermi Paradox increases the odds that AIs can be programmed to satisfy naturally selected values, a little bit. But this hypothesis, that FAI is easy relative to UFAI, does almost nothing to explain the Paradox.

Comment author: DSherron 02 August 2013 11:11:44PM -1 points [-]

They should be very, very slightly less visible: they will have slightly fewer resources to use, having expended some on keeping their parent species happy, and an FAI is more likely than a UFAI to have a utility function that intentionally keeps it invisible to intelligent life (though that probability is still very small). But this difference is negligible. Their apparent absence is not significantly more remarkable in comparison to the total remarkability of the absence of any form of highly intelligent extraterrestrial life.