
Comment author: kilobug 30 August 2014 07:13:25AM 1 point [-]

I'm still highly skeptical of the existence of the "Great Filter". It's one possible explanation for "why don't we see any hint that someone else exists", but not the only one.

The most likely explanation to me is that intelligent life is just so damn rare. Life itself is probably frequent enough: we know there are a lot of exoplanets, many have the conditions for life, and life seems relatively simple. But intelligent life? It seems to have required a great deal of luck to arise on Earth, and it seems somewhat likely that it's rare enough that we are alone not only in the galaxy, but in a large sphere around us. The universe is so vast that there probably is intelligent life elsewhere, but if we grant that an AI can colonize at 10% of c, and the closest such civilization is 100 million light years away and has existed for only 1 billion years, it hasn't reached us yet.
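The travel-time arithmetic behind that last sentence is easy to check; a minimal sketch, where the speed, distance, and age are the comment's assumed numbers rather than measurements:

```python
# Back-of-envelope check of the colonization numbers above (all assumed).
expansion_speed = 0.1   # fraction of light speed at which colonization proceeds
distance_ly = 100e6     # light years to the nearest intelligent civilization
age_years = 1e9         # how long that civilization has existed

travel_time_years = distance_ly / expansion_speed
print(round(travel_time_years))  # 1000000000: at these numbers, they would only just be arriving now
```

So with these particular figures the claim is borderline: such a civilization's wavefront would be reaching us right about now, and any smaller age or larger distance means total silence is exactly what we'd expect.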

This whole chain of reasoning, "we compute how likely intelligent life is using numbers that come from nowhere, we don't detect any intelligence, so we conclude there is a Great Filter", seems very fishy to me. Not detecting any intelligence should, first of all, make us revise down the probability of the hypothesis "intelligent life is frequent", before making us add new "epicycles" by postulating a Great Filter.
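The update being argued for here can be sketched as a toy Bayesian calculation. Every number below is illustrative, chosen only to show the direction of the shift, not to estimate anything:

```python
# Toy Bayesian update: how observing silence shifts belief between two hypotheses.
# All priors and likelihoods are made-up illustrative numbers.
prior = {"intelligence frequent": 0.5, "intelligence rare": 0.5}
# Probability that we would observe *no* other intelligence under each hypothesis:
p_silence = {"intelligence frequent": 0.1, "intelligence rare": 0.9}

unnormalized = {h: prior[h] * p_silence[h] for h in prior}
total = sum(unnormalized.values())
posterior = {h: p / total for h, p in unnormalized.items()}
print(posterior["intelligence rare"])  # silence pushes most of the mass onto "rare"
```

Whatever the exact numbers, silence is more probable under "rare" than under "frequent", so the posterior always moves toward "rare"; that is the comment's point stated as arithmetic.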

A few elements, among many others, that make it unlikely for intelligent life to be frequent:

  • life, especially technological civilization, requires lots of heavy elements, which didn't exist early in the universe, meaning only stars of roughly the same generation as the Sun have a chance of hosting it;

  • it took about 4.5 billion years after the planet formed for intelligent life to evolve on Earth, out of the roughly 5.5 billion it has before the Sun becomes hot enough to vaporize its water;

  • the dinosaur era shows that it was easy for evolution to settle into a local optimum that didn't include intelligence, and it took a great deal of luck for a cataclysm to be powerful enough to knock evolution out of that optimum without doing so much damage that it killed all complex life;

  • the Sun is lucky to be in a mostly isolated region, where very few nearby supernovae blast life on Earth; I don't think intelligent life could develop around a star too close to the galactic center, where a single supernova close enough would wipe out all complex life;

  • the Moon, which is unusual, seems to have played a major role in allowing intelligent life to appear, from stabilizing the Earth's axial tilt (and therefore its climate) to easing the transition from sea to land through tides.

Comment author: Stuart_Armstrong 30 August 2014 08:28:36AM 2 points [-]

intelligent life is just so damn rare.

That's an early filter.

Comment author: Gunnar_Zarncke 30 August 2014 12:52:24AM *  0 points [-]

Once AI is developed, it could "easily" colonise the universe.

I also see this claimed often, but my best guess is that this might well be the hard part. Getting into space is already hard, and fusion could turn out to be technologically impossible (or never energy-positive).

Comment author: Stuart_Armstrong 30 August 2014 08:26:15AM 0 points [-]

Fission is sufficient.

Comment author: Stuart_Armstrong 30 August 2014 08:24:29AM 2 points [-]

Interesting points, but:

implicit machinery of the market

I don't see how you can expect an implicit market mechanism to correct what is pretty much a market failure.

Comment author: Gunnar_Zarncke 30 August 2014 01:06:55AM 1 point [-]

I will again try for a poll.

Where do you think the Great Filter most likely lies:


Comment author: Stuart_Armstrong 30 August 2014 07:47:39AM 0 points [-]

What about central nervous systems?

Comment author: CellBioGuy 29 August 2014 08:04:22PM *  2 points [-]

Once AI is developed, it could "easily" colonise the universe.

I dispute this assumption. I think it is vanishingly unlikely for anything self-replicating (biological, technological, or otherwise) to survive trips from one island-of-clement-conditions (~ 'star system') to another.

Comment author: Stuart_Armstrong 29 August 2014 08:57:17PM 1 point [-]

http://lesswrong.com/lw/hll/to_reduce_astronomical_waste_take_your_time_then/ : six hours of the Sun's energy for every galaxy we could ever reach, at a redundancy of 40. Given a million years, we can blast out at least a million probes per star. Some will get through.
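The "some will get through" step is just the complement rule for independent attempts; a sketch with made-up numbers (the per-probe survival probability below is an illustrative assumption, not a figure from the linked post):

```python
# If each probe independently survives the trip with probability p,
# the chance that at least one of n probes arrives is 1 - (1 - p)**n.
# Both p and n here are illustrative assumptions.
p = 1e-5           # assumed per-probe survival probability (terrible odds)
n = 1_000_000      # probes launched at a single target star
p_at_least_one = 1 - (1 - p) ** n
print(p_at_least_one > 0.9999)  # True: awful per-probe odds still add up
```

With a million probes, arrival only fails if every single one is lost, so even a one-in-a-hundred-thousand survival chance per probe makes at least one arrival nearly certain.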

Comment author: V_V 29 August 2014 07:57:37PM 1 point [-]

The Fermi paradox implies that something very unlikely is indeed occurring.

Or space colonization is just hard.

Comment author: Stuart_Armstrong 29 August 2014 08:55:20PM 4 points [-]

The evidence seems to be that it's "easy" (see http://lesswrong.com/lw/hll/to_reduce_astronomical_waste_take_your_time_then/ ), at least over the thousand-million year range.

Comment author: AlexMennen 29 August 2014 08:42:36PM 2 points [-]

I suspect you are correct that the great filter does not lie between urbilaterian and dolphin intelligence, but I did think of one possible hole in the argument (though I don't think it's likely to end up mattering). It is possible that, instead of it being easy in general for something like an urbilaterian to evolve significant intelligence, it is only easy in places like Earth. That is, while there exist environmental conditions under which you would expect an urbilaterian-level organism to easily evolve to the dolphin level in several independent instances, such conditions may be very rare, and on most planets where urbilaterian-level organisms evolve, they don't advance much further.

Comment author: Stuart_Armstrong 29 August 2014 08:53:31PM 1 point [-]

So far we haven't seen any evidence that the Earth is particularly rare, I think.

The Great Filter is early, or AI is hard

10 Stuart_Armstrong 29 August 2014 04:17PM

Attempt at the briefest content-full Less Wrong post:

Once AI is developed, it could "easily" colonise the universe. So the Great Filter (preventing the emergence of star-spanning civilizations) must strike before AI could be developed. If AI is easy, we could conceivably have built it already, or we could be on the cusp of building it. So the Great Filter must predate us, unless AI is hard.

The Octopus, the Dolphin and Us: a Great Filter tale

10 Stuart_Armstrong 29 August 2014 04:05PM

Is intelligence hard to evolve? Well, we're intelligent, so it must be easy... except that only an intelligent species would be able to ask that question, so we run straight into the problem of anthropics. Any being that asked that question would have to be intelligent, so this can't tell us anything about the difficulty of evolving intelligence (a similar mistake would be to ask "is most of the universe hospitable to life?" and then to look around and note that everything seems pretty hospitable at first glance...).

Instead, one could point at the great apes, note their high intelligence, observe that intelligence arose separately there, and conclude that it can't be too hard to evolve.

One could do that... but one would be wrong. The key test is not whether intelligence can arise separately, but whether it can arise independently. Chimpanzees, bonobos, gorillas and the rest are all "on our line": they are close to common ancestors of ours, which we would expect to be intelligent precisely because we are intelligent. Intelligent species tend to have intelligent relatives. So they provide no extra information about the ease or difficulty of evolving intelligence.

To get independent intelligence, we need to go far from our line. Enter the smart and cute icon on many student posters: the dolphin.

Comment author: Agathodaimon 29 August 2014 05:08:41AM 0 points [-]

How would you measure aptitude gain?

Comment author: Stuart_Armstrong 29 August 2014 01:02:08PM 0 points [-]

There are suggestions, such as using some computable version of the measure AIXI is maximising. Kaj Sotala has a review of methods, currently unpublished I believe.
