If our civilization doesn’t collapse, then in 50 to 1000 years humanity will almost certainly start colonizing the galaxy.  This seems inconsistent with the fact that there are a huge number of other planets in our galaxy, yet we have not found evidence of extraterrestrial life.  Drawing from this paradox, Robin Hanson writes that there should be some great filter “between death and expanding lasting life, and humanity faces the ominous question: how far along this filter are we?”

Katja Grace reasons that this filter probably lies in our future and so we are likely doomed.   (Hanson agrees with Grace’s argument.)  Please read Grace’s post before reading the rest of this post.

 

Small groups of humans have been in situations similar to the one currently faced by our entire species.  To see this, imagine you live in a prehistoric hunter-gatherer tribe of 200 people.  Your tribe lives on a large, mostly uninhabited island.  Your grandparents, along with a few other people, came over to the island about 30 years ago.  Since arriving on the island, no one in your tribe has encountered any other humans.   You figure that if your civilization doesn’t get wiped out, then in 100 or so years your people will multiply in number and spread throughout the island.  If your civilization does spread, then any new immigrants to the island would quickly encounter your tribe.

Why, you wonder, have you not seen other groups of humans on your island?  You figure that it’s either because your tribe was the first to arrive on the island, or because your island is prone to extinction disasters that periodically wipe out all its human inhabitants.  Using anthropic reasoning similar to that used by Katja Grace, you postulate the existence of four types of islands:

1)  Islands that are easy to reach and are not prone to disaster.
2)  Islands that are hard to reach and are not prone to disaster.
3)  Islands that are easy to reach and are prone to disaster.
4)  Islands that are hard to reach and are prone to disaster.

You conclude that your island almost certainly isn’t type (1) because if it were you would almost certainly have seen other humans on the island.   You also figure that on type (3) islands lots of civilizations will start and then be destroyed.  Consequently, most of the civilizations that find themselves on types (2), (3) or (4) islands are in fact on type (3) islands.  Your tribe, you therefore reason, will probably be wiped out by a disaster.

I believe that the argument for why you are probably on a type (3) island is analogous to that of why the great filter probably lies in our future because both come about from updating “on your own existence by weighting possible worlds as more likely the more observers they contain.”

If, therefore, Grace’s anthropic reasoning is correct then most of the time when a prehistoric group of humans settled a large, uninhabited island, that group went extinct.

 


What settlers have really seen also depends on whether your prior is correct in our world of real islands (implicitly 1/4 to each possibility). But you can easily model what people would see in a world with that distribution of islands:

E.g., let an easy-to-get-to island have 100 visitors and a hard-to-get-to one 10 visitors. Let visitors to disaster-prone islands die before the next people arrive. Let there be ten islands of each type.

Observers per island:

Island type 1: 1 arrives at an uninhabited island, 99 at an inhabited one

Island type 2: 1 arrives at an uninhabited island, 9 at an inhabited one

Island type 3: all 100 arrive at an uninhabited island

Island type 4: all 10 arrive at an uninhabited island

Of the 1120 observers arriving at uninhabited islands (ten of each of the above islands), 1000 will be on type 3 islands, consistent with SIA. In this case the alternative anthropic principle preferred by others, the self-sampling assumption (SSA), can come to the same conclusion because the possible islands you are considering are actual. If real islanders have seen something other than this, it is because there were different frequencies of islands.
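This toy model is easy to check mechanically. Below is a minimal sketch (my own illustration, not the commenter's code) that reproduces the counting above; the visitor numbers and the ten-islands-per-type assumption are taken straight from the comment:

```python
# Toy model from the comment above: 10 islands of each type; an
# easy-to-reach island gets 100 visitors and a hard-to-reach one gets 10;
# on disaster-prone islands every visitor dies before the next arrives,
# so each visitor finds the island empty; on safe islands only the first does.

island_types = {
    1: {"visitors_per_island": 100, "disaster_prone": False},  # easy, safe
    2: {"visitors_per_island": 10,  "disaster_prone": False},  # hard, safe
    3: {"visitors_per_island": 100, "disaster_prone": True},   # easy, doomed
    4: {"visitors_per_island": 10,  "disaster_prone": True},   # hard, doomed
}
ISLANDS_PER_TYPE = 10

# Count observers who arrive at an uninhabited island, by island type.
uninhabited_arrivals = {
    t: (cfg["visitors_per_island"] if cfg["disaster_prone"] else 1) * ISLANDS_PER_TYPE
    for t, cfg in island_types.items()
}

total = sum(uninhabited_arrivals.values())
print(uninhabited_arrivals)             # {1: 10, 2: 10, 3: 1000, 4: 100}
print(total)                            # 1120 observers of an empty island
print(uninhabited_arrivals[3] / total)  # ~0.89 of them are on type 3 islands
```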

What settlers have really seen also depends on whether your prior is correct in our world of real islands (implicitly 1/4 to each possibility).

The prior for reaching a hard-to-reach island should be lower than the prior for an easy-to-reach island. The prior for a hard-to-reach island existing may be the same as the prior for an easy-to-reach island existing. So you have to state your terms clearly before you can state your priors.

(Maybe the prior for an existing island being hard-to-reach is higher, since "easy-to-reach" should mean "easier than considerably more than half of all islands.")

That's not how I understood it. These are just types of islands that are supposed to exist in some hypothetical world. Arbitrarily, we say there are 10 islands of each type. That's all that 1/4 means here.

Small correction: The ten islands of type 1 have a total of 10 immigrants encountering an uninhabited island, and 90 an inhabited one. This doesn't substantially change your conclusion (121 observe uninhabited islands, 100 of which are on type 3).

The OP's last sentence suggests a way to test the hypothesis that islands prone to disaster are just as likely to be encountered as islands not prone to disaster. If the hypothesis is true, then island colonization attempts should have failed about 83% of the time (in the corrected model above, 100 of the 121 observers who arrive at an uninhabited island are on doomed type 3 islands, and 100/121 ≈ 83%).

Of course, such a finding would imply the attempts succeeded 17% of the time. A similar conclusion can be made about galaxy colonization: a small percentage of these attempts will be successful, i.e., the galaxy will be colonized.

Fixed, I think.

It may be worth noting that there were several failed European colonisations of the North American land mass.

IIRC Hanson favours panspermia over abiogenesis. Has he reconciled this with his great filter theory?

I don't think this analogy maps very well.

One difference is the time frame involved. The Fermi Paradox remains very robust because we can view galaxies from Andromeda, which is only a few million light-years away, to the Hubble Ultra Deep Field, which has galaxies billions of light-years away (and therefore viewed billions of years in the past). We have an entire spectrum of time presented to us thanks to varying distances, and we see no evidence of intelligent life anywhere we look.

And near-light travel could settle whole portions of the sky in mere eyeblinks by comparison to how long galaxies exist. Sitting around on an island for 30 to 100 years doesn't compare.

Another difference is that the humans already had a functioning civilization somewhere and traveled to the island, which would be analogous to humans settling other parts of the galaxy rather than being stuck on a single planet, as we still are. The islanders are already past the filter point.

The two situations aren't identical, they just have some similarities when it comes to applying anthropic reasoning. The filter in James' example can be in two possible places: getting to the island, or surviving on the island. These are analogous to the filter being during the rise of civilization and being during civilization surviving long enough to colonize the galaxy.

Where to place the filter for a proper analogy doesn't seem as clear cut to me.

Why aren't the filters during the rise of intelligent life and during intelligent life surviving long enough to settle the islands?

Why aren't the filters during the rise of intelligent life and during intelligent life surviving long enough to settle the islands?

The entire point of the analogy, as far as I can tell, is to move to a domain where our intuition works better. We don't have strong intuition about time frames and probabilities involving the rise of intelligent life. We do have intuition about tribes exploring and colonizing islands. We don't have strong intuition about how long it takes for intelligent life to reach the point where they can settle islands. We do have intuition about the likelihood of natural disasters wiping out island tribes.

It's a matter of time scales and probabilities. Robin Hanson's filter involves astronomical time scales and difficult to measure probabilities. James presents an example with human time scales and probabilities that are relatively easy to measure. The point is not to capture the physical acts (colonizing the stars), but to capture the anthropic reasoning and conclusions.

We do have intuition about tribes exploring and colonizing islands. ... We do have intuition about the likelihood of natural disasters wiping out island tribes.

I don't. I have no idea what the success rate of prehistoric humans settling large islands was.

I don't have specifics either, but I do have some intuition. I know about volcanic islands. I know about volcanic eruptions in recent history. I know about past cities destroyed by volcanoes. I have some idea about how far into the ocean tribal-level technology can take you. I have some idea as to how fast people tend to spread out and explore in general.

I prefer to leave my confidence intervals very wide (+/-100%) rather than inappropriately reduce the problem to one where "our intuition works better."

I guess I don't like anthropic reasoning in general.

We're not trying to narrow our confidence intervals on a cosmic filter by equating it to an island filter. Rather, using anthropic reasoning seems sketchy, so we seek out other problems using anthropic reasoning to see if the anthropic principle holds up. It's the anthropic principle itself we're analyzing here, hence the title of the post, "An empirical test of anthropic principle."

Yes, that is exactly what I want to do.

There is mathematical appeal to the argument, but I am not sure about the premise, which is that technological progress would lead to colonizing the galaxy.

I think Vinge was on to something with his zones of thought model, in which the Powers (civilizations that Transcended) removed themselves to a special zone inaccessible to others. Perhaps a really advanced civilization removes itself from the cosmic dangers, leaving planets for Suckers. There are also easily-thought-of reasons for Powers to mask their existence (e.g. they might not be interested in meeting unknown Powers with different constitution/utility function).

I'm afraid we are counting white swans as far as the premise is concerned.

Very interesting post, but you almost lost me here:

Consequently, most of the civilizations that find themselves on types (2), (3) or (4) islands are in fact on type (3) islands

I suggest a clarification to the effect of "more civilizations have existed (ephemerally) on type (3) islands than on (2) or (4) type ones, due to ease of access."

OP wrote:

I believe that the argument for why you are probably on a type (3) island is analogous to that of why the great filter probably lies in our future because both come about from updating “on your own existence by weighting possible worlds as more likely the more observers they contain.”

Katja wrote:

For instance if you were born of an experiment where the flip of a fair coin determined whether one (tails) or two (heads) people were created, and all you know is that and that you exist, SIA says heads was twice as likely as tails.

Katja's coin example does use this reasoning. The islands example doesn't have to.
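For concreteness, here is a quick Monte Carlo sketch (my own, not Katja's) of the coin example. It pools every person created across many runs and treats "you" as a uniform draw from that pool, which is exactly the sampling assumption SIA makes (SSA would sample within a single world instead):

```python
import random

# Each run of the experiment: a fair coin creates two people on heads,
# one person on tails. Pool everyone who was ever created.
heads_people, tails_people = 0, 0
for _ in range(100_000):
    if random.random() < 0.5:
        heads_people += 2   # heads: two people created
    else:
        tails_people += 1   # tails: one person created

# Conditional on being one of the created people, P(heads) converges to 2/3:
# heads is twice as likely as tails, as SIA claims.
print(heads_people / (heads_people + tails_people))
```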

Take

  1. Islands that are easy to reach and are not prone to disaster.

  2. Islands that are hard to reach and are not prone to disaster.

  3. Islands that are easy to reach and are prone to disaster.

  4. Islands that are hard to reach and are prone to disaster.

Assign prior odds of 1/3 to reaching each of the two "easy to reach" cases, and 1/6 to reaching the two "hard to reach" cases. Note I'm putting "reaching an island" in my priors, and observations about how many other people are on that island into my posteriors. You should get the same answer if you assign equal priors and mash everything into your posteriors.

Suppose "prone to disaster" means "probability 1/2 that there was a disaster on this island just before we arrived", and "not prone to disaster" means "there was never a disaster on this island".

Suppose "easy to reach" means "0 to 7 other tribes have reached this island, with each number having equal probability." Suppose "hard to reach" means "0 to 1 other tribes have reached this island, with each number having equal probability."

Assume only disasters eliminate tribes. Observation B is: No other tribes on island.

Let A denote one of the 4 possible islands. P(A|B) = P(A,B) / P(B) = P(B|A)P(A)/P(B)

We'll compute just P(B|A)P(A) for each A, as P(B) is the same for all of them.

P(1|B) ~ P(B|1)P(1) = 1/8 x 1/3 = 1/24

P(2|B) ~ P(B|2)P(2) = 1/2 x 1/6 = 1/12

P(3|B) ~ P(B|3)P(3) = [1/2 + 1/2 x 1/8] x 1/3 = 9/48 = 3/16

P(4|B) ~ P(B|4)P(4) = [1/2 x 1 + 1/2 x 1/2] x 1/6 = 3/4 x 1/6 = 1/8

This shows that island type 3 is the most likely; yet we never assumed that worlds with more people are more likely.
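Here is the same calculation transcribed into code (my own sketch, using exactly the priors and likelihoods above), for anyone who wants to check the arithmetic:

```python
from fractions import Fraction as F

# Priors on which type of island you reached.
prior = {1: F(1, 3), 2: F(1, 6), 3: F(1, 3), 4: F(1, 6)}

# P(B|A): probability of observing no other tribes on the island.
likelihood = {
    1: F(1, 8),                      # easy, safe: 0 of 0-7 earlier tribes came
    2: F(1, 2),                      # hard, safe: 0 of 0-1 earlier tribes came
    3: F(1, 2) + F(1, 2) * F(1, 8),  # easy, doomed: disaster, or nobody came
    4: F(1, 2) + F(1, 2) * F(1, 2),  # hard, doomed: disaster, or nobody came
}

joint = {a: likelihood[a] * prior[a] for a in prior}
print({a: str(p) for a, p in joint.items()})
# {1: '1/24', 2: '1/12', 3: '3/16', 4: '1/8'} -- type 3 is the most likely

total = sum(joint.values())
print({a: str(p / total) for a, p in joint.items()})
# Normalized posteriors: {1: '2/21', 2: '4/21', 3: '3/7', 4: '2/7'}
```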

There seems to be a lot of buzz about Katja_Grace's most recent input on the Doomsday problem. Background: see my points in this discussion a while back about the problem of counting observers, which applies to her filling of the boxes.

Regarding this post and Katja Grace's argument, I think James Miller's point could generalize even further, showing clearly the reductio:

"Out of all attempts at significant technological gain in a civilization, few will succeed. We're a civilization. Therefore, ours probably won't advance."

As far as I can tell, it's analogous in all relevant respects, but feel free to prove me wrong.

"Out of all attempts at significant technological gain in a civilization, few will succeed. We're a civilization. Therefore, ours probably won't advance."

You use anthropic reasoning to get the claim "few will succeed", which is a conclusion, not a premise.

"Few will succeed" is an observation, not a premise, though perhaps I should have said, "Few have been observed to succeed".

What is unjustified is the conclusion that ours will not have some success that makes up for the all the other failures, which is why I think Katja_Grace's reasoning (and its reductios) fails.

It's not an observation. It's an inference for which you need the anthropic principle. "Few have succeeded so far" is an observation. You'd need to observe the future to observe "Few will succeed".

Doesn't that leave out a very significant term in the equation: the number of attempts at significant technological gain we get?

You mean succeed at? Yes, and that problem applies just the same to Katja_Grace's use of the SIA to predict a future filter.

Perhaps it's because I couldn't find "James Miller," but the tech gain argument seems comparatively underdetermined. I did mean "how many attempts we get," as in "most attempts will fail, but if we are allowed 3^^^3 attempts at a significant technological gain, we will expect to advance." I think you need some sort of prior distribution for the number of attempts to make it analogous to SIA doomsday.

"most attempts will fail, but if we are allowed 3^^^3 attempts at a significant technological gain, we will expect to advance." I think you need some sort of prior distribution for number of attempts to make it analogous to SIA doomsday.

And you need a similar prior distribution for Katja_Grace's SIA argument, which makes it underdetermined as well. (My "no anthropic reasoning" paradigm is really starting to pan out!)

James Miller is the author of this top-level post.

James Miller is the author of this top-level post.

Thanks, that underscores my difficulty finding relevant details.

I think you're saying the relevant detail here is the applicability of anthropic reasoning to the universe we actually live in: Actually using the island argument doesn't help us learn about the real world as much as looking up historic data about islands, and the SIA doomsday argument fails similarly in the face of real-world astronomy. Is this correct?

One argument against the "we're probably doomed" conclusion is that the impact of a roughly speaking "intelligent" system (say, one that can be interpreted as maximizing some utility) might not be recognizable to us. As an example to instantiate this possibility, if it had evolved to the point that its optimization procedure was as precise as, say, the laws of physics, then we might just interpret it as a physical law.

In other words, the whole situation is really evidence that if intelligence evolves highly frequently in space and time, then it probably evolves in a way that renders it eventually undetectable.

Optimally compressed data is essentially indistinguishable from random noise. Could we tell the difference between a universe that has already been converted into computronium and one that's empty? If you had an awful lot of matter and energy and wanted to make the most powerful computing device possible, would it end up looking a lot like a star? Nuclear fusion is a great energy source, and there's lots of thermodynamic information in the atoms and molecules that make up a hot plasma.

Optimally compressed data is essentially indistinguishable from random noise. Could we tell the difference between a universe that has already been converted into computronium and one that's empty?

Optimally compressed data also has the highest possible entropy (each bit is maximally informative), so that's how a universe being used as computronium would look. So we're not in such a universe, since:

a) It has too many observable regularities (i.e. we can and do further compress it by identifying laws of physics)
b) black holes have the highest entropy per unit mass, and most of the universe isn't one.
c) ETA: entropy manages to keep increasing, so it can't be at a maximum

Optimally compressed data is essentially indistinguishable from random noise.

YES! C.E. Shannon for the WIN :)

It's a great insight, but not that hard to prove (or at least "get" the reasoning behind it): anything that distinguishes the data from randomness is redundant information, and that redundancy can be used to further compress it.

(Obviously, that doesn't count as a formal proof, &c.)
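The claim is also easy to check empirically. A throwaway sketch (mine, not from the thread, using only the Python standard library): compress some redundant text with zlib and compare the empirical byte entropy before and after:

```python
import math
import zlib
from collections import Counter

def byte_entropy(data: bytes) -> float:
    """Empirical Shannon entropy in bits per byte (8.0 = uniform noise)."""
    n = len(data)
    return -sum(c / n * math.log2(c / n) for c in Counter(data).values())

text = b"the quick brown fox jumps over the lazy dog " * 2000
compressed = zlib.compress(text, 9)

print(byte_entropy(text))        # low (~4 bits/byte): English text is redundant
print(byte_entropy(compressed))  # much higher, close to random-looking bytes
```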



I'm not too up to date on my cosmology, but isn't it still theorized that dark matter accounts for more mass than visible matter? I would expect advanced civilizations to minimize energy wastage through electromagnetic emissions. I'm not sufficiently knowledgeable about physics to know whether computronium could have the theorized characteristics of dark matter, however.

There are lots of solutions to the Fermi Paradox, this being one of them. I'm not sure how we're supposed to judge between the various options given our lack of evidence.

Is this a factual question about our past or an invitation to solve anthropic puzzles by playing Civilization? In a world where all islands are of type 1 you will always anticipate a disaster ahead but never actually see one. Does this tell us anything about anthropic reasoning?

Now that I think about the SIA, it sounds pretty funny. The possible world where trillions of aliens will contact us tomorrow is more populous than the possible world where we are alone in the galaxy, therefore...? For extra fun, note that this reasoning will work just as well tomorrow. Similarly, Katja's interpretation of the SIA seems to imply that each passing day without a disaster brings us closer to total collapse. Funky stuff!

Well, every day that passes does bring us closer to the day when the Sun becomes a red giant and engulfs the Earth...

If, therefore, Grace’s anthropic reasoning is correct then most of the time when a prehistoric group of humans settled a large, uninhabited island, that group went extinct.

This is, strictly speaking, not true. The anthropic reasoning involves some prior probability of extinction, which can be different for island settlers and space settlers.


Anthropic reasoning can't give you the right probability estimate, which depends on the distribution of islands. In a world where all islands are of type 1, the native may anticipate all the dreadful disasters he wants and he will still be wrong.

If such things worked, you'd be able to settle anthropic puzzles with a game of Civilization or something.