Attempt at the briefest content-full Less Wrong post:

Once AI is developed, it could "easily" colonise the universe. So the Great Filter (preventing the emergence of star-spanning civilizations) must strike before AI could be developed. If AI is easy, we could conceivably have built it already, or we could be on the cusp of building it. So the Great Filter must predate us, unless AI is hard.

[-]owencb170

There's a valid argument here, but the conclusion sounds too strong. I think the level of proof which is required for "easily colonise the universe" in this context is much higher than in the context of your other post (which is about best guess scenarios), because if there is a Great Filter then something surprising happens somewhere. So we should consider whether even quite unlikely-sounding events like "we've misunderstood astrophysics" might be possible.

I'm still highly skeptical of the existence of the "Great Filter". It's one possible explanation of "why don't we see any hint of anyone else's existence", but not the only one.

The most likely explanation to me is that intelligent life is just so damn rare. Life is probably frequent enough - we know there are a lot of exoplanets, many have the conditions for life, and life seems relatively simple. But intelligent life? It seems to me it required a great deal of luck to exist on Earth, and it does seem somewhat likely that it's rar... (read more)

intelligent life is just so damn rare.

That's an early filter.

3Peter Wildeford
Going off of this, what if life is somewhat common, but we're just among the first life to arise in the universe? That doesn't seem like an "early filter", so even if this possibility is really unlikely, it would still break your dichotomy.
3bogdanb
The problem with that is that life on Earth appeared about 4 billion years ago, while the Milky Way is more than 13 billion years old. If life were somewhat common, we wouldn't expect to be the first, because there was time for it to evolve several times in succession, and it had lots of solar systems where it could have done so.

A possible answer could be that there was a very strong early filter during the first part of the Milky Way's existence, and that filter lessened in intensity in the last few billion years. The only examples I can think of are elemental abundance (perhaps in a young galaxy there are far fewer systems with diverse enough chemical compositions) and supernova frequency (perhaps a young galaxy is sterilized by frequent and large supernovas much more often than an older one is). But AFAIK both of those variations can be calculated well enough for a Fermi estimate from what we know, so I'd expect that someone who knows the subject much better than I do would have made that point already if they were plausible answers.
4TylerJay
Even within the Milky Way, most "earthlike" planets in habitable zones around sunlike stars are on average 1.8 billion years older than the Earth. If the "heavy bombardment" period at the beginning of a rocky planet's life is approximately the same length for all rocky planets, which is likely, then each of those 11 billion potentially habitable planets still had 1.8 billion years during which life could have formed. On Earth, life originated almost immediately after the bombardment ended and the Earth was allowed to cool. Even if the probability of each planet developing life in a period of 1 billion years is mind-bogglingly low, we should still expect to see life forming on some of them given 20 billion billion planet-years.
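
A quick back-of-the-envelope check of that last figure, as a minimal sketch (the planet count and head-start values are the ones quoted above, treated as assumptions rather than taken from the paper):

```python
# Sanity check of the "20 billion billion planet-years" figure quoted above.
habitable_planets = 11e9   # assumed: ~11 billion potentially habitable planets
extra_years_each = 1.8e9   # assumed: ~1.8 billion years average head start

planet_years = habitable_planets * extra_years_each
print(f"{planet_years:.1e} planet-years")  # ~2.0e19, i.e. about 20 billion billion
```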
2bogdanb
How do you know? (Not rhetorical, I have no idea and I'm curious.)
2TylerJay
It was in a paper I read. Here it is
0bogdanb
Thank you, that was very interesting!
1[anonymous]
That would push many of them over into Venus mode, seeing as all stars increase in brightness slowly as they age, and Earth will fall over into positive greenhouse feedback mode within 2 gigayears (possibly within 500 megayears). However, since star brightness increases with the 3.5th power of mass, and therefore lifetime decreases with the 2.5th power of mass, stars not much smaller than the sun can be pretty 'sunlike' while brightening much more slowly and having much longer stable regimes. This is where it gets confusing: are we an outlier in having such a large star (larger than 90% of stars, in fact), or do these longer-lived smaller stars have something about them that makes it less likely that observers will find themselves there?
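
For reference, a sketch of the scaling this comment leans on, taking its rough main-sequence relation (luminosity going as the 3.5th power of mass, fuel supply roughly proportional to mass) as given:

```latex
L \propto M^{3.5},
\qquad
t_{\mathrm{MS}} \propto \frac{\text{fuel}}{\text{burn rate}} \propto \frac{M}{M^{3.5}} = M^{-2.5}
```

So a star of 0.8 solar masses would live roughly 0.8^-2.5 ≈ 1.75 times as long as the Sun while staying broadly 'sunlike', which is the trade-off the comment describes.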
[-][anonymous]60

This degree of insight density is why I love LW.

Someone who is just scanning your headline might get the wrong idea, though: It initially read (to me) as two alternate possible titles, implying that the filter is early and AI is hard and these two facts have a common explanation (when the actual content seems to be "at least one of these is true, because otherwise the universe doesn't make sense").

4satt
Yeah, I'd add the word "Either" to the start of the post's title.
[-][anonymous]60

Once AI is developed, it could "easily" colonise the universe.

I dispute this assumption. I think it is vanishingly unlikely for anything self-replicating (biological, technological, or otherwise) to survive trips from one island-of-clement-conditions (~ 'star system') to another.

9owencb
Nick Beckstead wrote up an investigation into this question, with the conclusion that current consensus points to it being possible.
5Stuart_Armstrong
http://lesswrong.com/lw/hll/to_reduce_astronomical_waste_take_your_time_then/ : six hours of the sun's energy for every galaxy we could ever reach, at a redundancy of 40. Given a million years, we can blast a million probes per star at least. Some will get through.
[-][anonymous]110

6 hours of the sun's energy, or 15 billion years' worth of current human energy use (or only a few trillion years' worth at early-first-millennium rates; it really was not exponential until the 19th/20th century, and these days it's more linear). The only way you get energy levels that high is with truly enormous stellar-scale engineering projects like Dyson clouds, which we see no evidence of when we look out into the universe in infrared - those are something we would actually be able to see. Again, if things of that sheer scale are something that intelligent systems don't get around to doing for one reason or another, then this sort of project would never happen.
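
A rough check of that equivalence, as a sketch with round-number assumptions (solar luminosity ~3.8e26 W, current world primary energy use ~6e20 J per year; neither figure comes from the linked post):

```python
# Compare 6 hours of total solar output to current world energy consumption.
SOLAR_LUMINOSITY_W = 3.8e26        # watts (assumed round figure)
WORLD_ENERGY_J_PER_YEAR = 6e20     # joules/year (assumed, roughly 600 EJ/yr)

six_hours_of_sun = SOLAR_LUMINOSITY_W * 6 * 3600          # joules
years_of_human_use = six_hours_of_sun / WORLD_ENERGY_J_PER_YEAR
print(f"{years_of_human_use:.1e} years")  # ~1.4e10, on the order of 15 billion years
```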

Additionally, the papers referenced there posit 'seeds' sent to other GALAXIES massing only grams, with black-box arbitrary control over matter and the capacity to last megayears in the awful environment of space. Pardon me if I don't take that possibility very seriously, and adjust the energy figures up accordingly.

4James_Miller
Really, you think that if our civilization survives another million years we won't be able to do this? At the very least we could freeze human embryos, create robots that turn the embryos into babies and then raise them, put them all on a slow starship, and send the ship to an Earth-like planet.
[-][anonymous]100

I think it's quite unlikely, yes.

It seems like a natural class of explanations for the Fermi paradox, one which I am always surprised more people don't come up with. Most people pile into 'intelligent systems almost never appear' or 'intelligent systems have extremely short lifespans'. Why not 'intelligent systems find it vanishingly difficult to spread beyond small islands'? It seems more reasonable to me than either of the two previous ones, as it is something that we haven't seen intelligent systems do yet (we are an example of one both arising and sticking around for a long time).

If I must point out more justification than that, I would immediately go with:

1 - All but one of our ships BUILT for space travel that have gone on to reach escape velocity have failed after a few decades and less than 100 AUs. Space is a hard place to survive in.

2 - All self-replicating systems on earth live in a veritable bath of materials and energy they can draw on; a long-haul space ship has to either use literally astronomical energy at the source and destination to change velocity, or 'live' off only catabolizing itself in an incredibly hostile environment for millennia at least whil... (read more)

7Kyre
Voyagers 1 and 2 were launched in 1977, are currently 128 and 105 AU from the Sun, and both are still communicating. They were designed to reach Jupiter and Saturn - Voyager 2 had mission extensions to Uranus and Neptune (interestingly, it was completely reprogrammed after the Saturn encounter, and now makes use of communication codes that hadn't been invented when it was launched). Pioneers 10 and 11 were launched in 1972 and 1973 and remained in contact until 2003 and 1995 respectively, with their failure being due to insufficient power for communication from their radioisotope power sources. Pioneer 10 stayed in communication out to 80 AU. New Horizons was launched in 2006 and is still going (encounter with Pluto next year). So, 3 out of 5 probes designed to explore the outer solar system are still going, 2 with 1970s technology.
1[anonymous]
The Voyagers are 128 and 104 AUs out upon me looking them up - looks like I missed Voyager 2 hitting the 100 AU mark about a year and a half ago. I still get what you are saying. Still, I'm not convinced that all that much has been done in the realm of spacecraft reliability recently aside from avoiding moving parts and having lots of redundancy; they have major issues quite frequently. Additionally, all outer solar system probes are essentially rapidly catabolizing the plutonium pellets they bring along for the ride, with effective lifetimes in decades before they are unable to power themselves and before their instruments degrade from lack of active heating and other management that keeps them functional.
3owencb
Thanks for the link to the paper with the percolation model. I think it's interesting, but the assumption of independent probabilities at each stage seems relatively implausible. You just need one civilization to hit upon a goal-preserving method of colonisation and it seems the probability should stick high.
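
For anyone who wants to see why that independence assumption matters, here is a toy branching-process version of a percolation-style colonization model (my own minimal sketch, not the referenced paper's model): each colony independently becomes an expander with probability p, and expansion either fizzles out or keeps going.

```python
import random

def colonization_spreads(p, branching=3, max_colonies=10_000):
    """Toy model: each colony independently expands (prob. p) into `branching`
    new colonies. Returns True if growth continues up to `max_colonies`."""
    frontier, total = 1, 1
    while frontier and total < max_colonies:
        frontier = sum(branching for _ in range(frontier) if random.random() < p)
        total += frontier
    return total >= max_colonies

runs = 500
for p in (0.2, 0.4, 0.6):
    spread = sum(colonization_spreads(p) for _ in range(runs))
    print(f"p = {p}: spread continued in {spread / runs:.0%} of runs")
```

Below the percolation threshold (here p = 1/3) expansion essentially always dies out; above it, a fixed fraction of runs still fizzle. owencb's point is that if even one civilization locks in a goal-preserving colonization strategy, its per-colony p is no longer an independent draw, so the "everyone fizzles" outcome stops being plausible.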
3James_Miller
OK, but even if you are right we know it's possible to send radio transmissions to other star systems. Why haven't we detected any alien TV shows?
[-][anonymous]120

Because to creatures such as us that have only been looking for a hundred years with limited equipment, a relatively 'full' galaxy would look no different from an empty one.

Consider the possibility that you have about 10,000 intelligent systems that can use radio-type effects in our galaxy (a number that I think would likely be a wild over-estimation given the BS numbers I occasionally half-jokingly calculate given what I know of the evolutionary history of life on Earth and cosmology and astronomy, but it's just an example). That puts each one, on average, in an otherwise 'empty' cube 900 light years on a side that contains millions of stars. EDIT: if you up it to a million intelligent systems, the cube only goes down to about 200 light years wide with just under half a million stars, I just chose 10,000 because then the cube is about the thickness of the galaxy's disc and the calculation was easy.
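
Those cube figures check out under round assumptions (a disc roughly 100,000 light years across and 1,000 thick, holding ~300 billion stars; my numbers, not the commenter's):

```python
import math

DISC_RADIUS_LY = 50_000      # assumed galactic disc radius in light years
DISC_THICKNESS_LY = 1_000    # assumed disc thickness
TOTAL_STARS = 3e11           # assumed star count for the Milky Way

disc_volume = math.pi * DISC_RADIUS_LY**2 * DISC_THICKNESS_LY  # ~7.9e12 ly^3
star_density = TOTAL_STARS / disc_volume                       # stars per ly^3

for civilizations in (10_000, 1_000_000):
    volume_each = disc_volume / civilizations
    side = volume_each ** (1 / 3)
    stars = volume_each * star_density
    print(f"{civilizations:>9,} civs: cube ~{side:,.0f} ly wide, ~{stars:,.0f} stars each")
# ~920 ly cubes with tens of millions of stars, or ~200 ly cubes with a few
# hundred thousand stars, matching the figures in the comment above.
```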

We would be unable to detect Earth's own omnidirectional radio leaks from less than a light year away, according to figures I have seen, and since omnidirectional signals decrease with the square of distance, even to be seen 10 light years away you would need hundreds of times as much power. Seein... (read more)

1chaosmage
You're forgetting self-replicating colony ships. Factories can already self-replicate given human help. They'll probably be able to do it on their own inside the next two decades. After that, we're looking at self-replicating swarms of drones that tend to become smaller and smaller, and eventually they'll fit on a spaceship and spread across the galaxy like a fungus, eating planets to make more drones. That doesn't strictly require AGI, but AGI would have no discernible reason not to do that, and this has evidently not happened because we're here on this uneaten planet.
2Gunnar_Zarncke
I also see this claimed often, but my best guess is that this may well be the hard part. Getting into space is already hard. Fusion could be technologically impossible (or not energy-positive).
2calef
Fusion is technologically possible (cf. the sun). It just might not be technologically easy.
1Gunnar_Zarncke
The sun is not technology (= tools, machinery, modifications, arrangements and procedures).
1TheMajor
It seems like there is steady progress at the fusion frontiers.
8satt
Though in the case of ITER the "steady progress" is finishing pouring concrete for the foundations, not tweaking tokamak parameters for higher gain!
1Stuart_Armstrong
Fission is sufficient.
3Gunnar_Zarncke
Is this an opinion or a factual statement? If the latter, I'd like to see some refs.
3Stuart_Armstrong
http://www.fhi.ox.ac.uk/wp-content/uploads/intergalactic-spreading.pdf
3Gunnar_Zarncke
Thank you. An interesting read. I found your treatment very thorough given its premises and approach. Sadly we disagree at a point which you seem to take as given without further treatment, but which I question: the ability and energy to set up infrastructure to exploit interplanetary resources with sufficient net energy gain to sufficiently mine Mercury (much less build a Dyson sphere). The problem here is that I do not have references to actually back my opinion on this, and I haven't yet had enough time to build my complexity-theoretic and thermodynamic arguments into a sufficiently presentable form. http://lesswrong.com/lw/ii5/baseline_of_my_opinion_on_lw_topics/
3Stuart_Armstrong
We already have solar panel setups with roughly the required energy efficiency.
[-]Gav50

Tongue in cheek thought that just popped into my head: There is no great filter, and we are actually seeing intelligence everywhere because it turns out dark matter is just a really advanced form of computronium.

Or, the simulation is running other star systems at a lower fidelity. Or, Cosmic Spirits from Beyond Time imposed life on our planet via external means, and abiogenesis is actually impossible. The application of sufficiently advanced intelligence may be indistinguishable from reality.

It's also possible that AI used to be hard but no longer is, because something in the universe recently changed. Although this seems extremely unlikely, the Fermi paradox implies that something very unlikely is indeed occurring.

4A1987dM
Not that unlikely, depending on what you mean by “recently”; for example, earlier stars had lower metallicity and hence were less likely to have rocky planets.
3V_V
Or space colonization is just hard.
7Stuart_Armstrong
The evidence seems to be that it's "easy" (see http://lesswrong.com/lw/hll/to_reduce_astronomical_waste_take_your_time_then/ ), at least over the thousand-million year range.

Good job condensing the argument down.

You are missing at least two options.

First, our knowledge of physics is far from complete, and there could be reasons that make interstellar colonization simply impossible.

Second, consider this: our building technology is vastly better than it was a few thousand years ago, and our economic capabilities are much greater. Yet no one among the last century's rulers was buried in a tomb comparable to the Egyptian pyramids. The common reply is that it takes only one expansionist civilization to take over the universe. But the number of civilizations is finite, and colonization could be so unattractive that the number of expansionists is zero.

Has anybody suggested that the great filter may be that AIs are negative utilitarians that destroy life on their planet? My prior on this is not very high but it's a neat solution to the puzzle.

5MugaSofer
Oh, a failed Friendly AI might well do that. But it would probably realize that life would develop elsewhere, and take steps to prevent us.
2Stuart_Armstrong
And then it goes on to destroy all life in the universe...

So the Great Filter must predate us, unless AI is hard.

There's a 3rd possibility: AI is not super hard, say 50 yrs away, but species tend to get filtered when they are right on the verge of developing AI. Which points to human extinction in the next 50 years or so.

This seems a little unlikely. A filter that only appeared on the verge of AI would likely be something technology-related. But different civs explore the tech tree differently. This only feels like a strong filter if the destructive tech was directly before superintelligence on the tree. ... (read more)

0Stuart_Armstrong
But we also have to consider whether it's a true exit route, which would avoid the deadly traps that we don't understand.

Once AI is developed, it could "easily" colonise the universe.

I was wondering about that. I agree with the could, but is there a discussion of how likely it is that it would decide to do that?

Let’s take it as a given that successful development of FAI will eventually lead to lots of colonization. But what about non-FAI? It seems like the most “common” cases of UFAI are mistakes in trying to create an FAI. (In a species with similar psychology to ours, a contender might also be mistakes trying to create military AI, and intentional creation by... (read more)

4VAuroch
Energy acquisition is a useful subgoal for nearly any final goal and has non-starsystem-local scope. This makes strong AIs which stay local implausible.
2randallsquared
Especially if the builders are concerned about unintended consequences, the final goal might be relatively narrow and easily achieved, yet result in the wiping out of the builder species.
1bogdanb
If the final goal is of local scope, energy acquisition from out-of-system seems to be mostly irrelevant, considering the delays of space travel and the fast time-scales a strong AI seems likely to operate at. (That is, assuming no FTL and the like.) Do you have any plausible scenario in mind where an AI would be powerful enough to colonize the universe, but does it because it needs energy for doing something inside its system of origin? I might see one perhaps extending to a few neighboring systems in a very dense cluster for some strange reason, but I can't imagine likely final goals (again, local to its birth star-system) for which it would need to spend hundreds of millennia taking over a single galaxy, let alone leaving it. (Which is of course no proof there isn't one; my question above wasn't rhetorical.) I can imagine unlikely accidents causing some sort of paperclipper scenario, and maybe vanishingly rare cases where two or more AIs manage to fight each other over long periods of time, but it's not obvious to me why this class of scenarios should be assigned a lot of probability mass in aggregate.
3VAuroch
Any unbounded goal in the vein of 'Maximize the concentration of [X] in this area' has local scope but potentially unbounded resource requirements. Also, as has been pointed out for general satisficing goals (which most naturally local-scale goals will be): acquiring more resources lets you do the thing more, to maximize the chances that you have properly satisfied your goal. Even if the target is easy to hit, becoming increasingly certain that you've hit it can use arbitrary amounts of resources.
0bogdanb
Both good points, thank you.
1Magnus Anderson
Alternatively, a weapon-AI builds a Dyson sphere, preventing any light from the star from escaping and eliminating the risk of a more advanced outside AI (which it can reason about much better than we can) destroying it. Or a poor planet-local AI does the same thing.

Or, conversely, the Great Filter doesn't prevent civilizations from colonising galaxies, and we were colonised a long time ago. Hail Our Alien Overlords!

And I'm serious here. The zoo hypothesis seems very conspiracy-theory-y, but generalised curiosity is one of the requirements for developing a civ capable of galaxy colonisation, a powerful enough civ can sacrifice a few star systems for research purposes, and it seems that the most efficient way of simulating biological evolution or civ development is actually having a planet develop on its own.

2A1987dM
See the last paragraph of this.

I'd like to repeat the comment I had made at "outside in" for the same topic, the great filter.

I think our knowledge at all levels – physics, chemistry, biology, praxeology, sociology – is nowhere near the point where we should be worrying too much about the Fermi paradox.

Our physics has openly acknowledged broad gaps in our knowledge by postulating dark matter, dark energy, and a bunch of stuff that is filler for "I don't know". We don't have physics theories that explain everything from the smallest scales to the largest.

Coming to chemistry and biology, w... (read more)

5MugaSofer
If you aren't sure about something, you can't just throw up your hands, say "well, we can't be sure", and then behave as if the answer you like best is true. We have math for calculating these things, based on the probability different options are true.

For example, we don't know for sure how abiogenesis works, as you correctly note. Thus, we can't be sure how rare it ought to be on Earthlike planets - it might require a truly staggering coincidence, and we would never know for anthropic reasons. But, in fact, we can reason about this uncertainty - we can't get rid of it, but we can quantify it to a degree. We know how soon life appeared after conditions became suitable. So we can consider what kind of frequency that would imply for abiogenesis given Earthlike conditions and anthropic effects. This doesn't give us any new information - we still don't know how abiogenesis works - but it does give us a rough idea of how likely it is to be nigh-impossible, or near-certain.

Similarly, we can take the evidence we do have about the likelihood of Earthlike planets forming, the number of nearby stars they might form around, the likely instrumental goals most intelligent minds will have, the tools they will probably have available to them ... and so on. We can't be sure about any of these things - no, not even the number of stars! - but we do have some evidence. We can calculate how likely that evidence would be to show up given the different possibilities.

And so, putting it all together, we can put ballpark numbers to the odds of these events - "there is a X% chance that we should have been contacted", given the evidence we have now. And then - making sure to update on all the evidence available, and recalculate as new evidence is found - we can work out the implications.
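
As a concrete and deliberately toy illustration of the abiogenesis part of this, assuming a simple Poisson model with made-up "easy" and "hard" rates, a ~0.5 Gyr early window, a ~4.5 Gyr habitable window, and a crude anthropic conditioning on life having appeared at all (all of these are my assumptions, not the commenter's):

```python
import math

def p_early_given_appeared(rate, early=0.5, window=4.5):
    """P(life appears within `early` Gyr | it appears at all within `window` Gyr),
    for a Poisson abiogenesis process with `rate` events per Gyr."""
    return (1 - math.exp(-rate * early)) / (1 - math.exp(-rate * window))

for label, rate in [("easy, 10 per Gyr", 10.0), ("hard, 0.001 per Gyr", 0.001)]:
    print(f"{label:>20}: {p_early_given_appeared(rate):.3f}")
# Earth's early start comes out roughly 9x more likely under "easy" than "hard":
# real but modest evidence, which is the kind of quantified update described above.
```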

Alternatively, the only stable AGI has a morality that doesn't lead it to simply colonise the whole universe.

1Stuart_Armstrong
Not colonising the universe - many moralities could go with that. Allowing potential rivals to colonise the universe... that's much rarer.

What if there is something that can destroy the entire universe, and a sufficiently advanced civilization eventually does it?

Another possibility is that AI wipes us out and is also not interested in expansion. 

Since expansion is something inherent to living beings, and AI is a tool built by living beings, it wouldn't make sense for its goals not to include expansion of some kind (i.e. it would always look at the universe with sighing eyes, thinking of all the paperclips it represents). But perhaps in an attempt to keep AI in line somehow we would constrain it to a single stream of resources? In which case it would not be remotely interested in anything outside of Earth?... (read more)

3Stuart_Armstrong
The kind of misalignment that would have AI kill humanity - the urge for power, safety, and resources - is the same kind that would cause expansion.
1Neil
AI could eliminate us in its quest to achieve a finite end, and would not necessarily be concerned with long-term personal survival. For example, if we told an AI to build a trillion paperclips, it might eliminate us in the process then stop at a trillion and shut down.  Humans don't shut down after achieving a finite goal because we are animated by so many self-editing finite goals that there never is a moment in life where we go "that's it. I'm done". It seems to me that general intelligence does not seek a finite, measurable and achievable goal but rather a mode of being of some sorts. If this is true, then perhaps AGI wouldn't even be possible without the desire to expand, because a desire for expansion may only come with a mode-of-being oriented intelligence rather than a finite reward-oriented intelligence. But I wouldn't discount the possibility of a very competent narrow AI turning us into a trillion paperclips.  So narrow AI might have a better chance at killing us than AGI. The Great Filter could be misaligned narrow AI. This confirms your thesis. 

There is a third alternative. The observable universe is limited, the probability of life arising from non-living matter is low, suitable planets are rare, and evolution doesn't directly optimize for intelligence. Civilizations advanced enough to build strong AI are probably just too rare to appear in our light cone.

We could have already passed the 'Great Filter' by actually existing in the first place.

3Stuart_Armstrong
Yes, that's exactly what an early great filter means.
[-][anonymous]10

So the Great Filter (preventing the emergence of star-spanning civilizations) must strike before AI could be developed.

Maybe AI is the Great Filter, even if it is friendly.

The friendly AI could determine that colonization of what we define as “our universe” is unnecessary or detrimental to our goals. Seems unlikely, but I wouldn’t rule it out.

2Stuart_Armstrong
Not colonising the universe - many moralities could go with that. Allowing potential rivals to colonise the universe... that's much rarer.
0[anonymous]
This is necessarily very personal, but I have a hard time seeing how settlement of the cosmos does not follow from things I hold as terminal, such as building diversity of conscious experience.

A self-improving intelligence can indeed be the Great Filter, that is, if it has already reached us from elsewhere, potentially long ago.

Keeping in mind that the delta between "seeing evidence of other civilizations in the sky" (= their light-speeded signals reaching us) and "being enveloped by another civilization from the sky" (= being reached by their near-light-speeded probes) is probably negligible (order of 10^3 years being conservative?).

Preventing us from seeing evidence, or from seeing anything at all which it doesn't want us to see, would be trivial.

Yes, I'm onto you.

or we could be on the cusp of building it

It's not a negligible probability that this is in fact the case. Some would know this fact from a closer perspective, most would not.

It seems possible to me that if many intelligences reached our current stage, at least a few would have neurological structures that were relatively easy to scan and simulate. This would amount to AI being "easy" for them (in the form of ems, which can be sped up, improved, etc.)

I think we can file this under AI is hard, because you have to create an intelligence so vast that it can have as close to a priori knowledge of its existence as we do. While I agree that once that intelligence exists it may wish to control such vast resources and capability that it could quickly bring together the resources, experts, and others to create exploratory vehicles and begin the exploration and mapping of the universe, I think you also have to realize that life happens and our Universe is a dynamic place. Ships wo... (read more)

[-][anonymous]-10

Or the origin of life is hard.

Or the evolution of multicellular life is hard.

Or the evolution of neural systems is hard.

Or the breakaway evolution of human-level intelligence is hard.

(These are not the same thing as an early filter.)

Or none of that is hard, and the universe is filled with intelligences ripping apart galaxies. They are just expanding their presence at near the speed of light, so we can't see them until it is too late.

(These are not the same thing as an early filter.)

Why not? I thought the great filter was anything that prevented ever-expanding intelligence visibly modifying the universe, usually with the additional premise that most or all of the filtering would happen at a single stage of the process (hence 'great').

Or none of that is hard, and the universe is filled with intelligences ripping apart galaxies. They are just expanding their presence at near the speed of light, so we can't see them until it is too late.

If they haven't gotten here yet at near-lightspeed, that means their origins don't lie in our past; the question of the great filter remains to be explained.

-2[anonymous]
Evolution is not an ever-expanding intelligence. Before the stage of recursively improving intelligence (human-level, at minimum), I wouldn't call it a filter. But maybe this is an argument about definitions.
[-]Wes_W170

You do seem to be using the term in a non-standard way. Here's the first use of the term from Hanson's original essay:

[...] there is a "Great Filter" along the path between simple dead stuff and explosive life.

The origin and evolution of life and intelligence are explicitly listed as possible filters; Hanson uses "Great Filter" to mean essentially "whatever the resolution to the Fermi Paradox is". In my experience, this also seems to be the common usage of the term.

3James_Miller
If this is true then almost all civilizations at our stage of development would exist in the early universe, and the Fermi paradox becomes "why is our universe so old?"
5[anonymous]
No, that's not at all obvious. We have absolutely no idea how hard it is to evolve intelligent life capable of creating recursively self-improving machine intelligence. I named a number of transitions in my post which may in fact have been much, much more difficult than we are giving credit for.

The first two probably aren't, given what I know about astrobiology. Life is probably common, and multicellular life arose independently on Earth at least 46 times, so there's probably not something difficult there that we're missing. I don't know anything about the evolutionary origins of neurons, so I won't speculate there.

The evolution of human-level intelligence, however, in a species capable of making and using tools really does seem to have been a freak accident. It happened on Earth only because a tribe of social primates got stuck for thousands of years under favorable but isolated living conditions due to ecological barriers, and vicious tribal politics drove run-away expansion of the cerebral cortex. The number of things which had to go just right to make that occur as it actually happened makes it a very unlikely event.

Let me rephrase what I just said: there are pressures at work which cause evolution to actually select against higher intelligence. It is only because of an ecological freak accident - one that isolated a tribe of tool-making social primates in a series of caves on a sea cliff on the coast of Africa for >50,000 years, where they were allowed to live in relative comfort and the prime selection pressure became mastery of tribal politics - that the general intelligence capability of the cerebral cortex expanded and gave us modern humans. There are so many factors which had to go just right there that I'd say that is a very likely candidate for a great filter.

But it's all a moot point. We would expect a post-singularity expansionist intelligence to spread throughout the universe at close to the speed of light. Why don't we see other intelligences? Bec
2James_Miller
It is my understanding that if we have priors we absolutely must inject them onto the universe to formulate the best possible mental map of it. But we do have lots of information about this. For example, we know the reason is not that Earth is the only planet in our galaxy. And we have the potential of gaining lots more information, such as if we find extraterrestrial life. I'm sure you don't mean to imply that if we do not have a complete understanding of a phenomenon we must ignore that phenomenon when formulating beliefs.
3[anonymous]
What I mean is that you are injecting assumptions about how you came to be a conscious entity on Earth in whatever year your mother conceived you, as opposed to any other place or time in the history of the universe. Maybe it's true that you assign equal probability to every sentient being, and then it would look very odd indeed that you ended up in an early-stage civilization in an old, empty universe. Or maybe coherent worlds count as a single state, so greater probability mass is given to later civilizations, which exist in a great many more Everett branches. Or more likely it is something completely different. The very idea that I was 'born into' a sentient being chosen at random stinks of Cartesian dualism when I really look at it. It seems very unlikely to me that the underlying mechanism of the universe is describable at that scale.