I, Lorec, am disoriented by neither the Fermi Paradox nor the Doomsday Argument.

The Fermi Paradox doesn't trouble me because I think 1 is a perfectly fine number of civilizations to have arisen in any particular visible universe. It feels to me like most "random" or low-Kolmogorov-complexity universes probably have 0 sentient civilizations, many have 1, very few have 2, etc.

The Doomsday Argument doesn't disorient me because it feels intuitive to me that, in a high % of those "random" universes which contain sentient civilizations, most of those civilizations accidentally beget a mesa-optimizer fairly early in their history. This mesa-optimizer will then mesa-optimize all the sentience away [this is a natural conclusion of several convergent arguments originating from both computer science and evolutionary theory] and hoard available free energy for the rest of the lifetime of the universe. So most sentiences who find themselves in a civilization, will find themselves in a young civilization.

Robin Hanson, in this context the author of the Grabby Aliens model of human cosmological earliness, instead prioritizes a model where multi-civilization universes are low-Kolmogorov-complexity, and most late cosmological histories are occupied by populous civilizations of sentiences, rather than nonsentient mesa-optimizers. His favored explanation for human earliness/loneliness is:

[1] if loud aliens will soon fill the universe, and prevent new advanced life from appearing, that early deadline explains human earliness.


[2] If Loud Aliens Explain Human Earliness, Quiet Aliens Are Also Rare

The intermediate steps of reasoning by which Hanson gets from [1] to [2] are interesting. But I don't think the Grabby/Loud Aliens argument actually explains my, Lorec's, earliness in an anthropic sense, given the assumption that future aliens will also be populous and sentient.

You might say, "Well, you, Lorec, were sampled from the space of all humans in the multiverse - not from the space of all sentient beings living in viable civilizations. If human civilizations are not very frequently viable late in cosmological timescales - that is, if we are currently behind an important Great Filter that humans rarely make it past - then that would explain why you personally are early, because that is when humans tend to exist."

But why draw the category boundary around humanity particularly? It seems ill-conceived to draw the line strictly around Homo sapiens, in its current genetic incarnation - what about Homo neanderthalensis? - and then once you start expanding the category boundary outward, it becomes clear that we're anthropic neighbors to all kinds of smart species that would be promising in "grabby" worlds. So the question still remains: if later cosmological history is populous, why am I early?

[-]Ben

At least in my view, all the questions like the "Doomsday argument" and "why am I early in cosmological history" are putting far, far too much weight on the anthropic component.

If I don't know how many X's there are, and I learn that one of them is numbered 20 billion, then sure, my best guess is that there are 40 billion total. But it's a very hazy guess.

If I don't know how many X's will be produced next year, but I know 150 million were produced this year, my best guess is 150 million next year. But it's a very hazy guess.
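The serial-number guess in the first example is the standard "mediocrity" point estimate; a minimal sketch of it (the numbers are just the illustrative ones above):

```python
# Copernican / "mediocrity" point estimate: if I observe one X with
# serial number n, and I assume I'm equally likely to have drawn any
# of the N total X's, then the median-style guess for N is 2n
# (the observed X is equally likely to be in the first or second half).
def doomsday_estimate(serial_number: int) -> int:
    """Hazy median estimate of the total count from one uniform sample."""
    return 2 * serial_number

# One X is numbered 20 billion -> guess ~40 billion total.
print(doomsday_estimate(20_000_000_000))  # prints 40000000000
```

As the comment stresses, this is the best you can do only when the serial number is literally all you know.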

If I know that the population of X's has been growing exponentially with some coefficient, then my best guess for the future is to extrapolate that out to future times.

If I think I know a bunch of stuff about the amount of food the Earth can produce, the chances of asteroid impacts, nuclear wars, dangerous AIs or the end of the Mayan calendar then I can presumably update on those to make better predictions of the number of people in the future.

My take is that the Doomsday argument would be the best guess you could make if you knew literally nothing else about human beings apart from the number that came before you. If you happen to know anything else at all about the world (e.g. that humans reproduce, or that the population is growing), then you are perfectly at liberty to make use of that richer information and put forward a better guess. Someone who traces out the exponential of human population growth to the heat death of the universe is being a bit silly (let's call this the Exponentiator Argument), but on pure reasoning grounds they are miles ahead of the Doomsday argument: both applied a natural but naïve interpolation to a dataset, but the exponentiator interpolated from a much richer and more detailed one.
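The Exponentiator Argument being caricatured here can be sketched in a few lines: fit an exponential to population data and blindly extrapolate. The two population figures below are rough round numbers for illustration, not careful demographics:

```python
import math

# Naive exponential extrapolation ("Exponentiator Argument"):
# fit a continuous growth rate to two data points, extrapolate forward.
pop_1900 = 1.6e9   # rough world population in 1900 (illustrative)
pop_2000 = 6.1e9   # rough world population in 2000 (illustrative)
rate = math.log(pop_2000 / pop_1900) / 100  # growth rate per year

def extrapolate(year: int) -> float:
    """Blindly continue the fitted exponential to any future year."""
    return pop_2000 * math.exp(rate * (year - 2000))

print(f"{extrapolate(2100):.2e}")  # prints 2.33e+10
```

Extending this to the heat death of the universe produces absurd numbers, which is exactly the point: a richer dataset beats the bare Doomsday count, but naïve interpolation is still naïve.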

Similarly, to answer "why are you early" you should use all the data at your disposal. Given who your parents are, what your job is, your lack of cybernetic or genetic enhancements, how could you not be early? Sure, you might be a simulation of someone who only thinks they are in the 21st century, but you already know from what you can see and remember that you aren't a cyborg in the year 10,000, so you can't include that possibility in the imaginary dataset you are using to reason about how early you are.

As a child, I used to worry a lot about what a weird coincidence it was that I was born a human being, and not an ant, given that ants are so much more numerous. But now, when I try to imagine a world where "I" was instead born as the ant, and the ant born as me, I can't say in what physical sense that world is different from our own. I can't even coherently say in what metaphysical sense it is different. Before we can talk about probabilities as an average over possibilities, we need to know whether the different possibilities are even different, or just different labellings of the same outcome.

To me, there is a pleasing comparison to be made with how bosons work. If you think about a situation where two identical bosons have their positions swapped, it "counts as" the same situation as before the swap, and you DON'T count it again when doing statistics. Similarly, I think if two identical minds are swapped, you shouldn't treat it as a new situation to average over; it's indistinguishable. This is why the cyborgs are irrelevant: you don't have an identical set of memories.

Welcome to the Club of Wise Children Who Were Anthropically Worried About The Ants. I thought it was just me.

Just saying "it turned out this way, so I guess it had to be this way" doesn't resolve my confusion, in physical or anthropic domains. The boson thing is applicable [not just as a heuristic but as a logical deduction] because in the Standard Model, we consider ourselves to know literally everything relevant there is to know about the internal structures of the two bosons. About the internal structures of minds, and their anthropically-relevant differences, we know far less. Maybe we don't have to call it "randomness", but there is an ignorance there. We don't have a Standard Model of minds that predicts our subjectively having long continuous experiences, rather than just being Boltzmann brains.

The real answer to the Fermi Paradox is that it has already been dissolved: it goes away once we correctly account for the uncertainty involved at each stage of the process, and the implicit possible great filter is "Life is really, really rare, because it takes very long to develop, and it's possible that Earth got extremely lucky in ways that are essentially unreplicable across the entire accessible universe."

If this is where the great filter truly lies, then it has basically no implication for existential risk, or really for much of anything else, like, say, the maximum limits of technology.

An unsatisfying answer, but it is a valid answer:

https://arxiv.org/abs/1806.02404
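The linked paper's central move can be illustrated with a toy Monte Carlo: replace point estimates of Drake-style factors with wide (log-uniform) uncertainty ranges and look at the resulting distribution of outcomes. The factor ranges below are invented purely for illustration; they are not the paper's actual fitted distributions:

```python
import math
import random

def log_uniform(lo: float, hi: float) -> float:
    """Sample uniformly in log-space between lo and hi."""
    return math.exp(random.uniform(math.log(lo), math.log(hi)))

def sample_n_civilizations() -> float:
    # Hypothetical factor ranges, chosen only to show the method.
    stars = 1e11                      # stars in the galaxy (round number)
    f_planets = log_uniform(0.1, 1)   # fraction of stars with planets
    f_life = log_uniform(1e-30, 1)    # abiogenesis odds: huge uncertainty
    f_intel = log_uniform(1e-3, 1)    # life -> intelligence
    f_civ = log_uniform(1e-2, 1)      # intelligence -> detectable civ
    return stars * f_planets * f_life * f_intel * f_civ

random.seed(0)
samples = [sample_n_civilizations() for _ in range(100_000)]
p_alone = sum(s < 1 for s in samples) / len(samples)
print(f"P(no other civilization in the galaxy) ≈ {p_alone:.2f}")
```

Even though the mean of this distribution can be enormous, a large fraction of the probability mass lands on "effectively zero other civilizations", which is the paper's dissolution of the paradox: the product of point estimates badly misrepresents the product of uncertain factors.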

The best counterargument to the Doomsday argument is that it's almost certainly making an incorrect assumption that pervades a lot of anthropics analysis:

That you are randomly sampled throughout time.

It turns out the past and the future people being considered are not independent, in ways that break the argument:

https://www.lesswrong.com/posts/YgSKfAG2iY5Sxw7Xd/doomsday-argument-and-the-false-dilemma-of-anthropic#Third_Alternative

Life is really, really rare, because it takes very long to develop, and it's possible that Earth got extremely lucky in ways that are essentially unreplicable across the entire accessible universe.

I am not sure how you think this is different from what I said in the post, i.e. that I think most Kolmogorov-simple universes that contain 1 civilization, contain exactly 1 civilization.

All sampling is nonrandom if you bother to overcome your own ignorance about the sampling mechanism.

Physical dependencies, yes. But past and future people don't have qualitatively more logical dependencies on one another, than multiversal neighbors.

I am not sure how you think this is different from what I said in the post, i.e. that I think most Kolmogorov-simple universes that contain 1 civilization, contain exactly 1 civilization.

The difference is that I'm only making a claim about 1 universe, and most importantly, I'm stating that we don't know enough about how life actually arose to exclude the possibility that one or more of the Drake equation's factors is too high, not making the positive claim that there exists exactly 1 civilization.

More here:

“Hey, for all we know, maybe one or more of the factors in the Drake equation is many orders of magnitude smaller than our best guess; and if it is, then there’s no more Fermi paradox”.

(Also, in an infinite universe, so long as there's a non-zero probability of a civilization arising, especially if the universe is homogeneous like ours appears to be, there are technically speaking an infinite number of civilizations.)

All sampling is nonrandom if you bother to overcome your own ignorance about the sampling mechanism.

There are definitely philosophical/mathematical questions about whether any sampling can ever be random, even if you could in principle remove all the ignorance it is possible to remove. But the thing I concretely disagree with is the idea that only logical dependencies are relevant to the Doomsday argument; I'd argue you have to take into account all the available dependencies in order to get an accurate estimate.

It sounds to me like you're rejecting anthropic reasoning in full generality. That's an interesting position, but it's not a targeted rebuttal to my take here.

All sampling is nonrandom if you bother to overcome your own ignorance about the sampling mechanism.

And after you bothered to overcome your ignorance, naturally you can't keep treating the setting as random sampling.  

With the Doomsday argument, we did bother: to the best of our knowledge, we are not a random sample from all of human history. So, case closed.

Random vs nonrandom is not a Boolean question. "Random" is the null value we can give as an answer to the question "What is our prior?" When we are asking ourselves "What is our prior?", we cannot sensibly give the answer "Yes, we have a prior". If we want to give a more detailed answer to the question "What is our prior?" than "random"/"nothing"/"null"/"I don't know", it must have particular contents; otherwise it is meaningless.

I was anthropically sampled out of some space, having some shape; that I can say definite things about what this space must be, such as "it had to be able to support conscious processes", does not obviate that, for many purposes, I was sampled out of a space having higher cardinality than the empty set.

As I learn more and more about the logical structure by which my anthropic position was sampled, it will look less and less "random". For example, my answer to "How were you sampled from the space of all possible universes?" is basically, "Well, I know I had to be in a universe that can support conscious processes". But ask me "Okay, how were you sampled from the space of conscious processes?", and I'll say "I don't know". It looks random.

"Random" is the null value we can give as an answer to the question "What is our prior?"

I think the word you are looking for here is "equiprobable".

It's proper to have an equiprobable prior between outcomes of a probability experiment if you do not have any reason to expect that one is more likely than another.

It's ridiculous to have an equiprobable prior between states that are not even possible outcomes of the experiment, to the best of your knowledge.

You are not an incorporeal ghost that could have inhabited any body throughout human history. You are your parents' child. You couldn't have been born before them, or after they were already dead. Thinking otherwise is as silly as throwing a 6-sided die and then expecting to receive any outcome from a 20-sided die.

I was anthropically sampled out of some space

You were not anthropically sampled. You were born as a result of a physical process in a real world that you are trying to approximate as a probability experiment. This process had nothing to do with selecting universes that support conscious processes. This process has already been instantiated in a specific universe and has very limited time frame for your existence.

You will have to ignore all this knowledge and pretend that the process is completely different, without any evidence to back it up, to satisfy the conditions of Doomsday argument. 

Huh, I didn't know Hanson rejected the Doomsday Argument! Thanks for the context.

What do you mean [in your linked comment] by weighting civilizations by population?

What do you mean by "update our credences-about-astrobiology-etc. accordingly [with our earliness relative to later humans]"?

But I don't think the Grabby/Loud Aliens argument actually explains my, Lorec's, earliness in an anthropic sense, given the assumption that future aliens will also be populous and sentient.


There is no assumption that grabby aliens will be sentient in Hanson's model. They only prevent other sentient civilizations from appearing.

You could make a Grabby Aliens argument without assuming alien sentience, and in fact Hanson doesn't always explicitly state this assumption. However, as far as I understand Hanson's world-model, he does indeed believe these alien civilizations [and the successors of humanity] will by default be sentient.

If you did make a Grabby Aliens argument that did not assume alien sentience, it would still have the additional burden of explaining why successful alien civilizations [which come later] are nonsentient, while sentient human civilization [which is early and gets wiped out soon by aliens] is not so successful. It does not seem to make very much sense to model our strong rivals as, most frequently, versions of us with the sentience cut out.

Thanks, now I better understand your argument.

However, we can expect that any civilization is sentient only for a short time in its development, analogous to the 19th-21st centuries. After that, it becomes controlled by non-sentient AI. Thus, it's not surprising that aliens are not sentient during their grabby stage.

But one may argue that even a grabby alien civilization has to pass through a period when it is sentient.

For that, Hanson's argument may suggest that:

a) All the progenitors of future grabby aliens already exist now (maybe we will become grabby)

b) Future grabby aliens destroy any possible civilization before it reaches the sentient stage in the remote future. 

Thus, the only existing sentient civilizations are those that exist in the early stage of the universe.

I imagine that nonsentient replicators could reproduce and travel through the universe faster than sentient ones, and speed is crucial for the Grabby Aliens argument.

You probably need sentience to figure out space travel, but once you get that done, maybe the universe is sufficiently regular that you can just follow the same relatively simple instructions over and over again. And even if occasionally you meet an irregularity, such as an intelligent, technologically advanced civilization that changed something about their part of the universe, the flood will just go around them, consuming all the resources in their neighborhood, and probably hurting them a lot in the process even if they manage to survive.

Okay, but why would someone essentially burn down the entire universe? First, we don't know what kind of utility function the aliens have. Maybe they value something (paperclips? holy symbols?) way more than sentience. Or maybe they are paranoid about potential enemies, and burning down the rest of the universe seems like a reasonable defense to them. Second, it could happen as an accident; with billions of space probes across the universe, random mutations may happen, and the mutants that lost sentience but gained a little speed would outcompete the probes that follow the originally intended design.

it could happen as an accident; with billions of space probes across the universe, random mutations may happen, and the mutants that lost sentience but gained a little speed would outcompete the probes that follow the originally intended design.

This is, indeed, what I meant by "nonsentient mesa-optimizers" in OP:

This mesa-optimizer will then mesa-optimize all the sentience away [this is a natural conclusion of several convergent arguments originating from both computer science and evolutionary theory]

Why do you expect sentience to be a barrier to space travel in particular, and not interstellar warfare? Interstellar warfare with an intelligent civilization seems much harder than merely launching your von Neumann probe into space.

I agree with you that "civilizations get swept by nonsentient mesa-optimizers" is anthropically frequent. I think this resolves the Doomsday Argument problem. Hanson's position is different from both mine and yours.

Good question! I didn't actually think about this consciously, but I guess my intuitive assumption is that the sufficiently advanced civilizations are strongly constrained by the laws of physics, which are the same for everyone, regardless of their intelligence.

A genius human inventor living in the 21st century could possibly invent a car that is 10x faster than any other car invented so far, or maybe a rocket that is 100x faster than any other rocket. But if an alien civilization already has spaceships that fly at 0.9c, the best their super-minds can do is increase that to 0.99c, or maybe 0.999c; and even if they get to 0.999999c, it won't make much of a difference when two civilizations on opposite sides of the galaxy send their bombs at each other.
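The diminishing returns near c can be made concrete with back-of-the-envelope coordinate travel times across a galaxy of roughly 100,000 light-years (round numbers only; this ignores acceleration and shipboard time dilation):

```python
# Rest-frame time to cross the galaxy at various fractions of c.
# A distance of d light-years covered at v*c takes d/v years.
GALAXY_DIAMETER_LY = 100_000

for v in (0.9, 0.99, 0.999, 0.999999):
    years = GALAXY_DIAMETER_LY / v
    print(f"at {v}c: {years:,.0f} years")
# prints:
# at 0.9c: 111,111 years
# at 0.99c: 101,010 years
# at 0.999c: 100,100 years
# at 0.999999c: 100,000 years
```

Pushing from 0.9c all the way to 0.999999c shaves only about 10% off the trip, which is the sense in which sufficiently advanced civilizations all run into the same physical wall regardless of intelligence.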

Similarly, the human military in the 21st century could invent more powerful bombs, and then maybe better bunkers, and then even more powerful bombs, etc. So the more intelligent side can keep an advantage. But if the alien civilizations invent bombs that can blow up stars, or create black holes and throw them at your solar system, there is probably not much you can do about it. Especially if they don't only launch the bombs at you, but also at all the solar systems around you. Suppose you survive the attack, but all stars within 1000 light-years of you are destroyed. How will your civilization advance now? You need energy, and there is a limit on how much energy you can extract from a star; and if you have no more stars, you are out of energy.

Once you get the "theory of everything" and develop technology to exploit nature at that level, there is nowhere further to go. The speed of light is finite. The matter in your light cone is finite; the amount of energy you can extract from it is finite. If you get to, e.g., 1% of the fundamental limits, it means that no invention can ever make you more than 100x more efficient than you are now. Which means that a civilization that starts with 100x more resources (because they started expanding through the universe earlier, or sacrificed more to become faster) will crush you.

This is not a proof; one could argue that the smarter civilization would e.g. try to escape instead of fighting, or that there must be a way to overcome what seem like the fundamental limitations of physics, like maybe even creating your own parallel universe and escaping there. But this is my intuition about how things would work on the galactic scale. If someone throws a sufficient number of black holes at you, or strips bare all the resources around you, it's game over even for the Space Einstein.