
Fermi paradox of human past, and corresponding x-risks

6 turchin 01 October 2016 05:01PM

Based on known archaeological data, we are the first technological and symbol-using civilisation on Earth (but not the first tool-using species). 
This suggests an analogue of Fermi's paradox: why are we the first civilisation on Earth? Complex adaptations usually evolve more than once; flight, for example, was invented by evolution independently several times.
We could imagine that many civilisations appeared on our planet and became extinct, and by the mediocrity principle we should expect to find ourselves somewhere in the middle of their sequence, not at its start. For example, if 10 civilisations appeared, we would have only a 10 per cent chance of being the first one.

The fact that we are the first such civilisation has strong predictive power about our expected future: it lowers the probability that there will be any other civilisations on Earth, including non-human ones or even a restarting of human civilisation from scratch. This is because, if there were going to be many civilisations, we should not expect to find ourselves to be the first one. (This is a form of Doomsday argument; the same logic is used in Bostrom's article “Adam and Eve”.)

If we are the only civilisation ever to exist in the history of the Earth, then we will probably become extinct not in a mild way, but rather in a way that prevents any other civilisation from appearing. There is a higher probability of future (man-made) catastrophes which will not only end human civilisation, but also prevent the existence of any other civilisation on Earth.

Such a catastrophe would kill most multicellular life. A nuclear war or pandemic is not that type of catastrophe. The catastrophe must be really huge: irreversible global warming, grey goo, or a black hole created in a collider.

Now, I will list possible explanations of the Fermi paradox of the human past and the corresponding x-risk implications:

 

1. We are the first civilisation on Earth, because we will prevent the existence of any future civilisations.

If our existence prevents other civilisations from appearing in the future, how could we do it? We will either become extinct in a very catastrophic way, killing all earthly life, or become a super-civilisation which prevents other species from becoming sapient. So, if we are really the first, it means that "mild extinctions" are not typical for human-style civilisations. Thus pandemics, nuclear wars, devolutions and everything else reversible are ruled out as the main possible methods of human extinction.

If we become a super-civilisation, we will not be interested in preserving the biosphere, as it would be able to create new sapient species. Or it may be that we care about the biosphere so strongly that we will hide very well from newly appearing sapient species, like a cosmic zoo. That would mean past civilisations on Earth may have existed but decided to hide all traces of their existence from us, as this would help us to develop independently. So, the fact that we are the first raises the probability of a very large scale catastrophe in the future, like UFAI or dangerous physical experiments, and reduces the chances of mild x-risks such as pandemics or nuclear war. Another explanation is that any first civilisation exhausts all the resources needed to restart a technological civilisation, such as oil and ores. But in several million years most such resources would be replenished, or new ones exposed by tectonic movement.

 

2. We are not the first civilisation.

2.1. We haven't found any traces of a previous technological civilisation, and what we know places very strong limits on the possibility of their existence. Every civilisation leaves genetic marks, because it moves animals from one continent to another, just as humans brought dingoes to Australia. It must also exhaust several important ores, create artefacts, and create new isotopes. We can be sure that we are the first tech civilisation on Earth in the last 10 million years.

But can we be sure for the past 100 million years? Maybe a civilisation existed very long ago, say 60 million years ago (and died with the dinosaurs). Carl Sagan argued that it could not have happened, because we would find traces, mostly in the form of exhausted oil reserves. The main counter-argument is that cephalisation, that is the evolutionary development of brains, was not advanced enough 60 million years ago to support general intelligence: dinosaurian brains were very small. But bird brains are more mass-efficient than mammalian ones. All these arguments are presented in detail in the excellent article by Brian Trent, "Was there ever a dinosaurian civilisation?"

The main x-risks here are that we might find dangerous artefacts from a previous civilisation, such as weapons, nanobots, viruses, or AIs. And if previous civilisations went extinct, that increases the chances that extinction is typical for civilisations. It also means that there was some reason the extinction occurred, that this killing force may still be active, and that we could excavate it. If they existed recently, they were probably hominids, and if they were killed by a virus, it may also affect humans.

2.2. We killed them. The Maya civilisation created writing independently, but the Spaniards destroyed it. The same is true for the Neanderthals and Homo floresiensis.

2.3. Myths about gods may be signs of such a previous civilisation. Highly improbable.

2.4. They are still here, but they try not to intervene in human history. This is similar to the zoo solution of Fermi's paradox.

2.5. They were a non-tech civilisation, and that is why we can’t find their remnants.

2.6 They may still be here, like dolphins and ants, but their intelligence is non-human and they don't create tech.

2.7 Some groups of humans created advanced tech long before now, but prefer to hide it. Highly improbable, as most tech requires large-scale manufacturing and markets.

2.8 A previous humanoid civilisation was killed by a virus or prion, and our archaeological research could bring it back to life. One hypothesis of Neanderthal extinction is prion infection through cannibalism. The fact is, several hominid species went extinct in the last several million years.

 

3. Civilisations are rare

Millions of species have existed on Earth, but only one was able to create technology. So, it is a rare event. Consequences: cyclic civilisations on Earth are improbable, so the chances that we will be resurrected by another civilisation on Earth are small.

The chances that we would be able to reconstruct civilisation after a large-scale catastrophe are also small (as such catastrophes are atypical for civilisations, which instead quickly proceed to total annihilation or singularity).

It also means that technological intelligence is a difficult step in the evolutionary process, and so could be one of the solutions of the main Fermi paradox.

The safety of the remains of previous civilisations (if any exist) depends on two things: their distance in time and their level of intelligence. The greater the time distance, the safer they are (as most of their dangerous technology will have been destroyed by time, or will not be dangerous to humans, like species-specific viruses).

The risks also depend on the level of intelligence they reached: the higher the intelligence, the riskier. If anything like their remnants is ever found, strong caution is recommended.

For example, the most dangerous scenario for us would be one similar to the beginning of V. Vinge's book “A Fire Upon the Deep”: we find remnants of a very old but very sophisticated civilisation, which include an unfriendly AI or its description, or hostile nanobots.

The most likely place for such artefacts to be preserved is on the Moon, in some cavities near the pole. It is the most stable and radiation shielded place near Earth.

Based on this (absence of) evidence, I estimate the probability of a past tech civilisation at less than 1 per cent. That is low enough to conclude that they most likely did not exist, but not low enough to completely ignore the risk of their artefacts, which in any case is below 0.1 per cent.

Meta: the main idea for this post came to me in a night dream, several years ago.

What degree of cousins are you and I? Estimates of Consanguinity to promote feelings of kinship and empathy

1 chaosmage 20 May 2015 05:10PM

Epistemic status: Wild guesswork based on half-understood studies from way outside my field. More food for thought than trustworthy information.

tl/dr: Estimates of familial relatedness between people should help promote empathy, so here's how to make them - and might this be useful for Effective Altruism?

The why

I don't know how it is for you, but for me, knowing I'm related to someone makes a specific emotional difference. Scenario: I'm at a big family-and-friends get-together, I meet a guy, we get along. (For clarity, let's assume no sexual tension.) And then we're told we're third cousins via some weird aunt. From the moment I'm told, I feel different towards him. Firm, forthcoming, obliging. Some kind of basic kinship emotion, I guess, noticeable when it shifts on these rare occasions but basically going on, deep down in System 1, every time that emailing a remote uncle feels different from emailing a similarly remote associate.

Meanwhile, my System 2 has heard that all humans are at least 50th degree cousins and likes to point out everyone I've ever had sex with was a cousin of some degree. That similarly remote associate where I don't have that kinship feeling - he's a relative too, just a more distant one. And when I notice that, I get a bit of that kinship feeling too...

With me so far? Here's my thesis: the two human feelings of kinship and empathy are closely connected, and to make one of them more salient is to increase the salience of the other.

I don't think this has been tested properly. A. J. Jacobs, who is running a huge family reunion event in New York this summer, said "some ambitious psychology professor needs to conduct a study about whether we deliver lower electrical shocks to people if we know we’re related" and I think he's exactly right.

Has anybody here not heard of circles of empathy? They're a concept invented by the very cool 19th century rationalist William Edward Hartpole Lecky in his "History of European Morals From Augustus to Charlemagne". Peter Singer summarizes it as follows:

Lecky wrote of human concern as an expanding circle which begins with the individual, then embraces the family and ‘soon the circle... includes first a class, then a nation, then a coalition of nations, then all humanity, and finally, its influence is felt in the dealings of man [sic] with the animal world’.

There's more to read about this in Peter Singer's "The Expanding Circle" or Steven Pinker's "The Better Angels of Our Nature", but what strikes me about it is contained in that single sentence: The expansion that is described tracks actual genetic relatedness, or Consanguinity. The list goes down a gradient of (expected) genetic relatedness. This makes the size of the circle of empathy seem to depend on a threshold of how related you need to be to someone in order to care about them.

(Note that Lecky published his "History of European Morals" - with this inclusion of concern about animals - in 1869, i.e. only ten years after the publication of "On the Origin of Species". There was some animal rights legislation before Darwin, but animal rights as a movement only arose after we knew animals to be our relatives.)

On the other hand, those who would promote empathy have always relied on familial vocabulary, chiefly "brother" and "sister", to refer to people who evidently weren't actual brothers or sisters. Martin Luther King, Jesus, the Buddha, Mandela, Gandhi, they all do this. So maybe it works a bit. Maybe it helps trigger that emotional kinship response and that somehow helps people get along.

Now to see how these emotional responses would arise, we could discuss reciprocal altruism and gene-centered Darwinism and whatnot, but "The Selfish Gene" is required reading anyway and I assume you've done your homework. I'd like to instead go to the second part of my thesis, the one about increasing salience.

Recognizing you're related to somebody does something. (Especially if you have an incest fetish, of course.) I propose that whatever it does increases empathy. And empathy might not be a categorically good thing, but it comes pretty close, at least until you extend it to all food groups. So maybe we could increase empathy among people by pointing out their relatedness. And maybe we can do this more vividly, more strikingly than by simply saying "we're all descended from apes, so we're all related, duh" or by boring the non-nerd majority to death with talk of human genetic clustering and fixation indexes.

So I'd like to revisit that "brothers and sisters" thing from MLK and those other guys. Maybe they shouldn't have used figurative language. Maybe a more lasting feeling of kinship can be created by literal language: By telling people how related they are. Detailed ancestry information is being collected at various Wiki-like sites, but even assuming they'll grow and become less US-centric, they don't go back very far (except around very famous people) and what came before remains guesswork. So let's do some Fermi-ish estimates.

The how

The drop dead amazing Nature article "Modelling the recent common ancestry of all living humans" is way too careful and scientific to put an exact number on how long ago the last common ancestor lived, unfortunately. But the mean date their simulations come up with is 1415 BC, which is approximately 120 generations ago, so let's say really remote people like the Karitiana tribe are, at most, something like 125th degree cousins of all of us. So that's a useful upper bound for the degree of cousinhood between any two arbitrary humans, such as you and me.

The lower bound could be something like 3 - if you and I were that closely related, we'd share a great-great-grandparent and could probably ascertain rather than guess that. With fairly extensive genealogy, the lower bound might go up to around 5 - which is the level where you need to look at 64 ancestors for each of us who lived in the middle of the 19th century and failed to use Facebook. We'd find it hard to ascertain whether your great-great-great-great-grandmother Mary was identical to mine.

There are a lot of special cases where the lower bound can be higher. If both people involved know that their families more than 3 generations back were deep-rooted peasant folk from two distinct populations, the history books might tell them how many centuries further back are very unlikely to contain a common ancestor. (This will of course be much rarer among descendants of immigrants, like Americans, than among citizens of older or more rural countries.) If they're of different ethnicities, castes or classes that wouldn't normally date each other 80 years ago, the lower bound should probably go up a few more generations. If both people involved are Icelanders, they can just look up their last common ancestor in the comprehensive Icelandic family tree. But let's assume you and I don't fall under any of these special cases, and we're stuck with a lower bound of 3. Now between that and 125, how do we narrow it down?

Turns out the authors of that gorgeous Nature paper don't hand out access to their simulations to random dudes who just email them. So let's see how far we get the hard way.

In a completely random mating model (where people do not tend to mate with people who happen to live near them, i.e. happen to be descendants of the same people), your number of ancestors doubles with every generation you go back, in a sort of ancestor tree that grows backwards. We're looking for the point where the two ancestor trees first meet. If we assume generations have homogeneous lengths (which implies further simplifying assumptions, like moms and dads being the same age) and further assume only people from within the same generation have kids with each other, cousins of the Nth degree have a common ancestor N+1 generations ago, and each has 2^(N+1) ancestors belonging to that generation.

This means that for you and me to be, say, 15th degree cousins, our two sets of 2^(15+1) = 65536 ancestors have to have one person in common, some 480 years ago, assuming 30 years as mean parenthood age. Of course we each probably have fewer than 65536 unique ancestors due to... um... "reticulations".
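To make the toy model concrete, here is a minimal Python sketch of that calculation (the 30-year generation length and the three example degrees are just the assumptions from the text, not data):

```python
# Naive random-mating model: Nth-degree cousins share an ancestor
# N+1 generations back, and each person has 2^(N+1) ancestor slots
# in that generation.

GENERATION_YEARS = 30  # mean parenthood age assumed above

def cousin_degree_stats(degree):
    generations = degree + 1
    return generations, 2 ** generations, generations * GENERATION_YEARS

for degree in (3, 15, 125):  # lower bound, worked example, upper bound
    g, slots, years = cousin_degree_stats(degree)
    print(f"degree {degree}: common ancestor {g} generations "
          f"(~{years} years) back, {slots:,} ancestor slots each")

# The model breaks down once the ancestor count exceeds the world
# population: 2^(N+1) passes 7 billion around N = 32.
```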

But empirically, it seems that "a pair of modern Europeans living in neighboring populations share around 2–12 genetic common ancestors from the last 1,500 years", and even individuals from opposite ends of Europe will normally have common ancestors if you search back 3,000 years (source). That isn't what you get from the simplistic model above - the number of ancestors it calculates exceeds the world population less than 32 generations (about 800 years) ago. The empirical genetic data from this paper indicate that the median first common ancestor between me and anybody in central Europe probably lived something like 1,200 years (or 40 generations) ago, and that any two people anywhere in Europe would probably be at most 100th degree cousins.

Around 600 years ago is a good time to look at, because that's shortly before intercontinental travel started to intricately connect all regions of the world, including genetically. If most of your 600-years-ago ancestors lived outside Europe, you and I might still be cousins of less than 25th degree - maybe you have some ancestor who left for Europe 300 years ago, leaving siblings behind (your ancestors) and having kids in Europe (mine). Or vice versa. But that kind of thing is unlikely, and since we're doing rough estimates, I suggest we round that probability down to zero.

In genetic studies, no other continent is anywhere near as well-studied as Europe, so I guess we'll just have to roll with it and assume that other places behave about the same as this paper found, and that the nice exponential drop-off of shared ancestry with geographic distance seen in Europe also holds elsewhere. America and Australia, as continents of immigrants, continue to be special cases. But for two people with families from, say, West Africa, I'd be comfortable assuming that if they're from roughly the same large region (say around the Bight of Benin) they're probably something like 40th degree cousins, and if not, they're still something like 100th degree cousins.

It gets only slightly more complicated if the set of ancestors you know - say your four grandparents - are a mix of descendants from different regions or continents. Just add the number of generations between you and them to your expected degree of cousinhood to everybody from that region or continent.

Needless to say these are all wild guesses. I'm basically hoping someone more qualified than me will see this and be horrified enough to go do the job properly.

Now I'm not an American, but statistically you probably are, and you might be more interested in knowing how closely you're related to other Americans - your boss, your sexual partners, or Mel Gibson. The bad news is that as a member of a nation of relatively recent immigrants, and particularly if your ancestors didn't all come from different continents, you have a harder time estimating most recent common ancestors than most other people on Earth. The good news, however, is that the data collected at the large ancestry sites ancestry.com, FamilySearch.org, Geni.com and WikiTree.com are all growing fastest in the US-centric part of their "world trees".

For cousinhood between people whose ancestors seem to have lived on entirely separate continents as far as anyone knows, I think we can only fall back on our upper bound of 125 degrees of cousinhood. Things get fuzzy that far back: the world population was much smaller, and the population of those who have descendants living today is smaller still. Shared ancestry within any particular generation remains unlikely, but over the centuries and millennia, between trade (particularly in slaves), the various empires and the mass rapes of warfare, genes did get mixed around. Again, see that spectacular Nature paper if you still haven't.

Side note: The most recent common ancestor of two arbitrarily chosen people on different continents is likely to be someone who had kids on different continents. So it is probably a very rich person, a sailor or a soldier, i.e. a male. In general, the number of unique males in anybody's ancestor tree will likely be much smaller than the number of unique females. I expect the difference will be sharper in most recent common ancestors of humans from different continents, because women have shorter fertility windows inside which to travel intercontinentally and don't seem to have moved nearly as much as men except as slaves.

The point of all this is simple. Now you can look at somebody and figure she's not only your cousin, you even have a guess as to what degree of cousin she is. I like to do that when I'm angry with people, because for me, it makes a distinct emotional difference. Maybe try it and see if it works for you too.

Relation to the care allocation problem

I suspect this cousinhood thing could be a fairly principled solution to the problem of how to allocate caring between humans and animals, which Yvain/Scott laid out in a recent SSC post. Why not go by actual (known or estimated) blood relations, and privilege closer relatives over more distant ones?

Our last common ancestor with chimps lived something like 5 to 6 million years ago, so our ancestor trees merge about 250,000 (human) generations ago, making chimps something like quarter-million-degree cousins of all of us. Generations get a lot shorter further back, so our last common ancestor with cattle and dogs, about 92 million years ago, may be 30 million generations ago. Birds are much more distant still; our last common ancestor with them lived around 310 million years ago, and so forth. (Richard Dawkins' The Ancestor's Tale has much more on this.) For me, this maps rather nicely onto my intuitive prejudices as to how much I should care about which creatures. It fails to capture that I care for plants far more than I care for bacteria, but EA has nothing to improve on in that department.
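The same back-of-envelope arithmetic in Python, using the rough divergence dates above and guessed mean generation lengths (24 years for the human/chimp line, 3 years averaged over the deeper mammalian past - both back-calculated from the figures in the paragraph):

```python
# Rough cross-species cousin degrees: a common ancestor G generations
# back makes two lineages roughly (G-1)th degree cousins, so for these
# huge numbers, degree ~ generations.

def generations_since_divergence(divergence_years, mean_generation_years):
    return divergence_years / mean_generation_years

print(f"{generations_since_divergence(6e6, 24):,.0f}")   # chimps: ~250,000
print(f"{generations_since_divergence(92e6, 3):,.0f}")   # cattle/dogs: ~30 million
```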

If EA has to have impartiality in the sense that your neighbor can't be more important to you than a tribesman in Mongolia, this isn't EA. Quoth Yvain:

allowing starving Third World people into the circle of concern totally pushes out most First World charities like art museums and school music programs and holiday food drives. This is a scary discovery and most people shy away from it. Effective altruists are the people who are selected for not having shied away from it.

So anybody trying to grow EA might want to make that step easier. Maybe a "closeness multiplier" on units of caring works better than a series of unprincipled exceptions, and still gets across the idea that units of caring are to be distributed between everybody (or everybody's QALYs), if unevenly. And then to become more impartial would be to have that multiplier approach 1.
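Purely as an illustration of what such a multiplier could look like - the functional form and the decay rate here are invented for this sketch, not something from the post:

```python
def closeness_multiplier(cousin_degree, impartiality, decay=0.99):
    """Hypothetical weight on someone's units of caring.
    impartiality = 0: weight decays with cousin degree;
    impartiality = 1: the multiplier is 1 for everyone."""
    partial = decay ** cousin_degree  # arbitrary decay per degree
    return impartiality + (1.0 - impartiality) * partial

print(closeness_multiplier(3, 0.5))    # a close cousin
print(closeness_multiplier(125, 0.5))  # the upper bound from "The how"
```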

And if that were the case, my personal preference for how to design that multiplier would be that it shouldn't rely on arbitrary constructs like citizenships. Maybe if EAs want to find a principled solution to the care allocation problem, consanguinity should be one of the options.

Resolving the Fermi Paradox: New Directions

12 jacob_cannell 18 April 2015 06:00AM

Our sun appears to be a typical star: unremarkable in age, composition, galactic orbit, or even in its possession of many planets.  Billions of other stars in the Milky Way have similar general parameters and orbits that place them in the galactic habitable zone.  Extrapolations of recent exoplanet surveys reveal that most stars have planets, removing yet another potential unique dimension for a great filter in the past.

According to Google, there are 20 billion Earth-like planets in the Galaxy.

A paradox indicates a flaw in our reasoning or our knowledge, which upon resolution, may cause some large update in our beliefs.

Ideally we could resolve this through massive multiscale Monte Carlo computer simulations to approximate Solomonoff Induction on our current observational data.  If we survive and create superintelligence, we will probably do just that.

In the meantime, we are limited to constrained simulations, Fermi estimates, and other shortcuts to approximate the ideal Bayesian inference.

The Past

While there is still obvious uncertainty concerning the likelihood of the series of transitions along the path from the formation of an earth-like planet around a sol-like star up to an early tech civilization, the general direction of the recent evidence flow favours a strong Mediocrity Principle.

Here are a few highlight developments from the last few decades relating to an early filter:

  1. The time window between the formation of Earth and the earliest life has been narrowed to a brief interval.  Panspermia has also gained ground, with some recent complexity arguments favoring a common origin of life around 9 billion years ago.[1]
  2. The discovery of various extremophiles indicates life is robust to a wider range of environments than the norm on Earth today.
  3. Advances in neuroscience and studies of animal intelligence lead to the conclusion that the human brain is not nearly as unique as once thought.  It is just an ordinary scaled-up primate brain, with a cortex enlarged to 4x the size of a chimpanzee's.  Elephants and some cetaceans have cortical neuron counts similar to the chimpanzee's, and demonstrate similar or greater levels of intelligence in terms of rituals, problem solving, tool use, communication, and even understanding rudimentary human language.  Elephants, cetaceans, and primates are widely separated lineages, indicating robustness and inevitability in the evolution of intelligence.

So, if there is a filter, it probably lies in the future (or at least the new evidence tilts us in that direction - but see this reply for an argument for an early filter).

The Future(s)

When modelling the future development of civilization, we must recognize that the future is a vast cloud of uncertainty compared to the past.  The best approach is to focus on the most key general features of future postbiological civilizations, categorize the full space of models, and then update on our observations to determine what ranges of the parameter space are excluded and which regions remain open.

An abridged taxonomy of future civilization trajectories:

Collapse/Extinction:

Civilization is wiped out by an existential catastrophe that sterilizes the planet sufficiently to kill most large multicellular organisms, essentially resetting the evolutionary clock by a billion years.  Given the potential dangers of nanotech/AI/nuclear weapons - and then aliens - I believe this possibility is significant, i.e. in the 1% to 50% range.

Biological/Mixed Civilization:

This is the old-skool sci-fi scenario.  Humans or our biological descendants expand into space.  AI is developed but limited to human intelligence, like C-3PO.  No or limited uploading.

This leads eventually to slow colonization, terraforming, perhaps eventually dyson spheres etc.

This scenario is almost not worth mentioning: prior < 1%.  Unfortunately, SETI in its current form is still predicated on a world model that assigns a high prior to these futures.

PostBiological Warm-tech AI Civilization:

This is Kurzweil/Moravec's sci-fi scenario.  Humans become postbiological, merging with AI through uploading.  We become a computational civilization that then spreads out at some fraction of the speed of light to turn the galaxy into computronium.  This particular scenario is based on the assumption that energy is a key constraint, and that civilizations are essentially stellavores which harvest the energy of stars.

One of the very few reasonable assumptions we can make about any superintelligent postbiological civilization is that higher intelligence involves increased computational efficiency.  Advanced civs will upgrade into physical configurations that maximize computation capabilities given the local resources.

Thus to understand the physical form of future civs, we need to understand the physical limits of computation.

One key constraint is the Landauer limit, which states that the erasure (or cloning) of one bit of information requires a minimum of kT ln 2 joules.  At room temperature (293 K), this corresponds to a minimum of about 0.017 eV to erase one bit.  Minimum is, however, the keyword here: according to the principle, the probability of the erasure succeeding is only 50% at the limit.  Reliable erasure requires some multiple of the minimal expenditure - a reasonable estimate being about 100kT, or 1 eV, as the minimum for bit erasures at today's levels of reliability.

Now, the second key consideration is that Landauer's limit does not include the cost of interconnect, which already dominates the energy cost in modern computing.  Just moving bits around dissipates energy.

Moore's Law is approaching its asymptotic end in a decade or so due to these hard physical energy constraints and the related miniaturization limits.

I assign a prior to the warm-tech scenario that is about the same as my estimate of the probability that the more advanced cold-tech (reversible quantum computing, described next) is impossible: < 10%.

From Warm-tech to Cold-tech

There is a way forward to vastly increased energy efficiency, but it requires reversible computing (to increase the ratio of computations per bit erasures), and full superconducting to reduce the interconnect loss down to near zero.

The path to enormously more powerful computational systems necessarily involves transitioning to very low temperatures, and the lower the better, for several key reasons:

  1. There is the obvious immediate gain that one gets from lowering the cost of bit erasures: a bit erasure at room temperature costs about 100 times more than a bit erasure at the cosmic background temperature (2.7 K), and roughly thirty thousand times more than an erasure at 0.01 K, the current achievable limit for large objects (see the sketch after this list)
  2. Low temperatures are required for most superconducting materials regardless.
  3. The delicate coherence required for practical quantum computation demands, or at least works best at, ultra-low temperatures.
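To put rough numbers on point 1, here is a minimal sketch of the Landauer cost at the three temperatures mentioned; since the cost is linear in T, the cost ratios are just temperature ratios:

```python
import math

K_B_EV = 8.617e-5  # Boltzmann constant, eV per kelvin

def landauer_limit_ev(temperature_k):
    """Minimum energy in eV to erase one bit at a given temperature."""
    return K_B_EV * temperature_k * math.log(2)

room, cmb, cold = 293.0, 2.7, 0.01
print(f"293 K : {landauer_limit_ev(room):.4f} eV/bit")   # ~0.0175 eV
print(f"2.7 K : {landauer_limit_ev(cmb):.6f} eV/bit")
print(f"0.01 K: {landauer_limit_ev(cold):.2e} eV/bit")
print(f"room/CMB cost ratio  : {room / cmb:.0f}x")       # ~109x
print(f"room/0.01K cost ratio: {room / cold:,.0f}x")     # ~29,300x
```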
At a more abstract level, the essence of computation is precise control over the physical configurations of a device as it undergoes complex state transitions.  Noise/entropy is the enemy of control, and temperature is a form of noise.  

Assuming large scale quantum computing is possible, then the ultimate computer is thus a reversible massively entangled quantum device operating at absolute zero.  Unfortunately, such a device would be delicate to a degree that is hard to imagine - even a single misplaced high energy particle could cause enormous damage.

In this model, an advanced computational civilization would take the form of a compact body (anywhere from asteroid to planet size) that employs layers of sophisticated shielding to deflect as much of the incoming particle flux as possible.  The ideal environment for such a device is as far away from hot stars as one can possibly go, and the farther the better.  The extreme energy efficiency of advanced low-temperature reversible/quantum computing implies that energy is not a constraint: these civilizations could probably power themselves with fusion reactors for millions, if not billions, of years.

Stellar Escape Trajectories

For a cold-tech civilization, one interesting long term strategy involves escaping the local star's orbit to reach the colder interstellar medium, and eventually the intergalactic medium.

If we assume that these future civs have long planning horizons (reasonable), we can consider this an investment that has an initial cost in terms of the energy required to achieve escape velocity and a return measured in the future integral of computation gained over the trajectory due to increased energy efficiency.  Expendable boost mass in the system can be used, and domino chains of complex chaotic gravitational assist maneuvers computed by deep simulations may offer a route to expel large objects using reasonable amounts of energy.[3]

The Great Game 

Given the constraints of known physics (ie no FTL), it appears that the computational brains housing more advanced cold-tech civs will be incredibly vulnerable to hostile aliens.  A relativistic kill vehicle is a simple technology that permits little avenue for direct defense.  The only strong defense is stealth.

Although the utility functions and ethics of future civs are highly speculative, we can observe that a very large space of utility functions lead to similar convergent instrumental goals involving control over one's immediate future light cone.  If we assume that some civs are essentially selfish, then the dynamics suggest successful strategies will involve stealth and deception to avoid detection combined with deep simulation sleuthing to discover potential alien civs and their locations.

If two civs both discover each other's locations around the same time, then MAD (mutually assured destruction) dynamics take over and cooperation has stronger benefits.  The vast distances involved suggest that one-sided discoveries are more likely.

Spheres of Influence

A new civ, upon achieving the early postbiological stage of development (Earth in, say, 2050?), should be able to resolve the general answer to the Fermi paradox using advanced deep simulation alone - long before any probes would reach distant stars.  Assuming that the answer is "lots of aliens", then further simulations could be used to estimate the relative likelihood of elder civs interacting with the past lightcone.

The first few civilizations would presumably realize that the galaxy is more likely to be mostly colonized, in which case the ideal strategy probably involves expansion of actuator type devices (probes, construction machines) into nearby systems combined with construction and expulsion of advanced stealthed coldtech brains out into the void.  On the other hand, the very nature of the stealth strategy suggests that it may be hard to confidently determine how colonized the galaxy is. 

For civilizations appearing later, the situation is more complex.  The younger a civ estimates itself to be in the cosmic order, the more likely it becomes that its local system has already come under an alien influence.

From the perspective of an elder civ, an alien planet at a pre-singularity level of development has no immediate value.  Raw materials are plentiful - and most of the baryonic mass appears to be interstellar and free-floating.  The tiny relative value of any raw materials on a biological world is probably outweighed - in the long run - by the potential future value of information trade with the resulting mature civ.

Each biological world - or seed of a future elder civ - although perhaps similar in the abstract, is unique in its details.  Each such world is valuable for the potential unique knowledge/insights it may eventually generate - directly or indirectly.  From a purely instrumental standpoint, there is some value in preserving biological worlds to increase general knowledge of civ development trajectories.

However, there could exist cases where the elder civ may wish to intervene.  For example, if deep simulations predict that the younger world will probably develop into something unfriendly - like an aggressive selfish/unfriendly replicator - then small perturbations in the natural trajectory could be called for.  In short, the elder civ may have reasons to occasionally 'play god'.

On the other hand, any intervention itself would leave a detectable signature or trace in the historical trajectory which in turn could be detected by another rival or enemy civ!  In the best case these clues would only reveal the presence of an alien influence.  In the worst case they could reveal information concerning the intervening elder civ's home system and the likely locations of its key assets.

Around 70,000 years ago, we had a close encounter with Scholz's star, which passed within 0.8 light years of the sun (inside the Oort cloud).  If the galaxy is well colonized, flybys such as this have potentially interesting implications (that particular flyby corresponds to the estimated time of the Toba super-eruption, for example).

Conditioning on our Observational Data

Over the last few decades SETI has searched a small portion of the parameter space covering potential alien civs.  

SETI's original main focus concerned the detection of large permanent alien radio beacons.  We can reasonably rule out models that predict advanced civs constructing high energy omnidirectional radio beacons.

At this point we can also mostly rule out large hot-tech civilizations (energy constrained civilizations) that harvest most of the energy from stars.

Obviously detecting cold-tech civilizations is considerably more difficult, and perhaps close to impossible if advanced stealth is a convergent strategy.

However, determining whether the galaxy as a whole is colonized by advanced stealth civs is a much easier problem.  In fact, one way or another the evidence is already right in front of us.  We now know that most of the mass in the galaxy is dark rather than light.  I have assumed that coldtech still involves baryonic matter and normal physics, but of course there is also the possibility that non-baryonic matter could be used for computation.  Either way, the dark matter situation is favorable.  Focusing on normal baryonic matter, the ratio of dark/cold to light/hot is still large - very favorable for colonization.

Observational Selection Effects

All advanced civs will have strong instrumental reasons to employ deep simulations to understand and model developmental trajectories for the galaxy as a whole and for civilizations in particular.  A very likely consequence is the production of large numbers of simulated conscious observers, a la the Simulation Argument.  Universes with the more advanced low-temperature reversible/quantum computing civilizations will tend to produce many more simulated observer moments, and are thus intrinsically more likely than one would otherwise expect - perhaps massively so.

 

Rogue Planets


If the galaxy is already colonized by stealthed coldtech civs, then one prediction is that some fraction of the stellar mass has been artificially ejected.  Some recent observations actually point - at least weakly - in this direction.

From "Nomads of The Galaxy"[4]

We estimate that there may be up to ∼10^5 compact objects in the mass range 10^−8 to 10^−2 M⊙ per main-sequence star that are unbound to a host star in the Galaxy. We refer to these objects as nomads; in the literature a subset of these are sometimes called free-floating or rogue planets.

Although the error range is still large, it appears that free floating planets outnumber planets bound to stars, and perhaps by a rather large margin.

Assuming the galaxy is colonized:  It could be that rogue planets form naturally outside of stars and then are colonized.  It could be they form around stars and then are ejected naturally (and colonized).  Artificial ejection - even if true - may be a rare event.  Or not.  But at least a few of these options could potentially be differentiated with future observations - for example if we find an interesting discrepancy in the rogue planet distribution predicted by simulations (which obviously do not yet include aliens!) and actual observations.

Also: if rogue planets outnumber stars by a large margin, then it follows that rogue planet flybys are more common in proportion.

 

Conclusion

SETI to date allows us to exclude some regions of the parameter space for alien civs, but the regions excluded correspond to low prior probability models anyway, based on the postbiological perspective on the future of life.  The most interesting regions of the parameter space probably involve advanced stealthy aliens in the form of small compact cold objects floating in the interstellar medium.

The upcoming WFIRST telescope should shed more light on dark matter and enhance our microlensing detection abilities significantly.  Sadly, its planned launch date isn't until 2024.  Space development is slow.

 

Make your own cost-effectiveness Fermi estimates for one-off problems

9 owencb 11 December 2014 12:00PM

In some recent work (particularly this article) I built models for estimating the cost effectiveness of work on problems when we don’t know how hard those problems are. The estimates they produce aren’t perfect, but they can get us started where it’s otherwise hard to make comparisons.


Now I want to know: what can we use this technique on? I have a couple of applications I am working on, but I’m keen to see what estimates other people produce.


There are complicated versions of the model which account for more factors, but we can start with a simple version. This is a tool for initial Fermi calculations: it’s relatively easy to use but should get us around the right order of magnitude. That can be very useful, and we can build more detailed models for the most promising opportunities.


The model is given by:

Expected benefit of a marginal unit of resources = p*B / (R(0) * log(y/z))

This expresses the expected benefit of adding another unit of resources to solving the problem. You can denominate the resources in dollars, researcher-years, or another convenient unit. To use this formula we need to estimate four variables:


  • R(0) denotes the current resources going towards the problem each year. Whatever units you measure R(0) in, those are the units we’ll get an estimate for the benefit of. So if R(0) is measured in researcher-years, the formula will tell us the expected benefit of adding a researcher year.

    • You want to count all of the resources going towards the problem. That includes the labour of those who work on it in their spare time, and some weighting for the talent of the people working in the area (if you doubled the budget going to an area, you couldn’t get twice as many people who are just as good; ideally we’d use an elasticity here).

    • Some resources may be aimed at something other than your problem, but be tangentially useful. We should count some fraction of those, according to the amount of fully problem-dedicated resources they seem equivalent to.

  • B is the annual benefit that we’d get from a solution to the problem. You can measure this in its own units, but whatever you use here will be the units of value that come out in the cost-effectiveness estimate.

  • p and y/z are parameters that we will estimate together. p is the probability of getting a solution by the time y resources have been dedicated to the problem, if z resources have been dedicated so far. Note that we only need the ratio y/z, so we can estimate this directly.

    • Although y/z is hard to estimate, we will take a (natural) logarithm of it, so don’t worry too much about making this term precise.

    • I think it will often be best to use middling values of p, perhaps between 0.2 and 0.8.

And that’s it.


Example: How valuable is extra research into nuclear fusion? Assume:

  • R(0) = $5 billion (after a quick google turns up $1.5B for current spending, and adjusting upwards to account for non-financial inputs);

  • B = $1000 billion (guesswork, a bit over 1% of the world economy; a fraction of the current energy sector);

  • There’s a 50% chance of success (p = 0.5) by the time we’ve spent 100 times as many resources as today (log(y/z) = log(100) = 4.6).


Putting these together would give an expected societal benefit of (0.5*$1000B)/($5B*4.6) ≈ $22 for every dollar spent. This is high enough to suggest that we may be significantly under-investing in fusion, and that a more careful calculation (with better-researched numbers!) might be justified.
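The same calculation wrapped in a small Python helper (the function and its name are just for illustration; log is the natural logarithm, as noted above):

```python
import math

def marginal_benefit(p, B, R0, y_over_z):
    """Expected benefit per extra unit of resources: p*B / (R(0)*log(y/z)).
    Output is in units of B per unit of R(0)."""
    return p * B / (R0 * math.log(y_over_z))

# Nuclear fusion example: p = 0.5, B = $1000B/year, R(0) = $5B/year,
# 50% chance of success by 100x the resources spent so far.
print(marginal_benefit(p=0.5, B=1000e9, R0=5e9, y_over_z=100))  # ~21.7
```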

Caveats

To get the simple formula, the model made a number of assumptions. Since we’re just using it to get rough numbers, it’s okay if we don’t fit these assumptions exactly, but if they’re totally off then the model may be inappropriate. One restriction in particular I’d want to bear in mind:


  • It should be plausible that we could solve the problem in the next decade or two.


It’s okay if this is unlikely, but I’d want to change the model if I were estimating the value of e.g. trying to colonise the stars.

Request for applications

So -- what would you like to apply this method to? What answers do you get?


To help structure the comment thread, I suggest attempting only one problem in each comment. Include the value of p, and the units of R(0) and B that you'd like to use. Then you can give your estimates for R(0), B, and y/z as a comment reply, and so can anyone else who wants to give estimates for the same thing.


I’ve also set up a google spreadsheet where we can enter estimates for the questions people propose. For the time being anyone can edit this.


Have fun!

Linked decisions and a "nice" solution for the Fermi paradox

2 Beluga 07 December 2014 02:58PM

One of the more speculative solutions of the Fermi paradox is that all civilizations decide to stay home, thereby meta-causing other civilizations to stay home too, and thus allowing the Fermi paradox to have a nice solution. (I remember reading this idea in Paul Almond's writings about evidential decision theory, which unfortunately seem to be no longer available online.) The plausibility of this argument is definitely questionable. It requires a very high degree of goal convergence both within and among different civilizations. Let us grant this convergence and assume that, indeed, most civilizations arrive at the same decision, and that they make their decision knowing this. One paradoxical implication then is: if a civilization decides to attempt space colonization, it is virtually guaranteed to face unexpected difficulties (for otherwise space would already be colonized, unless it is the first civilization in its neighborhood attempting space colonization). If, on the other hand, everyone decides to stay home, there is no reason for thinking that there would be any unexpected difficulties if one tried. Space colonization can either be easy, or you can try it, but not both.

Can the basic idea behind the argument be formalized? Consider the following game: There are N>>1 players. Each player in turn is offered the chance to push a button. Pushing the button yields a reward R>0 with probability p and a punishment P<0 otherwise. (R corresponds to successful space colonization, while P corresponds to a failed colonization attempt.) Not pushing the button gives zero utility. If a player pushes the button and receives R, the game is immediately aborted, while the game continues if a player receives P. Players do not know how many other players were offered the button before them; they only know that no player before them received R. Players also don't know p. Instead, they have a probability distribution u(p) over possible values of p. (u(p)>=0, and the integral of u(p) from 0 to 1 is int_{0}^{1}u(p)dp=1.) We also assume that the decisions of the different players are perfectly linked.

Naively, it seems that players simply have an effective success probability p_eff,1=int_{0}^{1}p*u(p)dp and they should push the button iff p_eff,1*R+(1-p_eff,1)*P>0. Indeed, if players decide not to push the button they should expect that pushing the button would have given them R with probability p_eff,1. The situation becomes more complicated if a player decides to push the button. If a player pushes the button, they know that all players before them have also pushed the button and have received P. Before taking this knowledge into account, players are completely ignorant about the number i of players who were offered to push the button before them, and have to assign each number i from 0 to N-1 the same probability 1/N. Taking into account that all players before them have received P, the variables i and p become correlated: the larger i, the higher the probability of a small value of p. Formally, the joint probability distribution w(i,p) for the two variables is, according to Bayes’ theorem, given by w(i,p)=c*u(p)*(1-p)^i, where c is a normalization constant. The marginal distribution w(p) is given by w(p)=sum_{i=0}^{N-1}w(i,p). Using N>>1, we find w(p)=c*u(p)/p. The normalization constant is thus c=[int_{0}^{1}u(p)/p*dp]^{-1}. Finally, we find that the effective success probability taking the linkage of decisions into account is given by

p_eff,2 = int_{0}^{1}p*w(p)dp = c = [int_{0}^{1}u(p)/p*dp]^{-1} .

This is the expected chance of success if players decide to push the button. Players should push the button iff p_eff,2*R+(1-p_eff,2)*P>0. It follows from the convexity of the function x->1/x (for positive x) that p_eff,2<=p_eff,1. So by deciding to push the button, players decrease their expected success probability from p_eff,1 to p_eff,2; they cannot both push the button and keep the unaltered success probability p_eff,1. Linked decisions can explain why no one pushes the button when p_eff,2*R+(1-p_eff,2)*P<0, even though we might have p_eff,1*R+(1-p_eff,1)*P>0, so that pushing the button naively seems to have positive expected utility.

It is also worth noting that if u(0)>0, the integral int_{0}^{1}u(p)/p*dp diverges such that we have p_eff,2=0. This means that given perfectly linked decisions and a sufficiently large number of players N>>1, players should never push the button if their distribution u(p) satisfies u(0)>0, irrespective of the ratio of R and P. This is due to an observer selection effect: If a player decides to push the button, then the fact that they are even offered to push the button is most likely due to p being very small and thus a lot of players being offered to push the button.
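As a numerical illustration of these formulas, take u(p) uniform on [a, 1]: any a > 0 keeps int_{0}^{1}u(p)/p dp finite, and letting a shrink toward 0 drives p_eff,2 toward zero, exactly as described. A minimal sketch:

```python
import numpy as np

def effective_probabilities(u, a, b, n=1_000_000):
    """Numerically approximate p_eff,1 = int p*u(p) dp and
    p_eff,2 = [int u(p)/p dp]^(-1) for a density u on [a, b]."""
    p = np.linspace(a, b, n)
    w = u(p)
    w = w / w.sum()                  # normalize as a discrete distribution
    return (p * w).sum(), 1.0 / (w / p).sum()

uniform = lambda p: np.ones_like(p)
for a in (0.5, 0.1, 0.01, 0.001):
    e1, e2 = effective_probabilities(uniform, a, 1.0)
    print(f"u uniform on [{a}, 1]: p_eff,1 = {e1:.3f}, p_eff,2 = {e2:.3f}")
```

For a = 0.1 this gives p_eff,1 ≈ 0.55 but p_eff,2 ≈ 0.39, and the gap widens as a shrinks, matching the convexity argument above.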

In order to greatly reduce X-risk, design self-replicating spacecraft without AGI

1 chaosmage 20 September 2014 08:25PM

tl/dr: If we built a working self-replicating spacecraft, that would prove we're past the Great Filter. Therefore, certainty that we can do that would eliminate much existential risk. It is a potentially highly visible project that would give publicity to reasons not to include AGI. Therefore, serious design work on a self-replicating spacecraft should have a high priority.

I'm assuming you've read Stuart_Armstrong's excellent recent article on the Great Filter. In the discussion thread for that, RussellThor observed:

if we make a simple replicator and have it successfully reach another solar system (with possibly habitable planets) then that would seem to demonstrate that the filter is behind us.

If that is obvious to you, skip to the next subheading.

The evolution from intelligent spacefaring species to producer of self-replicating spacecraft (henceforth SRS, used in the plural) is inevitable, if SRS are possible. This is simply because the matter and negentropy available in the wider universe is a staggeringly vast resource of staggering value. Even species who are unlikely to ever visit and colonize other stars in the form that evolution gave them (this includes us) can make use of these resources. For example, if we could build on (or out of) empty planets supercomputers that receive computation tasks by laser beam and output results the same way, we would be economically compelled to do so, simply because those supercomputers could handle computational tasks that no computer on Earth could complete in less than the time it takes that laser beam to travel there and back. Such a supercomputer would not need to run even a weak AI to be worth more than the cost of sending the probe that builds it.

Without a doubt there are countless more possible uses for these, shall we say, exoresources. If Dyson bubbles or mind uploads or multistellar hypertelescopes or terraforming are possible, each of these alone creates another huge incentive to build SRS. Even mere self-replicating refineries that break up planets into more readily accessible resources for future generations to draw from would be an excellent investment. But the obvious existence of the supercomputer incentive is already reason enough to do it.

All the Great Filter debate boils down to the question of how improbable our existence really is. If we're probable, many intelligent species capable of very basic space travel should exist. If we're not, they shouldn't. We know there doesn't appear to be any species inside a large fraction of our light cone so capable of space travel that it has sent out SRS. So the only way we could be probable is if there's a Great Filter ahead of us, stopping us (and everyone else capable of basic space travel) from becoming the kind of species that sends out SRS. If we became such a species, we'd know we're past the Filter. And while we still wouldn't know how improbable each of the conditions that allowed for our existence was, we'd know that, put together, they multiply into some very small probability of our existence, and a very small probability of any comparable species existing in a large section of our light cone.

LW users generally seem to think SRS are doable and that means we're quite improbable, i.e. the Filter is behind us. But lots of people are less sure, and even more people haven't thought about it. The original formulation of the Drake equation included a lifespan of civilizations partly to account for the intuition that a Great Filter type event could be coming in the future. We could be more sure than we are now, and make a lot of people much more sure than they are now, about our position in reference to that Filter. And that'd have some interesting consequences.

How knowing we're past the Great Filter reduces X-risk

The single largest X-risk we've successfully eliminated is the impact of an asteroid large enough to destroy us entirely. And we didn't do that by moving any asteroids; we simply mapped all of the big ones. We now know there's no asteroid that is both large enough to kill us off and coming soon enough that we can't do anything about it. Hindsight bias tells us this was never a big threat - but look ten years back and you'll find The Big Asteroid on every list of global catastrophic risks, usually near the top. We eliminated that risk simply by observation and deduction, by finding out it did not exist rather than removing it.

Obviously a working SRS that gives humanity outposts in other solar systems would reduce most types of X-risk. But even just knowing we could build one should decrease our confidence in the ability of X-risks to take us out entirely. After all, if, as Bostrom argues, the possibility that the Filter is ahead of us increases the probability of every X-risk, then the knowledge that it is not ahead of us has to be evidence against all of them, except those that could kill a Type 3 civilization. And if, as Bostrom says in that same paper, finding life elsewhere that is closer to our stage of development is worse news than finding life further from it, then increasing the distance between us and either type of life decreases the badness of the existence of either.

Of course we'd only be certain if we had actually built and sent such a spacecraft. But in order to gain confidence that we're past the filter, and to gain a greater lead over life possibly discovered elsewhere, a design that is agreed to be workable would go most of the way. If it is clear enough that someone with enough capital could claim incredible gains by building it, we can be sure enough that someone (e.g. Elon Musk after SpaceX's IPO around 2035) eventually will, giving high confidence that we've passed the filter.

I'm not sure what would happen if we could say (with more confidence than currently) that we're probably the species that's furthest ahead at least in this galaxy. But if that's true, I don't just want to believe it, I want everyone else to believe it too, because it seems like a fairly important fact. And an SRS design would help do that.

We'd be more sure we're becoming a Type 3 civilization, so we should then begin to think about what type of risk could kill that, and UFAI would probably be more pronounced on that list than it is on the current geocentric ones.

What if we find out SRS are impossible at our pre-AGI level of technology? We still wouldn't know if an AI could do it. But even knowing our own inability would be very useful information, especially about the dangerousness of various types of X-risk.

How easily this X-risk reducing knowledge can be attained

Armstrong and Sandberg claim the feasibility of self-replicating spacecraft has been a settled matter since the Freitas design of 1980. But that paper, while impressively detailed and a great read, glosses over the exact computing abilities such a system would need, does not mention hardening against interstellar radiation, assumes fusion drives, and probably has a bunch of other problems that I'm not qualified to discover. I haven't looked at all the papers that cite it (yet), but the ones I've seen seem to agree that self-replicating spacecraft are plausible. Sandberg has some good research questions that I agree need to be answered, but he never seems to waver from his assumption that SRS are basically possible, although he's aware of the gaps in knowledge that keep that assumption from being safe.

There are certainly some questions that I'm not sure we can answer. For example:

  1. Can we build fission-powered spacecraft (let alone more speculative designs) that will survive the interstellar environment for decades or centuries?
  2. How can we be certain to avoid mutations that grow outside of our control and eventually devour Earth?
  3. Can communication between SRS and colonies, especially software updates, be made secure enough? (One standard answer is sketched below.)
  4. Can a finite number of probe designs (to be included on all of them) provide a vehicle for every type of journey we'd want the SRS network to make?
  5. Can a finite number of colony designs provide a blueprint for every source of matter and negentropy we'd want to develop?
  6. What is the ethical way to treat any life the SRS network might encounter?

But all of these except for the last one, and Sandberg's questions, are engineering questions, and those tend to be answerable. If not, remember: we don't need a functioning SRS to manage X-risk; any reduction of uncertainty around their feasibility already helps. And again, the only design I could find that gives any detail at all is from a single guy writing in 1980. If we merely do better than he did (find or rule out a few of the remaining obstacles), we already help ascertain our level of X-risk. Compare the asteroid detection analogy: we couldn't be certain that we wouldn't be hit by an asteroid until we had looked at all of them, but getting started with part of the search space was a very valuable thing to do anyway.
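Question 3, for instance, has a well-understood engineering outline: every probe carries an immutable copy of mission control's public key and installs only updates whose signature verifies against it. A minimal sketch, assuming the Python `cryptography` package; the key names and the update payload are placeholders:

```python
# Sketch of signed-update verification for question 3. The payload and
# names are hypothetical; only the signature scheme is standard practice.

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# At launch: mission control keeps the private key; probes get the public half.
mission_key = Ed25519PrivateKey.generate()
probe_trusted_key = mission_key.public_key()

# At home: sign the update before transmitting it across interstellar distances.
update = b"navigation-patch-v2"
signature = mission_key.sign(update)

# On the probe: install only if the signature checks out.
try:
    probe_trusted_key.verify(signature, update)
    print("update authenticated, installing")
except InvalidSignature:
    print("update rejected")
```

This doesn't settle the question (key compromise over centuries, and whether updates should be accepted at all, remain open), but it shows the problem is at least partially tractable with known tools.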

Freitas and others used to assume SRS should be run by some type of AGI. Sandberg says SRS without AGI, with what he calls "lower order intelligence", "might be adequate". I disagree with both assessments, and with Sandberg giving this question less priority than, say, the study of mass drivers. Given the issues of AGI safety, a probe that works without AGI should be distinctly preferable. And (unlike an intelligent one) its computational components can be designed right now, down to the decision tree it should follow. While we're at it, and in order to use the publicity such a project might generate, we should give an argument for this design choice that highlights the AGI safety issues. A scenario where a self-replicating computer out there decides for itself should serve to highlight the dangers of AGI far more viscerally than conventional "self-aware desktop box" scenarios.
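To make that concrete, here is a minimal sketch of the kind of fixed, hard-coded decision tree such a non-AGI probe might run. Every sensor field, threshold, and action name below is hypothetical:

```python
# A toy version of a fixed, non-AGI probe controller. All names and
# thresholds are hypothetical; the point is that there is no learning,
# no goal inference, and no self-modification anywhere in the tree.

def next_action(state: dict) -> str:
    """Choose the next action from a hard-coded decision tree."""
    if state["hull_integrity"] < 0.5:
        return "self_repair"
    if not state["at_target_system"]:
        return "continue_cruise"
    if state["detects_life"]:
        return "observe_only"        # hard-coded ethical constraint
    if state["usable_resources"]:
        return "build_replicas"
    return "select_next_system"

print(next_action({
    "hull_integrity": 0.9,
    "at_target_system": True,
    "detects_life": False,
    "usable_resources": True,
}))  # -> build_replicas
```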

If we're not looking for an optimal design, but for the bare minimum necessary to know we're past the Filter, that gives us somewhat relaxed design constraints. This probe wouldn't necessarily need to travel at a significant fraction of light speed, and its first generation wouldn't need to be capable of journeys beyond, say, five parsecs. It does have to be capable of interstellar travel, and of progressing to intergalactic travel at some point, say when it finds that all nearby star systems contain copies of itself. A non-interstellar probe fit to begin the self-replication process on a planet like Jupiter, refining resources and building launch facilities there, would be a necessary first step.

Quickly passing through the great filter

10 James_Miller 06 July 2014 06:50PM

To quickly escape the great filter, should we flood our galaxy with radio signals?  While communicating with fellow humans we already send out massive amounts of information that an alien civilization could eventually pick up, but should we engage in active SETI?  Or, if you fear the attention of dangerous aliens, should we set up powerful, long-lived, solar- or nuclear-powered automated radio transmitters in the desert and in space that stay silent so long as they receive a yearly signal from us, but, if that signal stops arriving because our civilization has fallen, continuously transmit our dead voice to the stars?  If we do destroy ourselves, it would be an act of astronomical altruism to warn other civilizations of our fate, especially if we broadcast news stories from just before our demise, e.g. physicists excited about a new high-energy experiment.  
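The transmitter logic described here is a classic dead man's switch. A minimal sketch of the control loop, with the intervals and both radio functions as hypothetical stubs rather than a real design:

```python
import time

# Sketch of the dead-man's-switch beacon: stay silent while a periodic
# "we're still here" signal arrives; if it stops for over a year, assume
# civilization has fallen and broadcast the archived final records.

CHECK_INTERVAL = 24 * 3600        # check for the keepalive once a day
GRACE_PERIOD = 400 * 24 * 3600    # tolerate a bit over a year of silence

def received_keepalive() -> bool:
    """Stub: did the yearly signal from civilization arrive?"""
    return False  # replace with an actual receiver check

def broadcast_archive() -> None:
    """Stub: transmit the archived news stories toward the stars."""
    pass

def run_beacon() -> None:
    last_contact = time.time()
    while True:
        if received_keepalive():
            last_contact = time.time()
        elif time.time() - last_contact > GRACE_PERIOD:
            # Silence has exceeded the grace period: switch from
            # silence to continuous transmission.
            broadcast_archive()
        time.sleep(CHECK_INTERVAL)
```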


Can we make Drake-like Fermi estimates of expected distance to the next planet with primitive, sentient or self-improving life?

0 chaosmage 10 July 2013 01:34AM

I expect everyone here has an opinion on the Drake Equation. (Comment if I'm wrong.) And that's because it is an easy story to remember and spread. Never mind its glaring inadequacy or the symbols it uses: it gives you a number of alien civilizations and somehow that sticks. I'd like to see if a science meme with similar properties could be created to carry a transhumanist payload. So. Could you convince a random person of the following three points if you wanted to?

  • We're getting increasingly confident estimates on the number and distribution of planets in our galaxy.
  • The other factors in the Drake equation have been discussed a lot - they remain guesses till we find something, but at least they aren't going to change a lot until we do.
  • So we should be able to estimate, very roughly and while mumbling about priors, an expected distance to the next planetary body with primitive life, with sentient life or with self-improving life (i.e. something like AIs that can exponentially grow that biosphere's cognitive capacity).

I think you could. And if you do, and if you can give a number of light-years, regardless of how much you emphasize the low confidence, aliens will suddenly seem more real to that random person. And so will, if not full transhumanism, at least some vague notion that intelligence must grow much like life does. I think that could reach a lot of people.
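To show the shape of that estimate, here is a back-of-the-envelope sketch. Every input is a hypothetical guess, and modeling the galactic disk as a uniform scatter of points is deliberately crude (for very small counts the model breaks down, but the order of magnitude is the point):

```python
import math

# Turn a guessed count of planets with primitive / sentient /
# self-improving life into an expected distance to the nearest one.
# All counts below are hypothetical guesses.

GALAXY_RADIUS_LY = 50_000
DISK_HEIGHT_LY = 1_000
VOLUME = math.pi * GALAXY_RADIUS_LY**2 * DISK_HEIGHT_LY

def expected_distance_ly(n: float) -> float:
    """Mean nearest-neighbor distance for n points scattered uniformly
    in 3D: about 0.554 * (volume per point)^(1/3)."""
    return 0.554 * (VOLUME / n) ** (1 / 3)

for label, n in [("primitive life", 1e6),
                 ("sentient life", 1e3),
                 ("self-improving life", 1.0)]:
    print(f"{label}: ~{expected_distance_ly(n):,.0f} light-years away")
```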

(If anybody complains that the expectation of some Singularity-like development is ideological: no, it is a reasonable guess based on the current evidence, much like Drake's expectation of every technological civilization's eventual self-destruction was reasonable in his Cold War era.)

The brain I'm typing this from knows too little math or astronomy to do this locally, so I'm throwing out the idea. Anyone care to play with this?

[LINK] On the unlikelihood of intelligent life

7 NancyLebovitz 27 March 2013 05:29AM

"The Planet-of-the-Apes Hypothesis" Revisited --Will Intelligence be a Constant in the Universe?

If intelligence is good for every environment, we would see a trend in the encephalization quotient among all organisms as a function of time. The data does not show that. The evidence on Earth points to exactly the opposite conclusion. Earth had independent experiments in evolution thanks to continental drift. New Zealand, Madagascar, India, South America... half a dozen experiments over 10, 20, 50, even 100 million years of independent evolution did not produce anything that was more human-like than when it started. So it's a silly idea to think that species will evolve toward us.

[Link] A superintelligent solution to the Fermi paradox

-1 Will_Newsome 30 May 2012 08:08PM

Here.

Long story short, it's an attempt to justify the planetarium hypothesis as a solution to the Fermi paradox. The first half is a discussion of how it and things like it are relevant to the intended purview of the blog, and the second half is the meat of the post. You'll probably want to just eat the meat, which I think is relevant to the interests of many LessWrong folk.

The blog is Computational Theology. It's new. I'll be the primary poster, but others are sought. I'll likely introduce the blog and more completely describe it in its own discussion post when more posts are up, hopefully including a few from people besides me, and when the archive will give a more informative indication of what to expect from the blog. Despite theism's suspect reputation here at LessWrong I suspect many of the future posts will be of interest to this audience anyway, especially for those of you who take interest in discussion of the singularity. The blog will even occasionally touch on rationality proper. So you might want to store the fact of the blog's existence somewhere deep in the back of your head. A link to the blog's main page can be found on my LessWrong user page if you forget the url.

I'd appreciate it if comments about the substance of the post were made on the blog post itself, but if you want to discuss the content here on LessWrong then that's okay too. Any meta-level comments about presentation, typos, or the post's relevance to LessWrong, should probably be put as comments on this discussion post. Thanks all!

Evidence For Simulation

14 TruePath 27 January 2012 11:07PM

The recent article on Overcoming Bias suggesting the Fermi paradox might be evidence our universe is indeed a simulation prompted me to wonder how one would go about gathering evidence for or against the hypothesis that we are living in a simulation.  The Fermi paradox isn't very good evidence, but there are much more promising places to look.  Of course there is no surefire way to learn that one isn't in a simulation, since nothing prevents a simulation from perfectly simulating a non-simulation universe, but there are certainly features of the universe that seem more likely if the universe were simulated, and their presence or absence thus gives us evidence about whether we are in a simulation.


In particular, the strategy suggested here is to consider the kind of fingerprints we might leave if we were writing a massive simulation.  Of course, the simulating creatures/processes may not labor under the same kinds of restrictions we do when writing simulations (their laws of physics might support fundamentally different computational devices, and any intelligence behind such a simulation might be totally alien).  However, it's certainly reasonable to think we might be simulated by creatures like us, so it's worth checking for the kinds of fingerprints we might leave in a simulation.


Computational Fingerprints

Simulations we write face several limitations on the computational power they can bring to bear on the problem, and these limitations give rise to mitigation strategies we might observe in our own universe.  They include the following:

  1. Lack of access to non-computable oracles (except perhaps physical randomness).

    While theoretically nothing prevents the laws of physics from providing non-computable oracles, e.g., some experiment one could perform that discerns whether a given Turing machine halts (halting problem = 0'), all indications suggest our universe does not provide such oracles.  Thus our simulations are limited to modeling computable behavior.  We would have no way to simulate a universe that had non-computable fundamental laws of physics (except perhaps randomness).

    It's tempting to conclude that the fact that our universe apparently follows computable laws of physics, modulo randomness, provides evidence for us being a simulation, but this isn't entirely clear.  After all, had our laws of physics provided access to non-computable oracles, we would presumably not expect simulations to be so limited either.  Still, this is probably weak evidence for simulation, as such non-computable behavior might well exist in the simulating universe but be practically infeasible to consult in computer hardware.  Thus our probability for seeing non-computable behavior should be higher conditional on not being a simulation than conditional on being a simulation.
  2. Limited ability to access true random sources.

    The most compelling evidence of simulation we could discover would be the signature of a pseudo-random number generator in the outcomes of `random' QM events (a toy version of such a test is sketched after this list).  Of course, as above, the simulating computers might have easy access to truly random number generators, but it's also reasonable that they lack practical access to true random numbers at a sufficient rate.
  3. Limited computational resources. 

    We always want our simulations to run faster and require fewer resources, but we are limited by the power of our hardware.  In response we often resort to less accurate approximations when possible, or otherwise engineer our simulation to require less computation.  This might appear in a simulated universe in several ways.
    • Computationally easy basic laws of physics. For instance, the underlying linearity of QM (absent collapse) is evidence we are living in a simulation, as such computations have low computational complexity.  Another interesting piece of evidence would be discovering that an efficient global algorithm could be used that generates/uses collapse to speed computation.
    • Limited detail/minimal feature size.  An efficient simulation would be as coarse-grained as possible while still yielding the desired behavior.  Since we don't know what the desired behavior might be for a universe simulation it's hard to evaluate this criterion, but the indications that space is fundamentally quantized (rather than allowing structure at arbitrarily small scales) seem to be evidence for simulation.
    • Substitution of approximate calculations for expensive calculations in certain circumstances.  Weak evidence could be gained here merely by observing that the large-scale behavior of the universe admits efficient, accurate approximations, but the key piece of data to support a simulated universe would be observations revealing that sometimes the universe behaves as if it were following a less accurate approximation rather than what fundamental physics prescribes.  For instance, discovering that distant galaxies behave like a classical approximation rather than a quantum system would be extremely strong evidence. 
    • Ability to screen off or delay calculations in regions that aren't of interest.  A simulation would be more efficient if it allowed regions of less interest to go unsimulated, or at least delayed that simulation, without impacting the regions of greater interest.  While the finite speed of light arguably provides a way to delay simulation of regions of lesser interest, QM's preservation of information and space-like quantum correlations may outweigh the finite speed of light on this point, tipping it towards non-simulation.
  4. Limitations on precision.

    Arguably this is just a variant of 3, but it has some different considerations.  As with 3, we would expect a simulation to bottom out and not provide arbitrarily fine-grained structure, but in simulations precision issues also bring with them questions of stability.  If the laws of physics turn out to be relatively unaffected by tiny computational errors, that would push in the direction of simulation, but if they are chaotic and quickly spiral out of control in response to these errors, it would push against simulation.  Since linear systems are virtually always stable, the linearity of QM is yet again evidence for simulation.
  5. Limitations on sequential processing power.

    We find that finite speed limits on communication and other barriers prevent building arbitrarily fast single-core processors.  Thus we would expect a simulation to be more likely to admit highly parallel algorithms.  While the finite speed of light provides some level of parallelizability (there's no need to share all information with all processing units immediately), space-like QM correlations push against parallelizability.  However, given the linearity of QM, the most efficient parallel algorithms might well be semi-global algorithms like those used for various kinds of matrix manipulation.  It would be most interesting if collapse could be shown to be a requirement or byproduct of such efficient algorithms.
  6. Imperfect hardware

    Finally, there is the hope that one might discover something like the Pentium division bug in the behavior of the universe.  Similarly, one might hope to discover unexplained correlations in deviations from normal behavior, e.g., correlations that occur at evenly spaced locations relative to some frame of reference, arising from transient errors in certain pieces of hardware.
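As a toy version of the pseudo-random signature hunt from point 2: if a supposedly random integer sequence actually comes from a linear congruential generator with a known modulus, its parameters can be recovered from a few consecutive outputs and then used to predict the rest. The generator, modulus, and seed below are all hypothetical.

```python
# Hunt for an LCG "signature": recover (a, c) of x' = (a*x + c) % m
# from three consecutive outputs, then check it predicts the sequence.

def fit_lcg(xs, m):
    """Try to recover (a, c) of an LCG with modulus m."""
    d = (xs[1] - xs[0]) % m
    try:
        # a = (x2 - x1) * (x1 - x0)^-1 mod m; needs d invertible mod m
        a = ((xs[2] - xs[1]) * pow(d, -1, m)) % m
    except ValueError:  # d not invertible
        return None
    c = (xs[1] - a * xs[0]) % m
    return a, c

def looks_like_lcg(xs, m):
    params = fit_lcg(xs, m)
    if params is None:
        return False
    a, c = params
    return all((a * x + c) % m == y for x, y in zip(xs, xs[1:]))

# A sequence secretly produced by an LCG; a genuinely random source
# would essentially never pass this check.
seq = [7]
for _ in range(9):
    seq.append((1103515245 * seq[-1] + 12345) % 2**31)
print(looks_like_lcg(seq, 2**31))  # True: the "randomness" has a signature
```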

Software Fingerprints

Another type of fingerprint that might be left results from the conceptual/organizational difficulties occurring in the software design process.  For instance, we might find fingerprints by looking for:

  1. Outright errors, particularly hard-to-spot errors like race conditions.  Such errors might leak information about other parts of the software design that would let us distinguish them from non-simulation physical effects.  For instance, if an error occurs in a pattern that is reminiscent of a loop a simulation might execute, but doesn't correspond to any plausible physical law, that would be good evidence that it was truly an error.
  2. Conceptual simplicity in design.  We might expect (as we apparently see) an easily drawn line between initial conditions and the rules of the simulation, rather than physical laws which can't be so easily divided up, e.g., laws that take the form of global constraint satisfaction.  Also, relatively short laws, rather than a long regress into greater and greater complexity at higher and higher energies, would be expected in a simulation (but would be very, very weak evidence).
  3. Evidence of concrete representations.  Even though mathematically relativity favors no reference frame over another, it is often conceptually and computationally desirable to compute in a particular reference frame (just as it's often best to do linear algebra on a computer relative to an explicit basis).  One might see evidence for such an effect in differences in the precision of results, or in rounding artifacts (like those seen in resized images).

Design Fingerprints

This category is so difficult I'm not really going to say much about it, but I'm including it for completeness.  If our universe is a simulation created by some intentional creature, we might expect to see certain features receive more attention than others.  Maybe we would see some really odd jiggering of initial conditions just to make sure certain events of interest occurred, but without a good idea of what is of interest, it is hard to see how this could be detected.  Another potential way for design fingerprints to show up is in the ease of data collection from the simulation.  One might expect a simulation to make it particularly easy to sift the interesting information out of the rest of the data, but again we don't have any idea what "interesting" might be.


Other Fingerprints

I'm hoping readers will suggest some interesting new ideas as to what one might look for if one were serious about gathering evidence about whether we are in a simulation.

What would an ultra-intelligent machine make of the great filter?

-3 James_Miller 28 November 2010 06:47PM


Imagine that an ultra-intelligent machine emerges from an intelligence explosion.  The AI (a) finds no trace of extraterrestrial intelligence, (b) calculates that many star systems should have given birth to starfaring civilizations, so mankind hasn't passed through most of the Hanson/Grace great filter, and (c) realizes that with trivial effort it could immediately send out self-replicating von Neumann machines that could make the galaxy more to its liking.  

Based on my admittedly limited reasoning abilities and information set, I would guess that the AI would conclude that the zoo hypothesis is probably the solution to the Fermi paradox, and that because stars don't appear to have been "turned off", either free energy is not a limiting factor (so the laws of thermodynamics are incorrect) or we are being fooled into thinking that stars unnecessarily "waste" free energy (perhaps because we are in a computer simulation).