Followup to: This Failing Earth; Our society lacks good self-preservation mechanisms; Is short term planning in humans due to a short life or due to bias?
I don't mean that deciding to exterminate life is rational. But if, as a society of rational agents, we each maximize our expected utility, this may inevitably lead to our exterminating life, or at least intelligent life.
Ed Regis reports on p. 216 of “Great Mambo Chicken and the TransHuman Condition” (Penguin Books, London, 1992):
Edward Teller had thought about it, the chance that the atomic explosion would light up the surrounding air and that this conflagration would then propagate itself around the world. Some of the bomb makers had even calculated the numerical odds of this actually happening, coming up with the figure of three chances in a million they’d incinerate the Earth. Nevertheless, they went ahead and exploded the bomb.
Was this a bad decision? Well, consider the expected value to the people involved. Without the bomb, there was a much, much greater than 3/1,000,000 chance that either a) they would be killed in the war, or b) they would be ruled by Nazis or the Japanese. The loss to them if they ignited the atmosphere would be another 30 or so years of life. The loss to them if they were killed by their enemies would also be another 30 or so years of life, and the loss from being conquered, even if they survived, would also be large. Easy decision, really.
Suppose that, once a century, some party in a conflict chooses to use some technique to help win the conflict that has a p = 3/1,000,000 chance of eliminating life as we know it. Then our expected survival time in years is 100 times the sum from n=1 to infinity of n·p·(1-p)^(n-1), which works out to 100/p. If I've done my math right, that's ≈ 33,333,000 years.
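For anyone who wants to check that arithmetic, here is a minimal sketch in Python, assuming only the one-gamble-per-century setup above:

```python
# Expected survival time, assuming one independent p = 3/1,000,000
# gamble per century (the supposition above).
p = 3e-6

# The number of centuries until the fatal gamble is geometrically
# distributed; its mean is the sum over n of n*p*(1-p)**(n-1) = 1/p.
expected_centuries = 1 / p
expected_years = 100 * expected_centuries
print(f"{expected_years:,.0f} years")  # ~33,333,333 years
```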
This supposition seems reasonable to me. There is a balance between offensive and defensive capability that shifts as technology develops. If technology keeps changing, it is inevitable that, much of the time, a technology will provide the ability to destroy all life before the counter-technology to defend against it has been developed. In the near future, biological weapons will be more able to wipe out life than we are able to defend against them. We may then develop the ability to defend against biological attacks; we may then be safe until the next dangerous technology.
If you believe in accelerating change, then the number of important events in a given time interval increases exponentially, or, equivalently, the time intervals that should be considered equivalent opportunities for important events shorten exponentially. The 33M years remaining to life is then in subjective time, and must be mapped into realtime. If we suppose the subjective/real time ratio doubles every 100 years, this gives life an expected survival time of roughly 2000 more realtime years. If we instead use Ray Kurzweil's doubling time of about 2 years, this gives life about 40 remaining realtime years. (I don't recommend Ray's figure. I'm just giving it for those who do.)
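One way to make that mapping concrete, as a sketch under my assumptions: take the subjective/real time ratio to be 1 today, let it double every D realtime years, and integrate until the expected number of subjective years has accumulated. The helper function below is purely illustrative:

```python
import math

def realtime_years(subjective_years, doubling_years):
    """Realtime years needed to accumulate `subjective_years`, assuming
    the subjective/real time ratio starts at 1 and doubles every
    `doubling_years` realtime years."""
    D = doubling_years
    # Integrating the ratio 2**(t/D) from 0 to T gives
    # (D / ln 2) * (2**(T/D) - 1) subjective years; solve that for T.
    return D * math.log2(subjective_years * math.log(2) / D + 1)

print(realtime_years(33_333_333, 100))  # ~1,800 realtime years (the ~2000 above)
print(realtime_years(33_333_333, 2))    # ~47 realtime years (the ~40 above)
```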
Please understand that I am not yet another "prophet" bemoaning the foolishness of humanity. Just the opposite: I'm saying this is not something we will outgrow. If anything, becoming more rational only makes our doom more certain. For the agents who must actually make these decisions, it would be irrational not to take these risks. The fact that this level of risk-tolerance will inevitably lead to the snuffing out of all life does not make the expected utility of these risks negative for the agents involved.
I can think of only a few ways that rationality might not inevitably exterminate all life in the cosmologically (even geologically) near future:
- We can outrun the danger: We can spread life to other planets, and to other solar systems, and to other galaxies, faster than we can spread destruction.
- Technology will not continue to develop, but will stabilize in a state in which all defensive technologies provide absolute, 100%, fail-safe protection against all offensive technologies.
- People will stop having conflicts.
- Rational agents incorporate the benefits to others into their utility functions.
- Rational agents with long lifespans will protect the future for themselves.
- Utility functions will change so that it is no longer rational for decision-makers to take tiny chances of destroying life for any amount of utility gains.
- Independent agents will cease to exist, or to be free (the Singleton scenario).
Let's look at these one by one:
We can outrun the danger.
We will colonize other planets; but we may also figure out how to make the Sun go nova on demand. We will colonize other star systems; but we may also figure out how to liberate much of the energy in the black hole at the center of our galaxy in a giant explosion that will move outward at near the speed of light.
One problem with this idea is that apocalypses are correlated; one may trigger another. A disease may spread to another planet. The choice to use a planet-busting bomb on one planet may lead to its retaliatory use on another planet. It's not clear whether spreading out and increasing in population actually makes life safer. If you think in the other direction, a smaller human population (say ten million) stuck here on Earth would be safer from human-instigated disasters.
But neither of those are my final objection. More important is that our compression of subjective time can be exponential, while our ability to escape from ever-broader swaths of destruction is limited by lightspeed.
Technology will stabilize in a safe state.
Maybe technology will stabilize, and we'll run out of things to discover. If that were to happen, I would expect conflicts to increase, because people would get bored. As I mentioned in another thread, one good explanation for the incessant and counterproductive wars of the Middle Ages - a reason some of the actors themselves gave in their writings - is that the nobility were bored. They did not have the concept of progress; they were just looking for something to give them purpose while waiting for Jesus to return.
But that's not my final rejection. The big problem is that by "safe", I mean really, really safe. We're talking about bringing existential threats to chances less than 1 in a million per century. I don't know of any defensive technology that can guarantee a less than 1 in a million failure rate.
People will stop having conflicts.
That's a nice thought. A lot of people - maybe the majority of people - believe that we are inevitably progressing along a path to less violence and greater peace.
They thought that just before World War I. But that's not my final rejection. Evolutionary arguments are a more powerful reason to believe that people will continue to have conflicts. Those that avoid conflict will be out-competed by those that do not.
But that's not my final rejection either. The bigger problem is that this isn't something that arises only in conflicts. All we need are desires. We're willing to tolerate risk to increase our utility. For instance, we're willing to take some unknown, but clearly greater than one-in-a-million, chance of the collapse of much of civilization due to climate warming. In return for this risk, we can enjoy a better lifestyle now.
Also, we haven't burned all physics textbooks along with all physicists. Yet I'm confident there is at least a one in a million chance that, in the next 100 years, some physicist will figure out a way to reduce the earth to powder, if not to crack spacetime itself and undo the entire universe. (In fact, I'd guess the chance is nearer to 1 in 10.)1 We take this existential risk in return for a continued flow of benefits such as better graphics in Halo 3 and smaller iPods. And it's reasonable for us to do this, because an improvement in utility of 1% over an agent's lifespan is, to that agent, exactly balanced by a 1% chance of destroying the Universe.
The Wikipedia entry on Large Hadron Collider risk says, "In the book Our Final Century: Will the Human Race Survive the Twenty-first Century?, English cosmologist and astrophysicist Martin Rees calculated an upper limit of 1 in 50 million for the probability that the Large Hadron Collider will produce a global catastrophe or black hole." The more authoritative "Review of the Safety of LHC Collisions" by the LHC Safety Assessment Group concluded that there was at most a 1 in 10^31 chance of destroying the Earth.
The LHC risk estimates are criminally low. Their evidence was this: "Nature has already conducted the LHC experimental programme about one billion times via the collisions of cosmic rays with the Sun - and the Sun still exists." There followed a couple of sentences of handwaving to the effect that if any other stars had turned to black holes due to collisions with cosmic rays, we would know it - apparently due to our flawless ability to detect black holes and ascertain what caused them - and that therefore we can multiply this figure by the number of stars in the universe.
I believe there is much more than a one-in-a-billion chance that our understanding of one of the steps used in arriving at these figures is incorrect. Based on my experience with peer-reviewed papers, there's at least a one-in-ten chance that there's a basic arithmetic error in their paper that no one has noticed yet. I'm thinking the real risk is more like one in a million, once you correct for the anthropic principle and for the chance that there is a mistake in the argument. (That's based on a belief that the prior for anything likely enough that smart people even thought of the possibility should be larger than one in a billion, unless they were specifically trying to think of examples of low-probability possibilities, such as all of the air molecules in the room moving to one side.)
The Trinity test was done for the sake of winning World War II. But the LHC was turned on for... well, no practical advantage that I've heard of yet. It seems that we are willing to tolerate one-in-a-million chances of destroying the Earth for very little benefit. And this is rational, since the LHC will probably improve our lives by more than one part in a million.
Rational agents incorporate the benefits to others into their utility functions.
"But," you say, "I wouldn't risk a 1% chance of destroying the universe for a 1% increase in my utility!"
Well... yes, you would, if you're a rational expectation maximizer. It's possible that you would take a much higher risk, if your utility is at risk of going negative; but it's not possible that you would refuse a .99% risk, unless you are not maximizing expected utility, or you assign negative utility to the null state after universe-destruction. (That seems difficult, but is worth exploring.) If you still think that you wouldn't, it's probably because you're thinking a 1% increase in your utility means something like a 1% increase in the pleasure you experience. It doesn't. It's a 1% increase in your utility. If you factor the rest of your universe into your utility function, then it's already in there.
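Here is the break-even arithmetic as a minimal sketch, assuming (as above) that the post-destruction state has utility 0 and that baseline lifetime utility is positive:

```python
# Break-even risk a rational expected-utility maximizer would accept
# for a 1% increase in lifetime utility, assuming the post-destruction
# state has utility 0 and baseline lifetime utility U > 0.
U = 1.0
gain = 0.01

# Accept the gamble iff (1 - risk) * U * (1 + gain) >= U,
# i.e. risk <= 1 - 1/(1 + gain).
breakeven_risk = 1 - 1 / (1 + gain)
print(f"{breakeven_risk:.4%}")  # ~0.9901%, i.e. just under 1%
```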
The US national debt should be enough to convince you that people act in their self-interest. Even the most moral people - in fact, especially the "most moral" people - do not incorporate the benefits to others, especially future others, into their utility functions. If we did that, we would engage in massive eugenics programs. But eugenics is considered the greatest immorality.
But maybe they're just not as rational as you. Maybe you really are a rational saint who considers your own pleasure no more important than the pleasure of everyone else on Earth. Maybe you have never, ever bought anything for yourself that did not bring you as much benefit as the same amount of money would if spent to repair cleft palates or distribute vaccines or mosquito nets or water pumps in Africa. Maybe it's really true that, if you met the girl of your dreams and she loved you, and you won the lottery, put out an album that went platinum, and got published in Science, all in the same week, it would make an imperceptible change in your utility versus if everyone you knew died, Bernie Madoff spent all your money, and you were unfairly convicted of murder and diagnosed with cancer.
It doesn't matter. Because you would be adding up everyone else's utility, and everyone else is getting that 1% extra utility from the better graphics cards and the smaller iPods.
But that will stop you from risking atmospheric ignition to defeat the Nazis, right? Because you'll incorporate them into your utility function? Well, that is a subset of the claim "People will stop having conflicts." See above.
And even if you somehow worked around all these arguments, evolution, again, thwarts you.2 Even if you don't agree that rational agents are selfish, your unselfish agents will be out-competed by selfish agents. The claim that rational agents are not selfish implies that rational agents are unfit.
Rational agents with long lifespans will protect the future for themselves.
The most familiar idea here is that, if people expect to live for millions of years, they will be "wiser" and take fewer risks with that time. But the flip side is that they also have more time to lose. If they're deciding whether to risk igniting the atmosphere in order to lower the risk of being killed by Nazis, lifespan cancels out of the equation.
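The cancellation is easy to see if you write the comparison out. A minimal sketch, where the enemy-risk figure is a placeholder rather than an estimate:

```python
def prefers_gamble(p_gamble_kills_everyone, p_killed_by_enemy, lifespan_years):
    # Expected years of life lost on each branch. The lifespan multiplies
    # both sides, so it drops out of the comparison.
    return (p_gamble_kills_everyone * lifespan_years
            < p_killed_by_enemy * lifespan_years)

print(prefers_gamble(3e-6, 0.10, 30))         # True for a 30-year horizon
print(prefers_gamble(3e-6, 0.10, 1_000_000))  # still True for a million-year lifespan
```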
Also, if they live a million times longer than us, they're going to get a million times the benefit of those nicer iPods. They may be less willing to take an existential risk for something that will benefit them only temporarily. But benefits have a way of increasing, not decreasing, over time. The discoveries of the law of gravity and of the invisible hand benefit us in the 21st century more than they did the people of the 17th and 18th centuries.
But that's not my final rejection. More important is time-discounting. Agents will time-discount, probably exponentially, due to uncertainty. If you considered benefits to the future without exponential time-discounting, the benefits to others and to future generations would outweigh any benefits to yourself so much that in many cases you wouldn't even waste time trying to figure out what you wanted. And, since future generations will be able to get more utility out of the same resources, we'd all be obliged to kill ourselves, unless we reasonably think that we are contributing to the development of that capability.
Time discounting is always (so far) exponential, because a discount function that doesn't go to zero asymptotically doesn't make sense. I suppose you could use a trigonometric function instead for time discounting, but I don't think it would help.
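A small illustration of why that asymptotic behavior matters, assuming a constant per-century discount factor (the numbers are arbitrary):

```python
# Total discounted value of a constant benefit stream of 1 per century.
# With exponential discounting (factor d < 1) the total converges to
# 1/(1 - d); with no discounting (d = 1) it grows without bound.
def discounted_total(d, centuries):
    return sum(d**n for n in range(centuries))

print(discounted_total(0.99, 10_000))  # ~100.0, bounded by 1/(1 - 0.99)
print(discounted_total(1.00, 10_000))  # 10000.0, and still growing
```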
Could a continued exponential population explosion outweigh exponential time-discounting? Well, you can't have a continued exponential population explosion, because of the speed of light and the Planck constant. (I leave the details as an exercise to the reader.)
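For what it's worth, here is one way to sketch that exercise, under the illustrative assumptions that population doubles every 100 years while the reachable volume - and hence any density-limited population cap - grows only as t cubed:

```python
import itertools

# Population doubling every 100 years vs. a cap growing as t**3
# (units are arbitrary; only the shapes of the curves matter).
for t in itertools.count(100, 100):
    population = 2 ** (t / 100)
    cap = t ** 3
    if population > cap:
        print(f"exponential growth outruns the cubic cap by year {t}")
        break  # happens around t = 3600 in these units
```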
Also, even if you had no time-discounting, I think that a rational agent must do identity-discounting. You can't stay you forever. If you change, the future you will be less like you, and weigh less strongly in your utility function. Objections to this generally assume that it makes sense to trace your identity by following your physical body. Physical bodies will not have a 1-1 correspondence with personalities for more than another century or two, so just forget that idea. And if you don't change, well, what's the point of living?
Evolutionary arguments may help us with self-discounting. Evolutionary forces encourage agents to emphasize continuity or ancestry over resemblance in an agent's selfness function. The major variable is reproduction rate over lifespan. This applies to genes or memes. But they can't help us with time-discounting.
I think there may be a way to make this one work. I just haven't thought of it yet.
A benevolent singleton will save us all.
This case takes more analysis than I am willing to do right now. My short answer is that I place a very low expected utility on singleton scenarios. I would almost rather have the universe eat, drink, and be merry for 33 million years, and then die.
I'm not ready to place my faith in a singleton. I want to work out what is wrong with the rest of this argument, and how we can survive without a singleton.
(Please don't conclude from my arguments that you should go out and create a singleton. Creating a singleton is hard to undo. It should be deferred nearly as long as possible. Maybe we don't have 33 million years, but this essay doesn't give you any reason not to wait a few thousand years at least.)
In conclusion
I think that the figures I've given here are conservative. I expect existential risk to be much greater than 3/1,000,000 per century. I expect there will continue to be externalities that cause suboptimal behavior, so that the actual risk will be greater even than the already-sufficient risk that rational agents would choose. I expect population and technology to continue to increase, and existential risk to be proportional to population times technology. Existential risk will very possibly increase exponentially, on top of the subjective-time exponential.
Our greatest chance for survival is that there's some other possibility I haven't thought of yet. Perhaps some of you will.
1 If you argue that the laws of physics may turn out to make this impossible, you don't understand what "probability" means.
2 Evolutionary dynamics, the speed of light, and the Planck constant are the three great enablers and preventers of possible futures, which enable us to make predictions farther into the future and with greater confidence than seem intuitively reasonable.
You are starting from the premise that gray goo scenarios are likely, and trying to rationalize your belief.
Yes, we can be clever and think of humans as green goo - the ultimate in green goo, really. That isn't what we're talking about and you know it - yes, intelligent life can spread out everywhere, that isn't what we're worried about. We're worried about unintelligent things wiping out intelligent things.
The Great Oxygenation Event is not actually an example of a green goo type scenario, though it is an interesting thing to consider - I'm not sure there is even a generalized term for that kind of scenario, as it was essentially slow atmospheric poisoning. It would be more of a generalized biocide scenario: the cyanobacteria which caused the Great Oxygenation Event created something that was incidentally toxic to other things, but the toxicity was purely incidental to their own activity, probably didn't even benefit most of them directly (the toxicity of the oxygen they produced probably didn't help them personally), and what actually took over afterwards were things rather different from what came before, many of which were not descended from those cyanobacteria.
It was a major atmospheric change, and is (theoretically) a danger, though I'm not sure how much of an actual danger it is in the real world - we saw the atmosphere shift to an oxygen-dominated one, but I'm not sure how you'd do it again, as I'm not sure there's anything else that can be freed en masse which is toxic - better oxidizers than oxygen are hard to come by, and by their very nature are rather difficult to liberate from an energy-balance standpoint. It seems likely that our atmosphere is oxygen-based, and not, say, chlorine- or fluorine-based, for a reason arising from the physics of liberating those elements from chemical compounds.
As far as repeated green goo scenarios prior to 600Mya go - I think that's pretty unlikely, honestly. Looking at microbial diversity and microbial genomes, we see that the domains of life are ridiculously ancient, and that diversity goes back an enormously long distance in time. It seems very unlikely that repeated green goo type scenarios would spare the amount of diversity we actually see in the real world. Eukaryotic life arose 1.6-2.1Bya, and as far as multicellular life goes, we have evidence of cyanobacteria which showed signs of multicellularity 3Bya.
That's a long, long time, and it seems unlikely that repeated green goo scenarios are what kept life simple. It seems more likely that what kept life simple was the fact that complexity is hard - indeed, I suspect the big advancement was actually major advancements in the modularity of life. The more modular life becomes, the easier it is to evolve quickly and adapt to new circumstances, but modularity arising from non-modularity is something which is pretty tough to sort out. Once things did sort it out, though, we saw a massive explosion in diversity. Evolving to be better at evolving is a good strategy for continuing to exist, and I suspect that complex multicellular life only came to exist when stuff got to the point where this could happen.
If we saw repeated green goo scenarios, we'd expect the various branches of life to be pretty shallow - even if some diversity survived, we'd expect each diverse group to show a major bottleneck back at whenever the last green goo occurred. But that's not what we actually see. Fungi and animals diverged about 1.5 Bya, for instance, and other eukaryotic diversity arose even prior to that. Animals have been diverging for 1.2 billion years.
It seems unlikely, then, that there have been any green goo scenarios in a very, very long time, if indeed they ever did occur. Indeed, it seems likely that life evolved to prevent said scenarios, and did so successfully, as none have occurred in a very, very, very long time.
Pestilence is not even close to green goo. Yes, introducing a new disease into a new species can be very nasty, but it almost never actually is, as most of the time, it just doesn't work at all. Even amongst the same species, Smallpox and other old-world diseases wiped out the Native Americans, but Native American diseases were not nearly so devastating to the old-worlders.
Most things which try to jump the species barrier have a great deal of difficulty in doing so, and even when they successfully do so, their virulence ends up dropping over time, because being ridiculously fatal is actually bad for their own continued propagation. And humans have become increasingly better at stopping this sort of thing. I did note engineered plagues as the most likely technological threat, but comparing them to gray goo scenarios is very silly - pathogens are enormously easier to control. The trouble with stuff like gray goo is that it just keeps spreading, but a pathogen requires a host - there are all sorts of barriers in place against pathogens, and everything is evolved to deal with pathogens, even novel ones, because things which are more likely to survive exposure to novel pathogens are more likely to pass on their genes in the long term.
With regards to "intelligent viral networks" - this is just silly. Life on earth is NOT the result of intelligence. You can tell this from our genomes. There are no signs of engineering ANYWHERE in us; no signs of intelligent design.
The gray goo scenario is predicated on the sort of thinking common in bad scifi.
Basically, in scifi the nanotech self-replicators which eat everything in their path are created in one step. As opposed to a realistic depiction of technological progress, where the first nanotech replicators have to sit in a batch of special nutrients and be microwaved, or otherwise provided energy, while being kept perfectly sterile (to keep bacteria from eating your nanotech). Then they'd get gradually improved in a great many steps and find many uses ranging from cancer cures to dishwashe...