
Estimating the probability of human extinction

5 Post author: philosophytorres 17 February 2016 04:19PM

I'm looking for feedback on the following idea. The article from which it's been excerpted can be found here: http://ieet.org/index.php/IEET/more/torres20120213

"But not only has the number of scenarios increased in the past 71 years; many riskologists believe that the probability of a global disaster has also significantly risen. Whereas the likelihood of annihilation for most of our species’ history was extremely low, Nick Bostrom argues that “setting this probability lower than 25% [this century] would be misguided, and the best estimate may be considerably higher.” Similarly, Sir Martin Rees claims that a civilization-destroying event before the year 02100 is as likely as getting a “heads” after flipping a coin. These are only two opinions, of course, but to paraphrase the Russell-Einstein Manifesto, my experience confirms that those who know the most tend to be the most gloomy."

"I [would] argue that Rees’ figure is plausible. To adapt a maxim from the philosopher David Hume, wise people always proportion their fears to the best available evidence, and when one honestly examines this evidence, one finds that there really is good reason for being alarmed. But I also offer a novel — to my knowledge — argument for why we may be systematically underestimating the overall likelihood of doom. In sum, just as a dog can’t possibly comprehend any of the natural and anthropogenic risks mentioned above, so too could there be risks that forever lie beyond our epistemic reach. All biological brains have intrinsic limitations that constrain the library of concepts to which one has access. And without concepts, one can’t mentally represent the external world. It follows that we could be “cognitively closed” to a potentially vast number of cosmic risks that threaten us with total annihilation. This being said, one might argue that such risks, if they exist at all, must be highly improbable, since Earth-originating life has existed for some 3.5 billion years without an existential catastrophe having happened. But this line of reasoning is deeply flawed: it fails to take into account that the only worlds in which observers like us could find ourselves are ones in which such a catastrophe has never occurred. It follows that a record of past survival on our planetary spaceship provides no useful information about the probability of certain existential disasters happening in the future. The facts of cognitive closure plus the observation selection effect suggest that our probability conjectures of total annihilation may be systematically underestimated, perhaps by a lot."

 

Thoughts?

Comments (33)

Comment author: OrphanWilde 17 February 2016 06:11:15PM 4 points [-]

My prior on humans going extinct in the next century is less than .05%, because in the last 2,000 centuries, we haven't. My prior on civilization ending in the next century is less than 1.6%, because in the last 60 centuries, it hasn't.
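The arithmetic behind these priors looks like a simple frequency bound; a minimal sketch, assuming the 1-in-n reading (my reconstruction, not necessarily OrphanWilde's actual method):

```python
# Crude survival-record prior (a reconstruction for illustration): with n
# periods survived and zero observed extinctions, a simple upper bound on
# the per-period extinction probability is 1/n.
def survival_prior(periods_survived: int) -> float:
    """Per-period event probability bound from an unbroken survival record."""
    return 1.0 / periods_survived

species_prior = survival_prior(2000)      # 2,000 centuries survived -> 0.0005 (0.05%)
civilization_prior = survival_prior(60)   # 60 centuries survived -> ~0.0167 (~1.7%)
```

Note this is exactly the kind of estimate the anthropic objection in the next paragraph undermines: the record of survival is guaranteed for any observer making the calculation.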

I have, however, absolutely no faith in those priors, because in any universe in which either of those things happened, nobody is asking those questions.

As for evidence that would update those priors? Well... I see no evidence of anything that could or would end our species, so I have nothing to update on there. I see some evidence of things that could end our civilization, so that could be updated slightly higher. But my error bars dominate the question.

So I'm going to say <1% odds of humans (or rather, human minds) going extinct in the next century. Indeed, I'd bet my life it won't happen.

Comment author: SoerenE 18 February 2016 07:11:28AM 1 point [-]

Could you elaborate on why you consider p(UFAI before 2116) < 0.01? I am genuinely interested.

Comment author: OrphanWilde 18 February 2016 01:39:37PM 6 points [-]

I consider a runaway process by which any AI ascends into godhood through recursive self-improvement of its intelligence to be... vaguely magical, by which I mean that while every word in that sentence makes sense, as a whole that sentence doesn't refer to anything. The heavy lifting is done by poorly-defined abstractions and assumptions.

Unfriendly AI, by the metrics I consider meaningful, already exists. It just isn't taking over the world.

Comment author: SoerenE 18 February 2016 08:14:58PM *  1 point [-]

Some of the smarter (large, naval) landmines are arguably both intelligent and unfriendly. Let us use the standard AI risk metric.

I feel that your sentence does refer to something: A hypothetical scenario. ("Godhood" should be replaced with "Superintelligence").

Is it correct that the sentence can be divided into these 4 claims?:

  1. An AI self-improves its intelligence
  2. The self-improvement becomes recursive
  3. An AI reaches superintelligence through 1 and 2
  4. This can happen in a process that can be called "runaway"

Do you mean that one of the probabilities is extremely small? (E.g., p(4 | 1 and 2 and 3) = 0.02). Or do you mean that the statement is not well-formed? (E.g., intelligence is poorly defined by the AI risk theory)

Comment author: OrphanWilde 18 February 2016 08:47:38PM 4 points [-]

Intelligence is poorly defined, for a start, and artificial intelligence doubly so; think about the number of times we've redefined "AI" after achieving what we previously called "AI".

"Recursive self-improvement" is also poorly-defined; as an example, we have recursive self-improving AIs right now, in the form of self-training neural nets.

Superintelligence is even less well-defined, which is why I prefer the term "godhood", which I regard as more honest in its vagueness. It may also be illusory; most of us on Less Wrong are here in part because of boredom, because intelligence isn't nearly as applicable in daily life as we'd need it to be to stay entertained; does intelligence have diminishing returns?

We can tell that some people are smarter than other people, but we're not even certain what that means, except that they do better by the measurement we measure them by.

Comment author: SoerenE 19 February 2016 08:07:58AM *  -1 points [-]

Intelligence, Artificial Intelligence and Recursive Self-improvement are likely poorly defined. But since we can point to concrete examples of all three, this is a problem in the map, not the territory. These things exist, and different versions of them will exist in the future.

Superintelligences do not exist, and it is an open question if they ever will. Bostrom defines superintelligences as "an intellect that is much smarter than the best human brains in practically every field, including scientific creativity, general wisdom and social skills." While this definition has a lot of fuzzy edges, it is conceivable that we could one day point to a specific intellect, and confidently say that it is superintelligent. I feel that this too is a problem in the map, not the territory.

I was wrong to assume that you meant superintelligence when you wrote godhood, and I hope that you will forgive me for sticking with "superintelligence" for now.

Comment author: Lumifer 18 February 2016 08:43:42PM *  4 points [-]

Is it correct that the sentence can be divided into these 4 claims?:

You are missing an important claim: that the process of recursive self-improvement does not encounter any constraints, impediments, roadblocks, etc.

Consider the analogy of your 1. and 2. for human reproduction.

Comment author: SoerenE 19 February 2016 07:22:35AM *  0 points [-]

I meant claim number 3 to be a sharper version of your claim: The AI will meet constraints, impediments and roadblocks, but these are overcome, and the AI reaches superintelligence.

Could you explain the analogy with human reproduction?

Comment author: Lumifer 19 February 2016 03:47:57PM 3 points [-]

Ah, so you meant the accent in 3. to be on "reaches", not on "super"?

The analogy looks like this: 1. Humans multiply, they self-improve their numbers; 2. The reproduction is recursive -- the larger a generation is, the larger the next one will be. Absent constraints, the growth of a population is exponential.
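The analogy can be sketched in a few lines; the numbers here are purely illustrative, not historical data:

```python
# Recursive, unconstrained growth is exponential: each generation is
# proportionally larger than the one before it.
def project_population(start: int, growth_factor: float, generations: int) -> int:
    """Project a population forward under a constant growth factor."""
    pop = float(start)
    for _ in range(generations):
        pop *= growth_factor  # next generation scales with the current one
    return int(pop)

# Doubling each generation: 1,000 individuals become 1,024,000 after 10.
projected = project_population(1000, 2.0, 10)
```

The point of the analogy is in the "absent constraints" clause: real populations (and, arguably, real self-improving systems) run into resource limits long before the exponential plays out.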

Comment author: SoerenE 19 February 2016 08:00:50PM *  0 points [-]

English is not my first language. I think I would put the accent on "reaches", but I am unsure what would be implied by having the accent on "super". I apologize for my failure to write clearly.

I now see the analogy with human reproduction. Could we stretch the analogy to claim 3, and call some increases in human numbers "super"?

The lowest estimate of the historical number of humans I have seen is from https://en.wikipedia.org/wiki/Population_bottleneck , claiming as few as 2,000 humans for 100,000 years. Human numbers will probably reach a (mostly cultural) limit of 10,000,000,000. I feel that this development in human numbers deserves to be called "super".

The analogy could perhaps even be stretched to claim 4 - some places at some times could be characterized by "runaway population growth".

Comment author: Lumifer 19 February 2016 08:28:56PM 2 points [-]

Could we stretch the analogy to claim 3, and call some increases in human numbers "super"?

I don't know -- it all depends on what you consider "super" :-) Populations of certain organisms oscillate with much greater magnitude than humans -- see e.g. algae blooms.

Comment author: SoerenE 20 February 2016 03:20:22PM 0 points [-]

Like Unfriendly AI, algae blooms are events that behave very differently from events we normally encounter.

I fear that the analogies have lost a crucial element. OrphanWilde considered Unfriendly AI "vaguely magical" in the post here. The algae bloom analogy also has very vague definitions, but the changes in population size of an algae bloom are something I would call "strongly non-magical".

I realize that you introduced the analogies to help make my argument precise.

Comment author: TheAncientGeek 21 February 2016 05:40:51PM 1 point [-]

Some of the smarter (large, naval) landmines are arguably both intelligent and unfriendly.

But they are not arguably dangerous because they are intelligent.

Comment author: buybuydandavis 18 February 2016 03:35:47AM 3 points [-]

Estimating the probability of human extinction

You need to be clear about whether you're talking about the extinction of the human species, or the extinction of all descendants of the human species. Does death by Unfriendly AI count as extinction, or evolution?

I like that both Bostrom and Rees attached dates to their estimates, though Rees talked about a civilization destroying event, not an extinction event.

since Earth-originating life has existed for some 3.5 billion years without an existential catastrophe having happened.

Human life is only a blink of an eye in those 3.5 billion years. How much of those 3.5 billion years would be compatible with human life? How many catastrophes have occurred in that time which would wipe us out today? Would wipe out civilization today?

Comment author: James_Miller 18 February 2016 01:43:26AM *  2 points [-]

Great overall, but I disagree with this: "while colonization would insulate us against a number of potential existential risks, there are some risks that it wouldn’t stop. A physics disaster on Earth, for example, could have consequences that are cosmic in scope. For example, the universe might not be in its most stable state. Consequently, a high-powered particle accelerator could tip the balance, resulting in a 'catastrophic vacuum decay, with a bubble of the true vacuum expanding at the speed of light.'"

If a positive singularity occurs and the solution to the Fermi paradox is that we are alone, I would like to make a copy of myself and put that copy on a spaceship that travels away from earth fast enough that, given sufficient time and the expansion of the universe, something starting at earth and traveling at the speed of light would not be able to reach me. As I understand it, once I have traveled far enough from earth, it will be impossible for something from earth to reach me regardless of my speed.

Comment author: SoerenE 18 February 2016 07:19:08AM 0 points [-]

I've seen this claim many places, including in the Sequences, but I've never been able to track down an authoritative source. It seems false in classical physics, and I know little about relativity. Unfortunately, my Google-Fu is too weak to investigate. Can anyone help?

Comment author: _rpd 18 February 2016 10:36:57AM 4 points [-]

this claim

Do you mean the metric expansion of space?

https://en.wikipedia.org/wiki/Metric_expansion_of_space

Because this expansion is caused by relative changes in the distance-defining metric, this expansion (and the resultant movement apart of objects) is not restricted by the speed of light upper bound of special relativity.

Comment author: SoerenE 19 February 2016 07:14:19AM 0 points [-]

Thank you. It is moderately clear to me from the link that James' thought-experiment is possible.

Do you know of a more authoritative description of the thought-experiment, preferably with numbers? It would be nice to have an equation where you give the speed of James' spaceship and the distance to it, and calculate if the required speed to catch it is above the speed of light.

Comment author: _rpd 19 February 2016 06:59:08PM *  1 point [-]

Naively, the required condition is v + dH > c, where v is the velocity of the spaceship, d is the distance from the threat and H is Hubble's constant.
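That naive condition can be written out directly; the constants below are assumed round values, and the model deliberately ignores the relativistic complications discussed next:

```python
# Naive unreachability condition from the comment: a light-speed signal
# from Earth can never catch the ship if v + d*H0 > c. This ignores
# relativistic corrections and the acceleration of the expansion, so it
# is only a back-of-the-envelope sketch.
C = 299_792.458   # speed of light, km/s
H0 = 70.0         # Hubble's constant, km/s per megaparsec (approximate)

def beyond_reach(v_kms: float, d_mpc: float) -> bool:
    """True if ship velocity plus Hubble recession velocity exceeds c."""
    return v_kms + d_mpc * H0 > C

# Even a stationary object becomes unreachable beyond roughly c/H0:
hubble_distance_mpc = C / H0   # ~4,283 Mpc, on the order of 14 billion light years
```

So in this toy model the ship's own speed only shaves distance off the c/H0 threshold; the expansion term dH does the real work.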

However, when discussing distances on the order of billions of light years and velocities near the speed of light, the complications are many, not to mention an area of current research. For a more sophisticated treatment see user Pulsar's answer to this question ...

http://physics.stackexchange.com/questions/60519/can-space-expand-with-unlimited-speed/

... in particular the graph Pulsar made for the answer ...

http://i.stack.imgur.com/Uzjtg.png

... and/or the Davis and Lineweaver paper [PDF] referenced in the answer.

Comment author: SoerenE 19 February 2016 08:16:47PM 0 points [-]

Wow. It looks like light from James' spaceship can indeed reach us, even if light from us cannot reach the spaceship.

Comment author: _rpd 19 February 2016 09:39:18PM 1 point [-]

Yes, until the distance exceeds the Hubble distance of that time; after that, the light from the spaceship will redshift out of existence as it crosses the event horizon. Wiki says that in around 2 trillion years, this will be true for light from all galaxies outside the local supercluster.

Comment author: philosophytorres 22 February 2016 05:51:11PM 1 point [-]

Thanks so much for these incredibly thoughtful responses. Very, very helpful.

Comment author: DanArmak 21 February 2016 05:30:23PM 0 points [-]

In sum, just as a dog can’t possibly comprehend any of the natural and anthropogenic risks mentioned above, so too could there be risks that forever lie beyond our epistemic reach

This seems to prove too much. For any outcome X, there may be ways of reaching it that are "beyond our epistemic reach", therefore the true probability must be higher than we think.

This also holds for ~X (not necessarily with the same evidential weight as for X). For instance, there may be events we are incapable of imagining which would drastically reduce existential risk, just like FAI would.

One might argue that such risks, if they exist at all, must be highly improbable, since Earth-originating life has existed for some 3.5 billion years without an existential catastrophe having happened. But this line of reasoning is deeply flawed: it fails to take into account that the only worlds in which observers like us could find ourselves are ones in which such a catastrophe has never occurred.

K-T, the extinction event famous for killing the dinosaurs, was only 66 Mya (million years ago), and would pretty certainly kill all humans if it were to reoccur today. So could other big extinction events that occurred in the past, like snowball earth scenarios.

The anthropic argument is misapplied here. Evolution has been extremely slow and 'undirected' in producing humans. If planetary sterilization events were much more common, then we would expect to observe a much shorter evolutionary past: tens of millions of years, not billions of them.

Comment author: James_Miller 21 February 2016 08:29:32PM *  1 point [-]

If planetary sterilization events were much more common, then we would expect to observe a much shorter evolutionary past: tens of millions of years, not billions of them.

If planetary sterilization events are common and it takes a long time for intelligent life to develop then we would expect to observe the Fermi paradox.

Comment author: DanArmak 23 February 2016 03:09:58PM 0 points [-]

What evidence is there that it takes a lot of time for intelligence to evolve, in the sense of requiring very many sequential steps?

To me it seems intelligence is simply unlikely to evolve at any given point in time. Roughly equally unintelligent animals may have existed for tens or hundreds of millions of years before humans evolved in a few million years. (Who's to say if the ancient ancestors of birds 100 million years ago were as smart as some birds are today?) Before that, life existed for billions of years before multicellular creatures evolved.

Comment author: James_Miller 23 February 2016 03:17:04PM 0 points [-]

What evidence is there that it takes a lot of time for intelligence to evolve, in the sense of requiring very many sequential steps?

You probably need multicellular life first, and this takes a while and does involve many steps.

Comment author: DanArmak 24 February 2016 02:38:56PM 0 points [-]

What evidence do we have that it takes a long time, other than that it happened late in history, which we already accounted for? My impression is that there weren't progressively-more-multicellular forms evolving into one another over a very long period of time. The first animals lived possibly less than 700 Mya; complex Ediacaran animals appeared 575 Mya; and by 510 Mya we had ostracoderms, which were surely fully multicellular (i.e. with a germline and complex cell differentiation and organs).

That's on the order of 100 million years for some complexity, and possibly some more tens of millions of years for more. But it's also possible that multicellularity evolved much more quickly, and the animals just didn't evolve larger and more complex forms for a while due to e.g. low sea oxygen levels, not having evolved eyes yet, etc.