Using the Copernican mediocrity principle to estimate the timing of AI arrival
Gott famously estimated the future time duration of the Berlin wall's existence:
“Gott first thought of his "Copernicus method" of lifetime estimation in 1969 when stopping at the Berlin Wall and wondering how long it would stand. Gott postulated that the Copernican principle is applicable in cases where nothing is known; unless there was something special about his visit (which he didn't think there was) this gave a 75% chance that he was seeing the wall after the first quarter of its life. Based on its age in 1969 (8 years), Gott left the wall with 75% confidence that it wouldn't be there in 1993 (1961 + (8/0.25)). In fact, the wall was brought down in 1989, and 1993 was the year in which Gott applied his "Copernicus method" to the lifetime of the human race.” (https://en.wikipedia.org/wiki/J._Richard_Gott)
The most interesting unknown in the future is the time of creation of Strong AI. Our priors are insufficient to predict it because it is such a unique task. So it is reasonable to apply Gott’s method.
AI research began around 1950, and so is now about 65 years old. If we are currently at a random moment during AI research, then there is a 50% probability of AI being created within the next 65 years, i.e. by 2080. Not very optimistic. Further, we can say that the probability of its creation within roughly the next 1300 years (65/0.05) is 95%. So we get a rather vague prediction that AI will almost certainly be created within the next thousand years or so, and few people would disagree with that.
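Gott's rule is simple enough to write down directly. A minimal sketch (the function name and the 2015 vantage point are my own choices):

```python
def gott_bound(age, confidence):
    """Copernican (delta-t) estimate: given only the current age of a
    phenomenon, return the total lifetime it will not exceed with the
    given confidence, assuming the present moment is uniformly random
    over that lifetime."""
    # With probability `confidence` we are past the first (1 - confidence)
    # fraction of the lifetime, so total <= age / (1 - confidence).
    return age / (1 - confidence)

# Berlin Wall in 1969: 8 years old, 75% confidence -> total <= 32 years,
# i.e. gone by 1961 + 32 = 1993.
assert gott_bound(8, 0.75) == 32
# AI research in 2015: 65 years old.
assert gott_bound(65, 0.5) == 130                  # 50%: by 1950 + 130 = 2080
assert abs(gott_bound(65, 0.95) - 1300) < 1e-6     # 95%: ~1300 years total
```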
But if we include the exponential growth of AI research in this reasoning (in the same way as the Doomsday argument uses birth rank instead of time, thereby accounting for the changing density of population), we get a much earlier predicted date.
We can get data on AI research growth from Luke’s post:
“According to MAS, the number of publications in AI grew by 100+% every 5 years between 1965 and 1995, but between 1995 and 2010 it has been growing by about 50% every 5 years. One sees a similar trend in machine learning and pattern recognition.”
From this we can conclude that the doubling time of AI research is five to ten years (updated for the recent boom in neural networks, which again suggests roughly five years).
This means that during the next five years more AI research will be conducted than in all the previous years combined.
If we apply the Copernican principle to this distribution, then there is a 50% probability that AI will be created within the next five years (i.e. by 2020) and a 95% probability that it will be created within the next 15–20 years; thus it will almost certainly be created before 2035.
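The conversion from a Copernican bound over research effort back into calendar years can be sketched as follows, assuming cumulative effort doubles every d years (the function name is mine):

```python
import math

def effort_weighted_eta(doubling_years, confidence):
    """If cumulative research effort doubles every `doubling_years`, and we
    occupy a random point in total effort (not time), then with probability
    `confidence` the remaining effort is at most c/(1-c) times the past
    effort; convert that back into years via the exponential growth."""
    effort_multiple = confidence / (1 - confidence)   # remaining / past
    # Years for cumulative effort to grow by a factor (1 + effort_multiple):
    return doubling_years * math.log2(1 + effort_multiple)

# 5-year doubling: 50% within one doubling time, 95% within ~21.6 years.
assert abs(effort_weighted_eta(5, 0.5) - 5.0) < 1e-9
assert abs(effort_weighted_eta(5, 0.95) - 5 * math.log2(20)) < 1e-9
```

With a five-year doubling time this reproduces the 50%-in-five-years figure, and puts the 95% bound at about 21–22 years, consistent with "15–20 years" up to rounding.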
This conclusion depends on several assumptions:
• AI is possible
• The exponential growth of AI research will continue
• The Copernican principle has been applied correctly.
Interestingly this coincides with other methods of AI timing predictions:
• Conclusions of the most prominent futurologists (Vinge – 2030, Kurzweil – 2029)
• Survey of the field of experts
• Prediction of Singularity based on extrapolation of history acceleration (Forrester – 2026, Panov-Skuns – 2015-2020)
• Brain emulation roadmap
• Computer power brain equivalence predictions
• Plans of major companies
It is clear that this implementation of the Copernican principle may have many flaws:
1. One possible counterargument is something akin to Murphy's law: any complex project requires much more time and money than expected before it can be completed. It is not clear how this applies to many competing projects, but the field of AI is known to be more difficult than it appears to researchers.
2. Also, the moment at which I am observing AI research is not really random, as it was in the Doomsday argument created by Gott in 1993, and I probably could not have applied the method to a time before it became known.
3. The number of researchers is not the same as the number of observers in the original DA. If I were a researcher myself, it would be simpler, but I do not do any actual work on AI.
Perhaps this method of future prediction should be tested on simpler tasks. Gott successfully tested his method by predicting the running time of Broadway shows. But now we need something more meaningful, yet testable within a one-year timeframe. Any ideas?
Doomsday argument for Anthropic Decision Theory
tl;dr: there is no real Doomsday argument in ADT. Average utilitarians over-discount the future compared with total utilitarians, but ADT can either increase or decrease this effect. The SIA Doomsday argument can also be constructed, but this is simply a consequence of total utilitarian preferences, not of an increased probability of doom.
I've been having a lot of trouble formulating a proper version of the doomsday argument for Anthropic Decision Theory (ADT). ADT mimics SIA-like decisions (for total utilitarians, those with a population-independent utility function, and certain types of selfish agents), and SSA-like decisions (for average utilitarians, and a different type of selfish agent). So all the paradoxes of SIA and SSA should be expressible in it. And that is indeed the case for the presumptuous philosopher and the Adam and Eve paradox. But I haven't found a good formulation of the Doomsday argument.
And I think I know why now. It's because the Doomsday argument-like effects come from the preferences of those average utilitarian agents. Adding anthropic effects does not make the Doomsday argument stronger! It's a non-anthropic effect of those preferences. ADT may allow certain selfish agents to make acausal contracts that make them behave like average utilitarian agents, but it doesn't add any additional effect.
Doomsday decisions
Since ADT is based on decisions, rather than probabilities, we need to formulate the Doomsday argument in decision form. The most obvious method is a decision that affects the chances of survival of future generations.
But those decisions are dominated by whether the agent desires future generations or not! Future generations of high average happiness are desired, those of lower average happiness are undesirable. This effect dominates the decisions of average utilitarians, making it hard to formulate a decision that addresses 'risk of doom' in isolation. There is one way of doing this, though: looking at how agents discount the future.
Discounting the future
Consider the following simple model. If humanity survives for n generations, there will have been a total of Gq^n humans who ever lived, for some G (obviously q > 1). At each generation, there is an independent probability p of surviving to the next generation, with pq < 1 (so the expected population is finite). At each generation, there is an (independent) choice of consuming a resource to get X utilities, or investing it for the next generation, who will automatically consume it for rX utilities.
Assume we are now at generation n. From the total utilitarian perspective, consuming the resource gives X with certainty, and rX with probability p. So the total utilitarian will delay consumption iff pr>1.
The average utilitarian must divide by total population. Let C be the current expected reciprocal of the population. Current consumption gives an expected XC utilities. By symmetry arguments, we can see that, if humanity survives to the next generation (an event of probability p), the expected reciprocal of population is C/q. If humanity doesn't survive, there is no delayed consumption; so the expected utility of delaying consumption is prXC/q. Therefore the average utilitarian will delay consumption iff pr/q > 1.
So the average utilitarian acts as if they discounted the future by p/q, while the total utilitarian discounts it by p. In a sense, the average utilitarian seems to fear the future more.
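The two thresholds can be checked with a toy comparison (the common factors X and C cancel out; the illustrative numbers are mine):

```python
def total_delays(p, r):
    """Total utilitarian: delaying gives r*X with survival probability p,
    so delay iff p*r > 1 (the common factor X cancels)."""
    return p * r > 1

def average_delays(p, r, q):
    """Average utilitarian: delayed consumption is also divided by a
    q-times-larger population, so delay iff p*r/q > 1."""
    return p * r / q > 1

# With survival probability p = 0.95 and return r = 1.2, the total
# utilitarian delays (0.95 * 1.2 = 1.14 > 1), but with population growth
# q = 1.5 the average utilitarian consumes now (1.14 / 1.5 < 1).
assert total_delays(0.95, 1.2)
assert not average_delays(0.95, 1.2, 1.5)
```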
But where's the ADT in this? I've derived this result just by considering what an average utilitarian would do for any given n. Ah, but that's because of the particular choice I've made for population growth and risk rate. A proper ADT average utilitarian would compute the different p_i and q_i for all generation steps and consider the overall value of "consume now" decisions. In general, this could result in discounting that is either higher or lower than the myopic, one-generation-only, average utilitarian. The easy way to see this is to imagine that p is as above (and p is small), as are almost all the q - except for q_n. Then the ADT average utilitarian discount rate is still roughly p/q, while the myopic average utilitarian discount rate at generation n is p/q_n, which could be anything.
So the "Doomsday argument" effect - the higher discounting of the future - is an artefact of average utilitarianism, while the anthropic effects of ADT can either increase or decrease this effect.
SIA Doomsday
LessWronger turchin reminded me of Katja Grace's SIA doomsday argument. To simplify this greatly, it's the argument that since SIA prefers worlds with many people in them (most especially many people "like us"), this increases the probability that there are/were/will be many civilizations at our level of development. Hence the Great Filter - the process that stops the universe from being filled with life - is most likely in the future for our kind of civilizations. Hence the probability of doom is higher.
How does this work, translated into ADT format? Well, imagine there were two options: either the great filter is in the distant evolutionary past, or it is in the future. The objective uncertainty is 50-50 on either possibility. If the great filter is in the future, your civilization has a probability p of succumbing to it (thus there is a total probability of p/2 of your civilization succumbing to a future great filter). You have the option of paying a cost C to avoid the great filter entirely for your civilization. You derive a benefit B from your civilization surviving.
Then you will pay C iff C<Bp/2. But now imagine that you are a total utilitarian, you also care about the costs and benefits from other civilizations, and you consider your decision is linked with theirs via ADT. If the great filter is early, let's assume that your civilization is the only one still in existence. If the great filter is late, then there are Ω civilizations still around.
Therefore if the great filter is early, the total cost is C (your civilization, the only one around, pays C, but gets no benefit as there is no late great filter). However, if the great filter is late, the total cost is ΩC and the total benefit is ΩBp (all Ω civilizations pay C, and each avoids a probability-p doom worth B). So the expected utility gain is proportional to ΩBp - (Ω+1)C, and you will pay the cost iff C < BpΩ/(Ω+1).
To an outsider this looks like you believe the probability of a late great filter is Ω/(Ω+1), rather than 0.5. However, this is simply a consequence of your total utilitarian preferences, and does not reflect an objectively larger chance of death.
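A minimal sketch of the linked decision, with illustrative numbers of my own:

```python
def pays_cost(C, B, p, omega):
    """Expected gain of the linked 'pay C' decision: with probability 1/2
    the filter is early (only your civilization exists, pays C, no benefit);
    with probability 1/2 it is late (omega civilizations each pay C, and
    each avoids a probability-p doom worth B)."""
    gain = 0.5 * (-C) + 0.5 * (omega * B * p - omega * C)
    return gain > 0

# The break-even point is C = B*p*omega/(omega + 1): to an outsider this
# looks like assigning probability omega/(omega + 1) to a late filter.
B, p, omega = 10.0, 0.4, 100
threshold = B * p * omega / (omega + 1)
assert pays_cost(0.99 * threshold, B, p, omega)
assert not pays_cost(1.01 * threshold, B, p, omega)
```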
Of all the SIA-doomsdays in all the worlds...
Ideas developed with Paul Almond, who kept on flogging a dead horse until it started showing signs of life again.
Doomsday, SSA and SIA
Imagine there's a giant box filled with people, and clearly labelled (inside and out) "(year of some people's lord) 2013". There's another giant box somewhere else in space-time, labelled "2014". You happen to be currently in the 2013 box.
Then the self-sampling assumption (SSA) produces the doomsday argument. It works approximately like this: SSA has a preference for universes with smaller numbers of observers (since it's more likely that you're one-in-a-hundred than one-in-a-billion). Therefore we expect the number of observers in 2014 to be smaller than we would otherwise "objectively" believe: the likelihood of doomsday is higher than we thought.
What about the self-indication assumption (SIA) - that makes the doomsday argument go away, right? Not at all! SIA has no effect on the number of observers expected in 2014, but it increases the expected number of observers in 2013. Thus we still expect the number of observers in 2014 to be lower than we otherwise thought. There's an SIA doomsday too!
Enter causality
What's going on? SIA was supposed to defeat the doomsday argument! What happened is that I've implicitly cheated - by naming the boxes "2013" and "2014", I've heavily implied that these "boxes" correspond to two subsequent years. But then I've treated them as independent for SIA, like two literal distinct boxes.
The REAL SIA doomsday
Many thanks to Paul Almond for developing the initial form of this argument.
My previous post was somewhat confusing and potentially misleading (and the idea hadn't fully gelled in my mind). But here is a much easier way of seeing what the SIA doomsday really is.
Imagine if your parents had rolled a die to decide how many children to have. Knowing only this, SIA implies that the die was more likely to have come up "6" than "1" (because there is a higher chance of you existing in that case). But now, following the family tradition, you decide to roll a die for your children. SIA now has no impact: the die is equally likely to show any number. So SIA predicts high numbers in the past, and no preference for the future.
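The die-roll update can be computed exactly, weighting each outcome by the number of children it creates:

```python
from fractions import Fraction

# SIA weights each die outcome k = 1..6 by the number of children created,
# since a larger family makes your own existence more likely.
prior = {k: Fraction(1, 6) for k in range(1, 7)}
weights = {k: prior[k] * k for k in prior}
total = sum(weights.values())
posterior = {k: w / total for k, w in weights.items()}

assert posterior[6] == Fraction(6, 21)   # a "6" is six times as likely...
assert posterior[1] == Fraction(1, 21)   # ...as a "1", under SIA
# Your own future roll gets no such update: each face stays at 1/6.
```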
This can be generalised into an SIA "doomsday":
- Everything else being equal, SIA implies that the population growth rate in your past is likely to be higher than the rate in the future; i.e. it predicts an observed decline, not in population, but in population growth rates.
SIA doomsday
Edit: the argument is presented more clearly in a subsequent post.
Many thanks to Paul Almond for developing the initial form of this argument.
It is well known in these circles that the self-sampling assumption (SSA) leads to the doomsday argument. The self-indication assumption (SIA) was developed to counter the doomsday argument. This is an old debate; but what is interesting is that SIA has its own doomsday argument - of a rather interesting and different form.
To see this, let's model the population of a planet somewhat like Earth. From century to century, the planet's population can increase, decrease or stay the same with equal probability. If it increases, it will increase by one billion two thirds of the time, and by two billion one third of the time - and the same for decreases (if it would overshoot zero, it stops at zero). Hence, each century, the probabilities of population change are:
| Pop level change | +2 Billion | +1 Billion | +0 Billion | -1 Billion | -2 Billion |
|---|---|---|---|---|---|
| Probability | 1/9 | 2/9 | 3/9 | 2/9 | 1/9 |
During the century of the Three Lice, there were 3 billion people on the planet. Two centuries later, during the century of the Anchovy, there will still be 3 billion people on the planet. If you were alive on this planet during the intermediate century (the century of the Fruitbat), and knew those two facts, what would your estimate be for the current population?
From the outside, this is easy. The most likely answer is that there are still 3 billion in the intermediate century, which happens with probability 9/19 (= (3/9)*(3/9), renormalised). But there can also be 4 or 2 billion, with probability 4/19 each, or 5 or 1 billion, with probability 1/19 each. The expected population is 3 billion, as expected.
Now let's hit this with SIA. This weighs the populations by their sizes, changing the probabilities to 5/57, 16/57, 27/57, 8/57 and 1/57, for populations of five, four, three, two and one billion respectively. Larger populations are hence more likely; the expected population is about 3.28 billion.
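Both sets of numbers can be checked exactly:

```python
from fractions import Fraction

# Paths 3 -> x -> 3 billion over two centuries; per-century change
# probabilities for -2..+2 billion are 1/9, 2/9, 3/9, 2/9, 1/9.
step = {-2: Fraction(1, 9), -1: Fraction(2, 9), 0: Fraction(3, 9),
        +1: Fraction(2, 9), +2: Fraction(1, 9)}
outside = {3 + d: step[d] * step[-d] for d in step}   # must return to 3
norm = sum(outside.values())
outside = {pop: pr / norm for pop, pr in outside.items()}
assert outside[3] == Fraction(9, 19) and outside[4] == Fraction(4, 19)

# SIA reweights each possibility by its population size.
sia = {pop: pr * pop for pop, pr in outside.items()}
norm = sum(sia.values())
sia = {pop: pr / norm for pop, pr in sia.items()}
assert sia[3] == Fraction(27, 57) and sia[5] == Fraction(5, 57)

expected = sum(pop * pr for pop, pr in sia.items())
assert abs(float(expected) - 3.28) < 0.01   # ~3.28 billion
```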
(For those of you curious about what SSA says, that depends on the reference class. For the reference class of people alive during the century of the Fruitbat, it gives the same answer as the outside answer. As the reference class increases, it moves closer to SIA.)
SIA doomsday
So SIA tells us that we should expect a spike during the current century - and hence a likely decline into the next century. The exact numbers are not important: if we know the population before our current time and the population after, then SIA implies that the current population should be above the trendline. Hence (it seems) SIA predicts a decline from our current population (or at least a decline from the current trendline) - a doomsday argument.
Those who enjoy anthropic reasoning can take a moment to see what is misleading about that statement. Go on, do it.
Go on.
The Doomsday Argument and Self-Sampling Assumption are wrong, but induction is alive and well.
Since the Doomsday Argument still is discussed often on Less Wrong, I would like to call attention to my new, short, self-published e-book, The Longevity Argument, which is a much-revised and much-expanded work that began with my paper, “Past Longevity as Evidence for the Future,” in the January 2009 issue of Philosophy of Science. In my judgment, my work provides a definitive refutation of the Doomsday Argument, identifying two elementary errors in the argument.
The first elementary error is that the Doomsday Argument conflates total duration and future duration. Although the Doomsday Argument’s Bayesian formalism is stated in terms of total duration, all attempted real-life applications of the argument—with one exception, a derivation by Gott (1994, 108) of his delta t argument introduced in Gott 1993—actually plug in prior probabilities for future duration.
For example, Leslie (1996, 198–200) presents a Bayesian equation stated in terms of prior probabilities of total instances. But then Leslie (1996, 201–203) plugs into this equation prior probabilities for future instances: humans being born for the next 150 years vs. humans being born for the next many thousands of centuries. Bostrom (2002, 94–96) recounts Leslie’s general argument in terms of births instead of durations of time, using 200 billion total births vs. 200 trillion total births. (A closer parallel to Leslie 1996 would be 80 billion total births vs. 80 trillion total births.) But the error persists: the actual prior probabilities that are plugged in to Leslie’s Bayesian equation, based on all of the real-life risks actually considered by Leslie (1996, 1–153) and Bostrom (2002, 95), are of future births, not total births.
In other words, Leslie supposes a prior probability of doom within the next 150 years or roughly 20 billion births. (The prior probabilities supposed in the Doomsday Argument are prior to knowledge of one’s birth rank.) Leslie then assumes that—since there have already been, say, 60 billion births—this prior probability is equal to the prior probability that the total number of births will have been 80 billion births. However, in the absence of knowledge of one’s birth rank, this assumption is absurd.
The second elementary error is the Doomsday Argument’s use of the Self-Sampling Assumption, which is contradicted by the prior information in all attempts at real-life applications in the literature.
For example, many risks to the human race—including most if not all the real-life risks discussed by Leslie and Bostrom—can reasonably be described mathematically as Poisson processes. Then the Self-Sampling Assumption implies that the risk per birth—the ‘lambda’ in the Poisson formula—is constant throughout the duration of the human race. But Leslie (1996, 202) also supposes that if mankind survives past the next century and a half, then the risk per birth will drop dramatically, because mankind will begin spreading throughout the galaxy. (The Doomsday Argument implicitly relies on such a drop in lambda—and the resultant bifurcation of risk into ‘doom soon’ and ‘doom very much later’—for the argument’s significant claims.) In other words, Leslie’s prior probabilities of doom are mathematical contradictions of the Self-Sampling Assumption that Leslie and Bostrom invoke in the Doomsday Argument.
In my book, I perform Bayesian analyses that correct these errors. These analyses demonstrate that gaining more knowledge of the past can indeed update one’s assessment of the future; but this updating is consistent with common sense instead of with the Doomsday Argument. In short, while refuting the Doomsday Argument, I vindicate induction.
The price of my e-book is $4. However, professional scholars and educators are invited to email me to request a complimentary evaluation copy (not for further distribution, of course). I extend the same offer to the first ten Less Wrong members with a Karma Score of 100 or greater who email me. (I may send to more than ten, or to some with lower Karma Scores, but I don’t want to make an open-ended commitment.)
For an abstract of the e-book, see this entry on PhilPapers. For a non-technical introduction, see here on my blog.
The e-book covers much more than the Doomsday Argument; here is a one-sentence summary: The Doomsday Argument, Self-Sampling Assumption, and Self-Indication Assumption are wrong; Gott’s delta t argument (Gott 1993, 315–316; 1994) underestimates longevity, providing lower bounds on probabilities of longevity, and is equivalent to Laplace’s Rule of Succession (Laplace 1812, xii–xiii; [1825] 1995, 10–11); but Non-Parametric Predictive Inference based on the work of Hill (1968, 1988, 1993) and Coolen (1998, 2006) forms the basis of a calculus of induction.
References
Bostrom, Nick (2002), Anthropic Bias: Observation Selection Effects in Science and Philosophy. New York & London: Routledge.
Coolen, Frank P.A. (1998), “Low Structure Imprecise Predictive Inference For Bayes' Problem”, Statistics & Probability Letters 36: 349–357.
——— (2006), “On Probabilistic Safety Assessment in the Case of Zero Failures”, Journal of Risk and Reliability 220 (Proceedings of the Institute of Mechanical Engineers O): 105–114.
Gott, J. Richard III (1993), “Implications of the Copernican Principle for our Future Prospects”, Nature 363: 315–319.
——— (1994), “Future Prospects Discussed”, Nature 368: 108.
Hill, Bruce M. (1968), “Posterior Distribution of Percentiles: Bayes' Theorem for Sampling from a Population”, Journal of the American Statistical Association 63: 677–691.
——— (1988), “De Finetti’s Theorem, Induction, and A(n) or Bayesian Nonparametric Predictive Inference”, Bayesian Statistics 3, Edited by Bernardo J.M., DeGroot, M.H., Lindley, D.V. & Smith A.F.M. Oxford: Oxford University Press: 211–241.
——— (1993), “Parametric Models for A(n): Splitting Processes and Mixtures”, Journal of the Royal Statistical Society B 55: 423–433.
Laplace, Pierre-Simon (1812), Théorie Analytique des Probabilités. Paris: Courcier.
——— ([1825] 1995), Philosophical Essay on Probabilities. Translated by Andrew I. Dale. Originally published as Essai philosophique sur les probabilités (Paris: Bachelier). New York: Springer-Verlag.
Leslie, John (1996), The End of the World: The Science and Ethics of Human Extinction. London: Routledge.
Here is an Addendum addressing Manfred's question, asking me to elaborate on my statement: "the Self-Sampling Assumption implies that the risk per birth—the ‘lambda’ in the Poisson formula—is constant throughout the duration of the human race."
To avoid integrals, let me discuss a binomial process, which is a discrete version of a Poisson process.
Suppose you are studying a species from another planet. Suppose the only main risk to the species is an asteroid hitting the planet. Suppose the risk of an asteroid hit in a year is q. Given that the present moment is within a window (from the past through to the future) of N years without an asteroid hit, what is the probability P(Y) that the present moment is within year Y of that window?
P(Y) = [q(1 – q)^Y (1 – q)^(N–Y) q]/B, where B is the probability that the window is N years. Thus
P(Y) = [q^2 (1 – q)^N]/B.
Since Y does not appear in this formula, it is clear that P(Y) is constant for all Y. That is, since q is constant, P(Y) is uniform in [1, N], and P(Y) = 1/N. This result is equivalent to the Self-Sampling Assumption with units of time (years) as the reference class.
But suppose that the risk of an asteroid hit in the past was q, but the species has just built an asteroid destroyer, and the risk in the future is r where r << q. Then
P(Y) = [q(1 – q)^Y (1 – r)^(N–Y) r]/B.
[8/16/2011: Corrected the final 'r' in the above equation from a 'q'.] Y does appear in this formula. Clearly, the greater the value of Y, the smaller the value of P(Y). That is, contrary to the Self-Sampling Assumption, it is very likely that the present moment is in the early part of the window of N years.
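Both cases can be illustrated numerically with the formulas above (the function name and sample risks are mine):

```python
def window_pmf(q, r, N):
    """P(Y): probability that the present is year Y of an N-year hit-free
    window, with per-year hit risk q in the past and r in the future."""
    weights = [q * (1 - q) ** Y * (1 - r) ** (N - Y) * r
               for Y in range(1, N + 1)]
    B = sum(weights)                     # the normalizer from the text
    return [w / B for w in weights]

# Constant risk (r = q): uniform over the window, as SSA requires.
pmf = window_pmf(0.01, 0.01, 50)
assert all(abs(p - 1 / 50) < 1e-12 for p in pmf)

# Risk drops sharply in the future (r << q): early years dominate,
# contradicting SSA with years as the reference class.
pmf = window_pmf(0.01, 0.0001, 50)
assert pmf[0] > pmf[-1]
```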
The above argument demonstrates why the choice of ‘reference class’ matters. If the risk is constant per unit time, then the correct reference class is units of time. If the risk is constant per birth, then the correct reference class is births. Suppose birth rates increase exponentially. Then constant risk per unit time precludes constant risk per birth, and vice versa. The two reference classes cannot both be right. More generally, if the prior information stipulates that risk per birth is not constant, then the Self-Sampling Assumption using a reference class of births does not apply.
This passage is from my book (p. 59):
Here is a more philosophical and less mathematical perspective on the same point. SSA [the Self-Sampling Assumption] rests on the premise that all indexical information has been removed from the prior information. One's birth rank, which applies only to oneself, is such indexical information that is removed from the prior information before SSA is invoked. But even in the absence of birth rank, the prior information may—and usually does—include information that is indexical. For example, if the prior information states that λ_past is large and λ_future is small, then the prior information is stating something that is true only of the present—namely, that the present is when λ changes abruptly from a large value to a small value. It turns out that this indexical information contradicts the mathematical conclusion of SSA. Moreover, this indexical information cannot be removed without consequence from the prior information, because the prior probabilities rest on it.
Perhaps the statement that Manfred quotes would have been clearer if I had instead written the following: The Self-Sampling Assumption implies that the risk per birth—the ‘lambda’ in the Poisson formula—is constant throughout the past and present.
Bayesian Doomsday Argument
First, if you don't already know it, Frequentist Doomsday Argument:
There's some number of total humans. There's a 95% chance that you don't fall within the first 5% of them. There have been about 60 to 120 billion people so far, so there's a 95% chance that the total will be less than 1.2 to 2.4 trillion.
I've modified it to be Bayesian.
First, find the priors:
Do you think it's possible that the total number of sentients that have ever lived or will ever live is less than a googolplex? I'm not asking if you're certain, or even if you think it's likely - just whether it's more likely than one in infinity. I think it is. This means that the prior must be normalizable.
If we take P(T=n) ∝ 1/n, where T is the total number of people, it can't be normalized, as 1/1 + 1/2 + 1/3 + ... is an infinite sum. If it decreases faster, it can at least be normalized. As such, we can use 1/n as an upper limit.
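The normalization point can be illustrated with partial sums:

```python
import math

# Partial sums of 1/n grow like log N, so P(T=n) ∝ 1/n cannot be
# normalized over all n; any faster decay, e.g. 1/n^2, converges.
harmonic = sum(1 / n for n in range(1, 10**6 + 1))
assert harmonic > math.log(10**6)        # still growing, like log N

inverse_square = sum(1 / n**2 for n in range(1, 10**6 + 1))
assert inverse_square < math.pi**2 / 6   # bounded by its limit pi^2/6
```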
Of course, that's just the limit of the upper tail, so maybe that's not a very good argument. Here's another one:
We're not so much dealing with lives as with life-years. A year is a pretty arbitrary measurement, so we'd expect the distribution to be much the same over most of its range if we used, say, days instead. That kind of scale-invariance requires the 1/n distribution.
After that,
T = total number of people
U = number you are
P(T=n) ∝ 1/n
U = m
P(U=m|T=n) ∝ 1/n
P(T=n|U=m) = P(U=m|T=n) * P(T=n) / P(U=m)
= (1/n^2) / P(U=m)
P(T>n|U=m) = ∫_n^∞ P(T=x|U=m) dx
= (1/n) / P(U=m)
And to normalize:
P(T>m|U=m) = 1
= (1/m) / P(U=m)
m = 1/P(U=m)
P(T>n|U=m) = (1/n)*m
P(T>n|U=m) = m/n
So, the probability of there being more than 1 trillion people in total, if there have been 100 billion so far, is 1/10.
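The m/n result can be checked numerically against the 1/n^2 posterior (working in billions; the Riemann-sum helper is my own):

```python
def tail_probability(m, n, cap=10**6, steps=200_000):
    """Numerically integrate the posterior density proportional to 1/x^2
    on [m, infinity), truncated at `cap`, and return P(T > n | U = m)."""
    dx = (cap - m) / steps
    xs = [m + (i + 0.5) * dx for i in range(steps)]   # midpoint rule
    norm = sum(dx / x**2 for x in xs)
    tail = sum(dx / x**2 for x in xs if x > n)
    return tail / norm

# 100 billion people so far; probability the total exceeds 1 trillion
# should come out near m/n = 100/1000 = 0.1.
p = tail_probability(m=100, n=1000)
assert abs(p - 0.1) < 0.01
```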
There are still a few issues with this. It assumes P(U=m|T=n) ∝ 1/n. This seems like it makes sense: if there's a million people, there's a one-in-a-million chance of being the 268,547th. But if there are also a trillion sentient animals, the chance of being the nth person won't change that much between a million and a billion people. There are a few ways I can amend this.
First: a = number of sentient animals. P(U=m|T=n) ∝ 1/(a+n). This would make the end result P(T>n|U=m) = (m+a)/(n+a).
Second: Just replace every mention of people with sentients.
Third: Take this as a prediction of the number of sentients who aren't humans who have lived so far.
The first would work well if we can find the number of sentient animals without knowing how many humans there will be. Assuming we don't take the time to terraform every planet we come across, this should work okay.
The second would work well if we did terraform every planet we came across.
The third seems a bit weird. It gives a smaller answer than the other two - smaller even than what you'd expect for animals alone. It does this because it folds in a Doomsday Argument against animals being sentient. You can work that out separately: just say T is the total number of humans, and U is the total number of animals. Unfortunately, you have to know the total number of humans to work out how many animals are sentient, and vice versa. As such, the combined argument may be more useful. It won't tell you how many of the denizens of the planets we colonise will be animals, but I don't think it's actually possible to tell that.
One more thing: you have more information. You have a lifetime of evidence, some of which can be used in these predictions. The lifetime of humanity isn't obvious. We might make it to the heat death of the universe, or we might just kill each other off in a nuclear or biological war in a few decades. We might also be annihilated by a paperclipper somewhere in between. As such, I don't think the evidence that way is very strong.
The evidence for animals is stronger. Emotions aren't exclusive to intelligence, and it doesn't seem animals would have to be that intelligent to be sentient. Even so, how sure can you really be? This is much more subjective than the doomsday part, and the evidence against their sentience is staggering - I think so, anyway. How many animals are there at different levels of intelligence?
Also, there are the priors for the total human population so far. I've read estimates varying between 60 and 120 billion. I don't think a factor of two really matters too much for this discussion.
So, what can we use for these priors?
Another issue is that this is for all of space and time, not just Earth.
Consider that you're the mth person (or sentient) from the lineage of a given planet. l(m) is the number of planets with a lineage of at least m people. N is the total number of people ever, n is the number on the average planet, and p is the number of planets.
l(m)/N
=l(m)/(n*p)
=(l(m)/p)/n
l(m)/p is the portion of planets that made it this far. This increases with n, so it weakens my argument, but only to a limited extent - I'm not sure by how much, though. Instinct says that l(m)/p is 50% when m = n, but the mean is not the median. I'd expect a left skew, which would make l(m)/p much lower than that. Even so, if you placed it at 0.01%, this would mean that it's a thousand times less likely at that value. The argument still takes the estimate down orders of magnitude below what you'd otherwise think, so that's not really that significant.
Also, a back-of-the-envelope calculation:
Assume, against all odds, there are a trillion times as many sentient animals as humans, and we happen to be the humans. Also, assume humans only increase their own numbers, and they're at the top percentile for the populations you'd expect. Also, assume 100 billion humans so far.
n = 1,000,000,000,000 * 100,000,000,000 * 100
n = 10^12 * 10^11 * 10^2
n = 10^25
Here's more what I'd expect:
Humanity eventually puts up a satellite to collect solar energy. Once they do one, they might as well do another, until they have a Dyson swarm. Assume 1% efficiency. Also, assume humans still use their whole bodies instead of being brains in vats. Finally, assume they get fed with 0.1% efficiency. And assume an 80-year lifetime.
n = solar luminosity * 1% / power of a human * 0.1% * lifetime of Sun / lifetime of human
n = 4 * 10^26 Watts * 0.01 / 100 Watts * 0.001 * 5,000,000,000 years / 80 years
n = 2.5 * 10^27
By the way, the value I used for power of a human is after the inefficiencies of digesting.
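Both back-of-the-envelope numbers above can be reproduced directly:

```python
# Dyson-swarm estimate: solar luminosity * 1% capture, 100 W per human fed
# at 0.1% efficiency, over the Sun's remaining 5e9 years in 80-year lives.
n_dyson = 4e26 * 0.01 / 100 * 0.001 * (5e9 / 80)
assert abs(n_dyson - 2.5e27) / 2.5e27 < 1e-9

# Animal-heavy estimate: 1e12 sentient animals per human, 1e11 humans so
# far, and a top-percentile factor of 100.
n_animals = 1e12 * 1e11 * 1e2
assert abs(n_animals - 1e25) / 1e25 < 1e-9
```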
Even with assumptions that extreme, we couldn't use this planet to its full potential. Granted, that requires mining pretty much the whole planet, but with a Dyson sphere you can do that in a week, or two years with the efficiency I gave.
It actually works out to about 150 tons of Earth per person. How much do you need to get the elements to make a person?
Incidentally, I rewrote the article, so don't be surprised if some of the comments don't make sense.