I might need a better title (it has now been updated), but here goes anyway:

I've been considering this for a while now. Suppose we reach a point where we can live for centuries, maybe even millennia: how do we keep things balanced? Even assuming we're as efficient as possible, there's a limit to how many resources we can have, which means a hard cap on the number of people who can exist at any given moment, even if we explore what we can of the galaxy and use every available resource. In a stable population, births and deaths would have to occur at roughly the same rate.

How would this be achieved? Somehow limiting lifespan, or children, assuming it's available to a majority? Or would this lead to a genespliced, technologically augmented and essentially immortal elite that the poor, unaugmented ones would have no chance of measuring up to? I'm sorry if this has already been considered; I'm very uneducated on the topic. If it has, could someone maybe link an analysis of the topic of lifespans and the like?

86 comments

If your civilization expands at a cubic rate through the universe, you can have one factor of linear growth for population (each couple has exactly 2 children when they're 20, then stops reproducing) and one factor of quadratic growth for minds (your mind can grow to size N squared by time N). This can continue until the accelerating expansion of the universe places any other galaxies beyond our reach, at which point some unimaginably huge superintelligent minds will, billions of years later, have to face some unpleasant problems, assuming physics-as-we-know-it cannot be dodged, worked around, or exploited.
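To make the accounting concrete, here is a toy sketch (arbitrary units; the assumption that the frontier advances one distance-unit per time-unit is mine): with cubic volume growth, linear population growth times quadratic per-mind growth exactly fills the available space.

```python
# Toy accounting only: arbitrary units, frontier assumed to advance 1 distance-unit per time-unit.
for t in [10, 100, 1000]:
    volume = t ** 3                  # colonized volume grows cubically
    population = t                   # one factor of linear growth for population
    mind_size = t ** 2               # each mind grows quadratically with time
    demand = population * mind_size  # total resource demand: t * t^2 = t^3
    print(t, volume, demand, demand <= volume)
```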

Meanwhile, PARTY ON THE OTHER SIDE OF THE MILKY WAY! WOO!

This can continue until the accelerating expansion of the universe places any other galaxies beyond our reach

If dark energy is constant, and if no-one artificially moves more galaxies together, then after 100 billion years, all that's left in our Hubble volume is a merged Andromeda and Milky Way. On a supragalactic scale, the apparent abundance of the universe is somewhat illusory; all those galaxies we can see in the present are set up to fly apart so quickly that no-one gets to be emperor of more than a few of them at once.

It seems no-one has thought through the implications of this for intelligence in the universe. Intelligences may seek to migrate to naturally denser galactic clusters, though they then run the risk of competing with other migrants, depending on the frequency with which they arise in the universe. Intergalactic colonization is either about creating separate super-minds who will eventually pass completely beyond communication, or about trying to send some of the alien galactic mass back to the home galaxy, something which may require burning through the vast majority of the alien galaxy's mass-energy (e.g. to propel a few billion stars back to the home syste... (read more)

-2XiXiDu
Also see this timeline of the far future.
[-][anonymous]220

If your civilization expands at a cubic rate through the universe

You're picturing the far-future civilization as a ball, whose boundary is expanding at a constant rate. But I think a more plausible picture is a spherical shell. The resources at the center of the ball will be used up, and it will only be cost-effective to transport resources from the boundary inwards to a certain distance. If the dead inner boundary expands at the same rate as the live outer boundary, we'll be experiencing quadratic, not cubic growth.
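As a toy check of the shell picture (the frontier speed and live-shell thickness below are arbitrary assumptions of mine): the live shell's volume grows like the square of its radius, and its share of the full ball keeps shrinking.

```python
import math

v, d = 1.0, 1.0  # assumed frontier speed and live-shell thickness (arbitrary units)

for t in [10, 100, 1000]:
    ball = (4 / 3) * math.pi * (v * t) ** 3  # everything inside the frontier
    shell = 4 * math.pi * (v * t) ** 2 * d   # only the live outer shell (valid for v*t >> d)
    print(t, round(ball), round(shell), round(shell / ball, 4))
```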

You know, you're right. I will change my reply accordingly henceforth - linear population growth, linear increase in energy usage / computing power, and quadratic increase in (nonenergetically stored) memories.

6Armok_GoB
Don't you get some pretty nasty latency on accessing those memories?
0faul_sname
You get a linear increase in low-latency memory and a quadratic increase in high-latency memory.
0Armok_GoB
And a linear increase in the latency of the high latency memory.
8Shmi
says a poly...
3Bart119
LW in general seems to favor a very far view. I'm trying to get used to that, and accept it on its own terms. But however useful it may be in itself, a gross mismatch between the farness of views which are taken to be relevant to each other is a problem.

It is widely accepted that spreading population beyond earth (especially in the sense of offloading significant portions of the population) is a development many hundreds of years in the future, right? A lot of extremely difficult challenges have to be overcome to make it feasible. (I for one don't think we'll ever spread much beyond earth; if it were feasible, earlier civilizations would already be here. It's a boring resolution to the Fermi paradox but I think by far the most plausible. But this is in parentheses for a reason).

Extending lifespans dramatically is far more plausible, and something that may happen within decades. If so, we will have to deal with hundreds or thousands of years of dramatically longer lifespans without galactic expansion as a relief of population pressures. It's not a real answer to a serious intermediate-term problem. Among other issues, such a world will set the context within which future developments that would lead to galactic expansion would take place. The OP's point needs a better answer.
0RomeoStevens
Offloading from Earth becomes very easy when brains are instantiated on silicon.
1Spectral_Dragon
That all sounds rather nice, but it sort of sets off warning lights in the back of my head - it sounds suspiciously like an Applause Light. My current issue would probably be regulation - it's possible this will all happen within a century. What if some superintelligent beings don't get to handle it in a few billion years? What if WE have to face it, with the less than rational people wanting to live as long as possible (we might want to as well, but we'd try to consider the consequences. Or I HOPE so, at least)? Then what? I'm not asking what anyone else should do. I'm asking what WE should do in this situation. Assuming it first becomes available in developed countries to those who can pay for it, and only gradually spreads, which to me seems most likely, what happens and what can we do about it?
0Shmi
Due to my innate, if misguided, belief in a fair universe, I hope that everyone can get their own baby universe to nucleate at will. The mechanism has been proposed before: all these "unimaginably huge superintelligent minds" have to do is to control this process of bubbling.
0faul_sname
I see potential problems here, at least for any humans or social nonhumans...
0Shmi
I suspect that being a demiurge of your own universe can be pretty enticing.
-1jhuffman
This "mechanism" provides no facility for spontaneous generation of new matter and energy resources.
[-][anonymous]240

Seriously, why is this post being downvoted? It's a legit question and the OP isn't making any huge mistakes or drawing any stupid conclusions. He's just stating some confusion and asking for links.

I actually feel pretty mad about this.

Upvoted after seeing the comment. I thought about downvoting when I came to the thread, and considered doing so for a minute or three. The problem I had was the title's tone of summarizing once and for all what "the consequences of transhumanism are" and then doing the job really, really poorly. I have a vague (but declining?) "my tribe" feeling towards transhumanism and don't like seeing it bashed, or associated with straw-man-like arguments.

I think a title that avoided this inclination could have been something like "Is immortalist demography bleak?" or maybe "I fear very long lives lead to resource crunches and high Gini coefficients" or you know... something specific and tentative rather than abstract and final. Basically, good microcontent.

One thing I've just had to get used to is that LWers are bad at voting. Comments I'm proud of are frequently ignored, and comments that I think are cheap tricks frequently get upvoted. Whatever people see first, right after an article, will generally get upvoted much more than normal. It's not because quality comes first when sorting by that, because if you look at ancient posts where the sort o... (read more)

[-][anonymous]170

Comments I'm proud of are frequently ignored, and comments that I think are cheap tricks frequently get upvoted.

This. Very much this. Not a lot of my stuff gets upvoted. Yesterday, I think, I had an existential crisis about it: "oh my god, do I suck?" Yes, that's stupid, but I often find it deeply disturbing that I am not a demigod.

LW is better than reddit, but yeah.

Another observation is that the downvoters seem to come out first. Posts (articles in discussion, specifically) that end up highly voted usually start out hovering around zero or going negative before rising. This post for example.

EDIT: Actually, I'd really like to see some graphs and stats on this from the LW database. Another way to get more useful data would be to allow people to cast a vote for which of their own comments they are most proud of, and see if this vote correlates with community vote.

6Spectral_Dragon
Thank you! That was what I was looking for in a title, I just couldn't seem to find the right words. I'll be editing the title in a minute. I also got pretty intimidated - within 10 minutes I'd lost about a fifth of my total karma and no one would tell me why. That seems to me another weakness - we are too quick to vote and seemingly not good enough at debating some topics and explaining WHY something deserves to be voted up/down.
2RobertLumley
I was very unhappy to see this downvoted as much as it was, although I thought it may have been because of something in the sequences I hadn't gotten to yet. But I especially try to avoid downvoting new people who are obviously making an effort, as you were. So I'm glad this corrected itself.
0[anonymous]
On rewarding effort rather than results and quality, see John Ringo, The Last Centurion, p. 369.
5steven0461
Letting go of the assumption that every user account's votes should have the same weight would probably go a long way. I'm not saying such a measure is called for right now; I'm just bringing it up to get people used to the idea if things get worse.
9RobertLumley
Letting go of the assumption that karma means much above -3 would also go a long way. Karma is really just here to keep trolls away. If there are vast differences in karma scores posted from around the same time, then maybe that means something.

I know personally that the comments and posts I am most proud of are, generally speaking, my least upvoted ones. To consider an example, this and this were posted around the same time, both to discussion. The former initially received vastly more karma than the second. But the former, while amusing, has virtually no content. The second is a well reasoned, well supported post. Did the former's superior karma mean that it was a better article? Obviously not. That's why the second was promoted and, once it was, eventually overtook the former.

Another obvious example is the sequences. Probably everyone here would agree that at least 75 of the best 100 posts on LW are from the sequences. But, for the most part, they sit at around 10-20 karma. Those outside that range are the extraordinarily popular ones, which are linked to a lot, and sit at probably around 40 karma. This is not an accurate reflection of their quality versus other articles that I see around 10-40 karma.

I really try (but don't always succeed) to vote based on "Is this comment/post at a higher or lower karma score than I think it should have?" If everyone used this, then karma scores might have some meaning relative to each other. But I don't think many people use this strategy, and the result is that karma scores are skewed towards more-read and funnier posts, which generally tend to be shorter and less substantial.
6Vladimir_Nesov
When a comment I make is not upvoted to at least +3, I give a moment's consideration to the question of what I did wrong (and delete some of the comments that fail this test).
0Will_Newsome
Some of your comments should be useful to the elite but not the masses. Such comments are only sometimes voted to +3. E.g., IIRC you regularly make decision theory comments that don't go to +3, so it seems you don't follow this rule even when talking about important things. (It's only semi-related, but who cares about the votes of the masses anyway? You're here to talk to PCs and potential PCs, which is less than 1% of the LessWrong population. You're beyond the point of rationality where you have to worry about not caring about NPCs becoming a categorical rule followed by everyone. On that note, you should care about the opinion of the churchgoer more, and the LessWronger less. Peace out comrade.)
1D2AEFEA1
Would it be difficult (and useful) to change the voting system inherited from reddit and implement one where casting a vote would rate something on a scale from minus ten to ten, and then average all votes together?
3Emile
The biggest problem wouldn't be technical; it would be the lower usability and the increased focus on karma. Also, averaging would be bad; showing the median vote instead would reduce the appeal of always putting +10/-10.
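To make the median point concrete, a toy example (not a proposal for LW's actual mechanics): a single voter switching from an honest vote to the maximum moves the mean but can leave the median untouched.

```python
from statistics import mean, median

honest    = [2, 3, 4, 5, 6]   # hypothetical honest votes on a -10..+10 scale
strategic = [2, 3, 4, 10, 6]  # same ballots, but the voter who gave 5 maxes out at 10

print(mean(honest), median(honest))        # 4.0  4
print(mean(strategic), median(strategic))  # 5.0  4  (the mean shifts, the median doesn't)
```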
0RobertLumley
Difficult? Probably not. Useful is debatable. I'm not sure that the Karma system is important enough to consider in much detail. I just don't see much low hanging fruit there.
0A1987dM
So do I.
0TheOtherDave
I wonder how hard it would be to build a LW addon (like the antikibbitzer) that replaced numeric readouts with a tier label (e.g. "A" >=10, "F" for <=-3, etc.), and how using that would affect my experience of LW.
0RobertLumley
I think that would be pretty awkward, since posts would start in the "C" range. I think most people here would consider getting a "C" bad. But tiers make for an interesting concept, if you move away from grades.
0Emile
Use the fact that the date is also displayed to give a "?" instead for posts less than a day old, or have thresholds that move a bit for the first couple days.
0TheOtherDave
Sure; the specific tier thresholds are secondary and might even be user-definable parameters, so people don't need to know what tier they're in on my screen, if knowing that would make them feel bad.
0D2AEFEA1
I would second that. On the other hand, how would you decide what weight to give to someone's vote? Newcomers vs older members? Low vs high karma? I'm not sure a function of both these variables would be sufficient to determine meaningful voting weights (that is, I'm not sure such a simple mechanism would be able to intelligently steer more karma towards good quality posts even if they were hidden, obscure or too subtle).
0evand
What if the site just defaulted to a random sort order, so different people are presented with different comments first? That would still tend to bias toward older comments getting higher presentation rank. I'm not sure that's such a bad thing, though.
3jhuffman
I still find myself tempted to make fun of people who are just today learning the lesson of that comic - e.g. those original down-voters.

I actually feel pretty mad about this.

This question shouldn't be downvoted as much as it is - it is a legitimate question, although it would probably go better in the open thread than as its own discussion post.

Yes, this has been discussed a fair bit - the main argument in most transhumanist circles when this comes up is that everyone will get the benefits and that birth rates will go down accordingly (possibly by enforcement). In that regard, there's a fair bit of data showing that human birth rates go down naturally as lifespan goes up. There are other responses, but this is the most common. It is important to realize that this issue is unlikely to need to be seriously addressed for a long time.

8RobertLumley
The volume of comments generated indicates to me that it is too large for an open thread.
6[anonymous]
Stinky motivated stopping. Birth rates aren't the only problem; personal growth causes the same problem. Agree about not actually needing to address this yet. Our future selves will be much smarter and still have plenty of time.
2Dolores1984
Speed of light latency puts limits on the maximum size of a brain. Or, rather, it enforces a relationship between the speed of operation of a brain and its size. At a certain point, making a sphere of computronium bigger no longer makes it more effective, since it needs to talk to working components that are physically distant. Granted, it's not a small limit, especially if people want a bunch of copies of themselves.
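A rough way to quantify that relationship (the clock rates below are purely illustrative): if a brain has to complete a round trip of signals across itself each clock tick, its radius is capped at about c divided by twice the tick rate.

```python
C = 3.0e8  # speed of light, m/s

def max_radius_m(clock_hz):
    # Largest radius at which a signal can cross the brain and come back within one tick.
    return C / (2 * clock_hz)

for hz in (1e2, 1e6, 1e9):  # hypothetical clock rates, just to show the scaling
    print(f"{hz:.0e} Hz -> radius limit ~ {max_radius_m(hz):.3g} m")
```

So a brain running at human-ish speeds could in principle be thousands of kilometres across, but pushing the clock rate up shrinks the coherent volume fast.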
0Spectral_Dragon
Good point. Should I move it? If so, I don't know how. I'd really like to see anyone here who really knows what they're talking about (I don't, for example, but I want to know) discuss it here. Currently looking for both a plausible situation and solution.
[-]knb100

A lot of people seem to be shrugging this question off, saying basically, "Transhuman minds are ineffable, we can't imagine what they would do." If we have some kind of AI god that rapidly takes over the world after a hard takeoff, then I think that logic basically applies. The world after that point will reflect the values implemented into the AI god.

Robin Hanson has described a different scenario, which I take somewhat more seriously than the AI god scenario.

This long competition has not selected a few idle gods using vast powers to indulge arbitrary whims, or solar system-sized Matrioshka Brains. Instead, frontier life is as hard and tough as the lives of most wild plants and animals nowadays, and of most subsistence-level humans through history. These hard-life descendants, whom we will call “colonists,” can still find value, nobility, and happiness in their lives. For them, their tough competitive life is just the “way things are” and have long been.

Hanson is describing a return to the Malthusian condition that has defined life since the beginning. The assumptions seem fairly strong to me:

  1. Competition won't cease.
  2. The Darwinian drive to reproduce will remain (be
... (read more)
[-]Shmi90

This reminds me. An interesting question is, assuming constant mass per person, how long until the speed of light becomes a limiting factor? I.e. given a fixed growth rate, at what total population would the colonization speed need to approach the speed of light just to keep the number of humans per cubic parsec of space constant? It is clear that this will happen at some point, given the assumptions of constant birth rate and constant body mass, because the volume of colonized space only grows as time cubed, while the population grows exponentially.

Here is a back-of-... (read more)
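The rest of the calculation is behind the fold, but here is one way such an estimate could go (every number below is a placeholder of mine, not a figure from the truncated comment): with N(t) = N0·e^(g·t) and a fixed density, the colonized radius grows as e^(g·t/3), so the frontier speed is (g/3)·r and reaches light speed once r = 3/g light-years.

```python
import math

g       = 0.01   # assumed population growth rate per year
density = 1e10   # assumed target density: people per cubic light-year

r_max = 3 / g                                     # radius (ly) at which the frontier must move at light speed
n_max = density * (4 / 3) * math.pi * r_max ** 3  # population when that radius is reached

print(f"frontier hits light speed at r ~ {r_max:.0f} ly, total population ~ {n_max:.1e}")
```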

Other physical angles:

If the economy continues to grow at roughly the present rate, using more energy as it does so, when will we be consuming the entire solar energy output each year? And if this energy growth happens on the surface of the Earth and heat dissipation works in a naive way, then how long until the surface of the Earth is as hot as the Sun? Answers: a bit less than 1400 years from now to be eating the Sun, and a bit less than 1000 years from now until Earth's surface is equally hot, respectively. Blog post citation!
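For anyone who wants to redo the first number, a back-of-the-envelope version (the growth rate and current consumption below are rough assumptions of mine, not necessarily the blog post's exact inputs):

```python
import math

current_power = 1.8e13   # W, rough present-day human energy use (assumed)
solar_output  = 3.8e26   # W, approximate total luminosity of the Sun
growth        = 0.023    # assumed annual growth rate of energy use

years = math.log(solar_output / current_power) / math.log(1 + growth)
print(f"~{years:.0f} years at {growth:.1%}/yr to consume the Sun's entire output")
```

That lands in the same ballpark as the "bit less than 1400 years" figure.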

The same blogger did a followup post on the possibility of economic growth that "virtualizes our values" (my terminology, not the blogger's; he calls it "decoupling") so that humanity gets gazillions of dollars worth of something while energy use is fixed by fiat in the model. Necessarily the "fluffy stuff" (his term) somehow takes over the economy such that things like food are negligible to economic activity. With 5% "total economy" growth and up-to-an-asymptote energy growth, by 2080, 98 percent of the value of the economy is made up of "fluffy stuff", which seems to imply that real world food and real w... (read more)

4steven0461
See an earlier discussion for some more criticism of the blogger's claim.
1mwengler
Ems or other more efficient versions of living intelligence just put off the exponential Malthusian day of reckoning by 100 years or 1000 years or 10000 years. As long as you have reproducing life, its population will tend to or "want to" grow exponentially, while with technical improvements, I can't think of a reason in the world to expect them to be exponential.

I also wonder at what point speciation becomes inevitable or else extremely likely. Presumably in a world of ems with 10^N more ems than we now have people, and very fast em-thinking speeds restricting their "coherence length" (the distance over which they have significant communication with other ems within some unit of time meaningful to them) to perhaps 10s of km, we would, it seems, have something like 10^M civilizations averaging 10^(N-M) times as complex as our current global civilization, with population size standing in as a rough measure of complexity. Whether ems want to compete or not, at some point you will have slightly more successful or aggressive large civilizations butting up against each other for resources.

In the long run, I think, exponentials dominate. This is the lesson on compound interest I take from Warren Buffett. Further, one of the lessons I take from Matt Ridley's "Rational Optimist" is that the Malthusian limit is the rule and the last 2 centuries saw us nearly hitting it a few times, with something like the "Green Revolution" coming along in a "just in time" fashion to avoid it. Between what Hanson has to say and what Ridley has to say, and what Buffett has to say (about compound interest, i.e. exponentials), it sure seems likely that in the long run Malthus is the rule and our last one or two centuries have been a transition period between Malthusian equilibria.

The fertility rate is far more important than mortality. You can calculate for yourself that even if humans were immortal, an average fertility below 1 child per parent does not lead to exponential population growth, while average fertility above 1 child per parent means exponential growth even if people keep dying at seventy.
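A minimal way to check this with a toy generation-by-generation model (the cohort size and the "mortals live about 3 generations" mapping for dying at seventy are arbitrary choices of mine):

```python
def population(fertility, generations, lifespan_gens=None):
    # fertility: average children per parent per generation.
    # lifespan_gens: generations a cohort survives (None = immortal).
    births = [100.0]                      # size of the founding cohort (arbitrary)
    for _ in range(generations):
        births.append(births[-1] * fertility)
    if lifespan_gens is None:
        return sum(births)                # immortals: every cohort is still alive
    return sum(births[-lifespan_gens:])   # mortals: only the youngest cohorts survive

# Immortal but sub-replacement fertility: the population levels off.
print([round(population(0.9, g)) for g in (10, 50, 100)])
# Mortal (cohorts live ~3 generations) but above replacement: still exponential.
print([round(population(1.1, g, lifespan_gens=3)) for g in (10, 50, 100)])
```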

What makes you think we have meaningful opinions to share on the options available to beings that are pushing the carrying capacity of the galaxy?

2Spectral_Dragon
I don't, but I'm NOT going to stop thinking about something just because smarter beings are considering it as well. The problem is that I'm worried we'll reach the point of having to choose between long lives / no artificial reduction in birth rates / an elite with an advantage while most people carry on as usual, far too soon. We won't have progressed to superintelligence when this first becomes an issue. It might even be that we've not left the solar system by that time. And most people will definitely be more against letting a possible AI shape the fate of mankind than against using science (it's a vague term, but I'm not sure what will increase our lives most first - tech, drugs or gene splicing). So we might have to face this issue, even within our own lifetimes. While I still have only a crude understanding, I think it's possible we have to face something like this. At the very least, the amount of resources we have even without increased lifespans will pose problems.
-3Bart119
I'm with you on thinking this is a serious issue. I also think the LW community has done a very poor job of dismissing all such concerns, often with derision. A post I made on the subject got downvoted into oblivion, which is OK (community standards and all). I accept some of the criticisms, but expect to bring the issue up again with them better addressed.
0JoshuaZ
There were many other reasons to downvote your post, as discussed in a fair bit of detail in the comments.
0Bart119
I understand that. I said it was OK. But I thought Spectral_Dragon in particular might be interested, flaws and all. My observation of derision of such concerns is not about my post, but many other places which I have seen when researching this.
0Spectral_Dragon
It's interesting, but doesn't cover the points I'm most concerned about - within a century, it's likely this will become a problem and birth/death rates will have to be regulated. And given that not everyone is rational... how do we do it? Cost, promising not to have kids, or what? Also, I agree that the human mind might not function at optimum efficiency that long. It's a side point, and can probably be fixed, but... we're NOT adapted to live more than a few millennia at best. Maybe even a few centuries. Though this is only speculation.
[-][anonymous]70

I think it's a biggish problem to solve.

The first thing that makes it easier is that I see no reason we ought to increase the number of people very much. I think N billion is enough of a party already. Once we all know each other, maybe we can talk about making more. (for the people who are still alive)

Of course that doesn't make the problem go away. If we want to grow as people continuously, we will eventually hit limits. Especially since we might decide that exponential growth is the only acceptable thing.

We might have to accept that there is a non-infin... (read more)

0mwengler
If you are part of a group that wants to grow slowly and there are even two intelligences out there who are not on board with your program, your group will have to have a CEV which kills off the dissenters. Otherwise, "compound interest" growth rates of the dissenters will turn your slow-growers into a footnote, a vanishingly small "also-ran" in the evolution of transhumanists.
1[anonymous]
This is why we need FAI... There are much better solutions to group disagreements than murder.
-1mwengler
Sure, alternatives to murder. But are there alternatives to "kill off"? How do you beat a population which consistently out-reproduces your population? Either you bring their growth rate below yours, or you lose. However much smarter an FAI might be than a UNI (Unfriendly Natural Intelligence), the laws of compound interest would apply to both.
1wedrifid
Confine them in a finite space. Wait.
2mwengler
How do you beat a population which has all the capabilities of your population, EXCEPT they out-reproduce your population? "Confine them in a finite space. Wait."

You assume here that your population has advantages over the population you are confining. Your population has the power, the additional intellect, the grandfathered-in control of more resources, perhaps a persistent AND ENFORCED information advantage over the other population. Without a significant asymmetry in your population's favor, when you confine them in a finite space, their population grows beyond yours, and they now have the motivation and the ability to beat your population, to resolve in their favor whatever resource or information asymmetry has previously allowed your population to dominate them.

We are not talking about humans vs cockroaches here. Or even if we are, the answer is the same. If humans became threatened by cockroach populations, we would (and do) simply kill them. We also come up with more clever ways of controlling their population; we introduce things into their environment which bring their population growth rate way down, even negative (i.e. we kill them?).

But transhumans A vs transhumans B, where A has decided to reproduce slowly? Unless B makes some correspondingly self-limiting decision, all other things being equal, population B will eventually dominate. As I write this I do realize my claim was that population A would need to be willing to kill defectors, to kill people who were obviously population B. I was also assuming an asymmetry: that population A would be willing to kill B and B not willing, or able, to kill A. The scenario I was thinking of was that the slow-reproducers at least for now have the population advantage and therefore the power advantage. My underlying idea then would be that IF you propose a slow-growth policy for transhumans, AND you wish to have this proposal survive for a long time, THEN you must PREVENT a significant population of transhuman
1mwengler
This is just a version of "kill them" isn't it?

It is unlikely that society will ever neatly divide into "haves" and "have-nots" - be suspicious of sharp divisions. There should be lots of boundary cases, and probably a smooth gradient.

The primary question for answering these sorts of questions, I think, is whether modification to existing agents is more or less effective than modification to new agents. It seems more likely to me that genetic engineering can radically increase human lifespans than that interventions in already developed humans can, and if that's the case, then there won't rea... (read more)

My essay on this (go to the original article to see the hyperlinks to some of the references, as I'm too lazy to copy them here):

One of the most common objections against the prospect of radical life extension (RLE) is that of overpopulation. Suppose everyone got to enjoy an eternal physical youth, free from age-related decay. No doubt people would want to have children regardless. With far more births than deaths, wouldn't the Earth quickly become overpopulated?

There are at least two possible ways of avoiding this fate. The first is simply having c

... (read more)
2mwengler
The long-lived slowly reproducing transhumans have to be willing to kill off "dissenters", long-lived transhumans who reproduce at a faster rate, or else they will be a footnote in our evolution, the Neanderthals or also-rans on the path to wherever we get. Either you out-reproduce your competitors or you kill them, or you lose.

It seems to me a lot of future scenarios here depend on a kind of top-down imposed control and uniformity you just don't see among intelligent competitors. It only takes a small number of escapees from the control who pick a strategy that eats the lunch of the top-down controlled group to bring that whole thing to an end in finite time. Whatever you propose has to be intensively successful against conceivable variations. Long-lived slow-reproducers MAY be, if they are ruthless against other intelligences that are not so measured as they are in their reproduction. Are you ready for that?
0Kaj_Sotala
If some people wish to have lots of children and are willing to endure having an otherwise lower standard of living because of that, that's fine by me. So far birth rates haven't been going down because of top-down control, but because of people adapting to changed economic conditions.
4knb
This strikes me as very naive. Birthrates have been declining for a few decades (in some countries) and you're trying to extrapolate this trend out into the distant future. Meanwhile, there are already developed countries that buck this trend. Qatar is richer than any European country, and Qataris have 4 kids per woman.

Imagine you had a petri dish of bacteria, and you introduce a chemical that stops reproduction in 98% of the bacteria inside it; 2% of the bacteria have resistance to this chemical. What you would see is a period of slowing growth, as most of the bacteria stop reproducing. Then there would be a period of decline in the bacteria population, as the non-reproducing bacteria start dying off. Finally, there would be a return to exponential growth, as the 2% fill the petri dish left empty by the sterilized bacteria. All that is necessary is for a small percentage to be immune and to be able to pass their immunity to their children with some consistency.
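A toy version of that petri-dish dynamic (all parameters below are made up, just to show the dip and the return to exponential growth):

```python
# Toy model: sterilized majority dies off while the resistant minority keeps compounding.
sterile, resistant = 98.0, 2.0          # initial sub-populations (arbitrary units)
death_rate, growth_rate = 0.05, 0.10    # assumed per-step rates

for step in range(1, 81):
    sterile *= (1 - death_rate)
    resistant *= (1 + growth_rate)
    if step % 20 == 0:
        print(step, round(sterile + resistant, 1))
```

The total falls at first, then returns to exponential growth once the resistant 2% dominates, as described.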
0Kaj_Sotala
Well, Qatar's birthrate is in a decline, too. But that's beside the point, since I don't actually disagree with you. Both your comment and mwengler's reply strike me as arguing a different point from what I was making in the essay.

I was primarily trying to say that life extension won't cause an immediate economic disaster - that yes, although it will impact global demographics, it will take many decades (maybe centuries) for the impact of those changes to propagate through society, which is plenty of time for our economy to adjust. Dealing with gradual change isn't a problem; dealing with sudden and unanticipated shocks is. We've successfully dealt with many such gradual changes before.

In contrast, you two seem to be making the Malthusian argument that in the long term, any population will expand until it reaches the maximum capacity of the environment and each individual makes no more than a subsistence living. And I agree with that, but that has little to do with life extension, since the very same logic would apply regardless of whether life extension was ever invented. Yes, life extension may have the effect that we'll hit population and resource limits faster, but we'd eventually run into them anyway.

The main question is whether life extension would accelerate the expansion towards Malthusian limits enough to make the transition period much more painful than it would otherwise be, and whether that added pain would outweigh the massive reductions in human suffering that age-related decline causes - and I don't see a reason to presume that it would.
3knb
I know this is tangential, but I want to point out why the statistic you used is deceptive. Qatar has a huge foreign population (80% of the population) with much lower birthrates than the native Qataris. Four kids per woman for Qataris, and 2 kids per woman for resident foreigners. So the decline in birth rates is mainly caused by 2 factors related to immigration. The first is that the resident foreigners have relatively fewer women (most migrant workers are males) and therefore lower the "births per 10,000". Second, the women that do immigrate in have far lower birth rates than Qataris.

The more important element here that I disagree with is this: There are externalities here. When people make lots of kids, it doesn't just crowd out resources for their parents. It crowds them out for everybody. At some point, more kids means higher prices (or, in a command economy, smaller rations) for everyone else.

I am somewhat sympathetic to the Hansonian sentiment that having a huge number of poor people is better than having a tiny number of idle gods, and that poor people can be happy. But I do flinch away from the idea that human-level minds should be like dandelion seeds for profligate, reproduction-obsessed future ems.
0Kaj_Sotala
Ah. Thanks for the correction. At some point in the far future, yes. But for now, more kids are AFAIK considered to have positive externalities, and barring uploading or the Singularity that looks to be the case for at least a couple of hundred years. (Of course, discussing developments a couple of hundred years in the future while making the assumption that we'll remain as basically biological seems kinda silly, but there you have it.)

I decided a while back that the next time there's a LW census, I'm going to suggest adding a question like this: "Ceteris NOT being paribus, if you could make all the people in the world invulnerable to diseases, cellular degeneration and similar aging-related problems by pushing a button, would you push it?" I would be very interested in those results.

How would this be achieved? Somehow limiting lifespan, or children, assuming it's available to a majority? Or would this lead to a genespliced, technologically augmented and essentially immortal

... (read more)
0DanArmak
I would push this button, and I predict so would almost everyone else here. (So I'm interested in hearing why someone wouldn't.) Reasons:

  1. This is the only immediately available way to make myself and my loved ones immortal. Compared to that gain, also making everyone else immortal is much less important, whatever the eventual results. This is sufficient reason in itself.
  2. Population has been limited in the past by Malthusian limits and might be again. But Malthusian limits don't mean fewer people are born. They mean just as many are born, but most of them starve to death when young (simplifying). Making people immortal wouldn't change that basic behavior. What would change it is making people richer and less religious - each of these is very strongly correlated with number of children. Incidentally, making people biologically immortal would help greatly to wean them from the allegiance of anti-science, anti-progress, and religion.
  3. All other proposed negative effects of making everyone immortal are mostly speculation. It's as easy to speculate on positive effects. E.g., "people care more about the future and live longer so they become wiser - and so they help fix this problem and institute licensing to have children".
0Gastogh
The reason why perhaps not push the button: unforeseeable (?) unintended consequences. I expect point number 1 would weigh heavily in anyone's mind when making the choice, but it might turn out to be a harmfully biased option, assuming it even works.

As to point two: in the absence of diseases and aging, the population would hit its limits along some other front. Starvation is only the obvious end of the line; the catch is what we might expect to see on the way there, such as rising global tensions, civil unrest, wars (gloves off or otherwise), accelerated environmental decay - all the things that may not seem like such pressing problems now. We could with perfect seriousness ask whether the current state of affairs isn't safer for humanity at large than the one after pressing the button. (I'll confess: I would use people's answers to the original question mostly as a proxy measurement for their general optimism.)

Frankly, I'd argue the exact reverse of point 3. IMO, it takes heavy speculation to avoid any of the risks I mentioned, and speculating on the positive effects is what seems questionable. The only immediate species-wide benefit would be that world-class expertise in all fields of science suddenly stops pouring out of the world at a steady pace. Anything like "people will care more about the future" supposes fairly fundamental changes in how people think and behave. I expect birth control regulations would be passed, but would you expect to see them work? How would you expect to see them enforced? My guess is: not in worldwide peace and mutual harmony.
0TheOtherDave
Are you also considering the unforeseen unintended consequences of not pushing the button and concluding that they are preferable? (If so, can you clarify on what basis?) Without that, it seems to me that uncertainty about the future is just as much a reason to push as to not-push, and therefore neither decision can be justified based on such uncertainty.
0Gastogh
Yes. It's not that the world-as-is is a paradise and we shouldn't do anything to change it, but pushing the button seems like it would rock the boat far more than not pushing it. Where by "rock the boat" I mean "significantly increase the risk of toppling the civilization (and possibly the species, and possibly the planet) in exchange for the short-term warm fuzzies of having fewer people die in the first few years following our decision." Uncertainty being just as much a reason to push as not-push seems like another way of saying we might as well flip a coin, which doesn't seem right. Now, I'm not claiming to be running some kind of oracle-like future extrapolation algorithm where I'm confident in saying "catastrophe X will break out in place Y at time Z", but assuming that making the best possible choice gets higher priority than avoiding personal liability, the stakes in this question are high enough that we should choose something. Something more than a coin flip.
2TheOtherDave
If uncertainty is just as much a reason to push as not-push, that doesn't preclude having reasons other than uncertainty to choose one over the other which are better than a coin flip. The question becomes, what reasons ought those be? That said, if you believe that pushing the button creates greater risk of toppling civilization than not-pushing it, great, that's an excellent reason to not-push the button. But what you have described is not uncertainty, it is confidence in a proposition for as-yet-undisclosed reasons.
0Gastogh
I'm starting to feel I don't know what's meant by uncertainty here. It is not, to me, a reason in and of itself either way - to push the button or not. And not being a reason to do one thing or another, I find myself confused at the idea of looking for "reasons other than uncertainty". (Or did I misunderstand that part of your post?) For me it's just a thing I have to reason in the presence of, a fault line to be aware of and to be minimized to the best of my ability when making predictions.

For the other point, here's some direct disclosure about why I think what I think:

* There's plenty of historical precedent for conflict over resources, and a biological immortality pill/button would do nothing to fix the underlying causes behind that phenomenon. One notable source of trouble would be the non-negligible desire people have to produce offspring. So, assuming no fundamental, species-wide changes in how people behave, if there were to be a significant drop in the global death rate, population would spike and resources would rapidly grow scarcer, leading to increased tensions, more and bloodier conflicts, accelerated erosion, etc.
* To avoid the previous point, the newfound immortality would need to be balanced out by some other means. Restrictions on people's rights to breed would be difficult to sell to the public and equally difficult to enforce. Again, it seems to me that the expectation that such restrictions would be policed successfully assumes more than the expectation for those restrictions to fail.

Am I misusing the Razor when I use it to back these claims?
0TheOtherDave
Perhaps I confused the issue by introducing the word "uncertainty." I'm happy to drop that word. You started out by saying "The reason why perhaps not push the button: unforeseeable (?) unintended consequences." My point is that there are unforeseen unintended consequences both to pushing and not-pushing the button, and therefore the existence of those consequences is not a reason to do either. You are now arguing, instead, that the reason to not-push the button is that the expected consequences of pushing it are poor. You don't actually say that they are worse than the expected consequences of not-pushing it, but if you believe that as well, then (as I said above) that's an excellent reason to not-push the button. It's just a different reason than you started out citing.
[-][anonymous]10

The novels Red Mars, Green Mars and Blue Mars by Kim Stanley Robinson bring up this question and propose an answer, offered here in a simple rot13.com cypher for those who haven't read the books yet.

Gubfr jub erprvir gur ybatrivgl gerngzrag ner fgrevyvmrq ng gur fnzr gvzr.

2[anonymous]
In itself that doesn't help at all if people can reproduce before the treatment.
0[anonymous]
That is also addressed in the novels, something I might have added previously.
0billswift
That is not exactly a new idea; many stories have used it. And Harry Stine's A Matter of Metalaw had that as an automatic consequence of the immortality treatments.

To me, the real turning point is if and when we learn how to precisely control our personalities - in short, reengineering human nature itself. Of course there's the nature vs nurture matter in this, not to mention all the potential factors that even go into a personality, let alone alter it. But I'm 100% against uncontrolled transhumanism, or even mere unregulated genetic modification or augmentation.

Though, let's suppose there was a way to correct obviously harmful behavioral defects with at least a partial genetic basis, particularly behavior every so... (read more)

1mwengler
The dissonance is between the modifications you would like to see and the modifications which will dominate. Even if 99.999% want to see a kinder, gentler, less psychopathic human, if there is one a-hole in the bunch who turns up psychopathic aggression and reproduction drive in such a way that the resulting creature does pretty well, his result will dominate. I would bet that personalities that will not kill off the other creatures who are genetically dangerous to them will never, over time, be on the winning side.
1billswift
Not dominate, but force a mixed strategy; as I pointed out in another comment last week: In game theory, whether social or evolutionary, a stable outcome usually (I'm tempted to say almost always) includes some level of cheaters/defectors. Which requires the majority to have some means of dealing with them when they are encountered.