Comment author: JoshuaZ 31 May 2012 05:10:30PM 0 points [-]

There were many other reasons to downvote your post, as discussed in a fair bit of detail in the comments.

Comment author: Bart119 31 May 2012 05:19:14PM *  0 points [-]

I understand that. I said it was OK. But I thought Spectral_Dragon in particular might be interested, flaws and all. My observation about derision of such concerns refers not to my post but to many other places I came across while researching this.

Comment author: Spectral_Dragon 31 May 2012 12:15:37PM 2 points [-]

I don't, but I'm NOT going to stop thinking about something just because smarter beings are considering it as well.

The problem is that I'm worried we'll reach the point of having to choose among long lives, no artificial reduction in birth rates, or an elite with an advantage while most people carry on as usual, far too soon. We won't have progressed to superintelligence when this first becomes an issue. It might even be that we've not left the solar system by that time. And most people will definitely be more opposed to letting a possible AI shape the fate of mankind than to the thought of using science (a vague term, but I'm not sure what will increase our lifespans first: tech, drugs, or gene splicing).

So we might have to face this issue, even within our own lifetimes. While I still have only a crude understanding, I think it's possible we have to face something like this. At the very least, the amount of resources we have even without increased lifespans will pose problems.

Comment author: Bart119 31 May 2012 04:40:46PM -1 points [-]

I'm with you on thinking this is a serious issue. I also think the LW community has done a very poor job with such concerns, often dismissing them with derision. A post I made on the subject got downvoted into oblivion, which is OK (community standards and all). I accept some of the criticisms, but expect to bring the issue up again with them better addressed.

Comment author: Eliezer_Yudkowsky 31 May 2012 07:07:31AM 24 points [-]

If your civilization expands at a cubic rate through the universe, you can have one factor of linear growth for population (each couple of 2 has exactly 2 children when they're 20, then stop reproducing) and one factor of quadratic growth for minds (your mind can go as size N squared with time N). This can continue until the accelerating expansion of the universe places any other galaxies beyond our reach, at which point some unimaginably huge superintelligent minds will, billions of years later, have to face some unpleasant problems, assuming physics-as-we-know-it cannot be dodged, worked around, or exploited.
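The accounting in the comment above can be sketched in a few lines. This is my own illustration (with unit constants assumed), not anything from the original: a cubically growing reachable volume exactly covers linear population growth times quadratic per-mind growth, since t × t² = t³.

```python
# Toy sketch of the growth accounting: if reachable resources grow as t^3,
# they can support a population growing as t (each couple has exactly 2
# children, then stops) whose minds each grow as t^2.

def reachable_volume(t, c=1.0):
    """Resources within reach after time t, growing cubically."""
    return c * t**3

def demand(t, c=1.0):
    """Linear population times quadratic per-mind size."""
    population = t    # linear growth
    mind_size = t**2  # each mind grows as N^2 with time N
    return c * population * mind_size

for t in [10, 100, 1000]:
    assert reachable_volume(t) == demand(t)  # demand exactly tracks supply
```

The point being only that the two budgets match term-for-term, so neither population nor mind growth has to stop until expansion itself does.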

Meanwhile, PARTY ON THE OTHER SIDE OF THE MILKY WAY! WOO!

Comment author: Bart119 31 May 2012 04:30:17PM 2 points [-]

LW in general seems to favor a very far view. I'm trying to get used to that, and accept it on its own terms. But however useful a far view may be in itself, a gross mismatch in farness between views that are taken to be relevant to each other is a problem.

It is widely accepted that spreading population beyond earth (especially in the sense of offloading significant portions of the population) is a development many hundreds of years in the future, right? A lot of extremely difficult challenges have to be overcome to make it feasible. (I for one don't think we'll ever spread much beyond earth; if it were feasible, earlier civilizations would already be here. It's a boring resolution to the Fermi paradox but I think by far the most plausible. But this is in parentheses for a reason).

Extending lifespans dramatically is far more plausible, and something that may happen within decades. If so, we will have to deal with hundreds or thousands of years of dramatically longer lifespans without galactic expansion as a relief of population pressures. It's not a real answer to a serious intermediate-term problem. Among other issues, such a world will set the context within which future developments that would lead to galactic expansion would take place.

The OP's point needs a better answer.

Comment author: Bart119 31 May 2012 04:03:22PM *  0 points [-]

I have no commitment to 'rational' in the sense OP wants to eliminate. But what shorthand might one use for "applying the sorts of principles that are the general consensus among the LW community, as best I understand them"?

Comment author: shminux 25 May 2012 06:33:28PM *  4 points [-]

Most of the issues you raise have been debated here and elsewhere many times, yet you did not provide a single link. Either you did not bother familiarizing yourself with the current state of the debate, or you think that you are the first one to come up with such ideas. Or maybe this OP is just a rant. None of those is a particularly good thing, so you get my downvote.

Comment author: Bart119 25 May 2012 07:30:52PM 2 points [-]

OK. Forgive my modest research skills. I've certainly seen lots of posts that assume that indefinite lifespans are a good thing, but I had never seen any that made contrary claims or rebutted such claims. I would welcome pointers to the best such discussions. It was not intended as a rant.

Comment author: Bart119 25 May 2012 06:38:17PM 1 point [-]

Interesting. Downvoted into invisibility. Because of disagreement on conclusions, or form? I suppose an assertion of over-application of rationality is in a sense off-topic, but not in the most important sense. And of course no one has to accept the intuitions (which qualify as Bayesian estimates), but are they so far off they're not worth considering?

Over-applying rationality: Indefinite lifespans

-9 Bart119 25 May 2012 05:25PM

 

UPDATE: One commenter said that arguments against the desirability of indefinite lifespans, and their rebuttals, have appeared before on LW and elsewhere. I am very interested in links to the best such discussions. If I'm going over old ground, I would much appreciate a kind soul pointing me to the prior art.


 

-----------

 

I am very impressed with this site in its goal of outlining cognitive biases and seeing how they apply in everyday situations. When you're trying to decide how to spend money to alleviate human misery, it works. Yeah, it's better to save 50,000 people than 5,000. The two alternatives concern the same moral intuitions. When faced with a specific choice among alternatives, you may find that the tools of rationality will apply and tell you what to do, which might be contrary to what you would have done without such analysis.

 

But when I see people trying to use Bayesian analysis for bigger questions beyond this, I think there is a substantial danger of being led astray by the method. When you can't find a clear way to analyze the situation and you are making low-confidence probability estimates of alternative futures and their utility, you'd do better to just put your rationality toolbox back on the shelf and decide the way you've always decided: gut feelings, intuition, doing what everyone else does, etc.

 

Let's take as a case study the popular view on LW that living as long as possible is a good thing. First, within the range of currently common lifespans, it's a good thing to live a longer, healthier life; that is uncontroversial.

 

But judging from the LW posts I've read, the prospect that science could reach a point where people could live indefinitely long is hailed as a great and noble goal. I think it would be terrible.

 

First, let me distinguish an indefinite lifespan from true immortality. Is there anyone here who thinks true immortality is within reach? The sun will go red giant, making earth uninhabitable. If we hop from star to star, we get a little longer. But there's stuff like heat death and entropy and all. Not to mention the accumulation of small, mundane risks over a very long time. Eternity is one friggin' long time.

 

If you don't have true immortality, you have a longer lifespan, and then you die. You still have to face the same profoundly unsettling issue. One wry formulation might be: whenever you do die, you're saving yourself the trouble of dying later. Different lifespans all end with the same unsettling matter of personal extinction. (Other thought: mortality is the most salient and immediate roadblock to finding a more satisfying meaning in life, but it's just the first one; if it were removed we'd find others beyond.) If you live 500 years instead of 100, you haven't achieved anything special. You haven't cheated death. You've just got an extra 400 years of living. The mundane stuff of eating, sleeping, thinking, seeing beautiful sunsets, chatting with friends, etc., and of course the less pleasant parts too.

 

The ecological integrity of the world is already under severe strain. Perhaps with technological and political improvements we could increase how many people can live sustainably on earth by some constant factor, but that doesn't affect the current argument. Our population is limited. (You may think we're going to personally take off to colonize the stars. Let's assume for now it can't be done.)

 

Given a population limit, the effect of people living 500 or 50,000 years is that the available slots will soon be filled, and reproduction would have to be seriously curtailed. Children would be very rare.

 

I think that no matter how healthy they are, a world full of people who are over 100 or 10,000 years old with very few children would be a place that 'just isn't right'.

 

First, I estimate that the human mind isn't designed to live beyond 100 years (if that) and will tend to become unhappy. Such people think the same thoughts over and over. They get bored (a lot of people today get bored at 50). They still know they're going to die someday.

 

Second, they live in a world without children. (One thing I've never seen in a LessWrong survey is the proportion who are parents -- given a highly educated group predominantly in their 20s, I would estimate it is very low).

 

And aside from their own personal boredom and personal lack of children in their lives, they know they live in a world where everyone else is in the same position. It's an ossifying world.

 

Now let's put rationality into it. I imagine a Bayesian feeling comfortable and at home constructing an equation with two key constants being the number of people and the number of happy, productive years they get to live. Multiplication is in order. I'm not sure how the argument goes after that: Potential future lives that don't happen don't get to add to the utility (do they? Or at a discounted rate?) Even if they do, the utility of a new life has to be weighed against the lost utility of an existing person dying. We can see the equation coming out in favor of extending life as long as possible.

 

The argument on the other side can also be framed in Bayesian terms. My estimation is that the utility of a large majority of these people who are over 200 years old is going to be very small. We can multiply their utility and conclude that the world will be a happier place with a mix of children, young people, and people croaking after 90 years of happy, productive life.
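The two sides of this utility calculation can be put into a toy model. The numbers below are my own illustrative assumptions, not anything from the post: a fixed population cap, steady turnover so the age distribution is uniform, and per-person-year utility assumed to fall sharply after age 90 in the long-lifespan world.

```python
# Toy version of the utility comparison: fixed population cap, with
# per-person-year utility assumed (my assumption) to decline after age 90.

CAP = 1000  # population limit set by ecology

def total_utility_per_year(lifespan, utility_by_age):
    """With the cap full and steady turnover, ages are spread uniformly
    over [0, lifespan); sum each age cohort's per-year utility."""
    people_per_age = CAP / lifespan
    return sum(people_per_age * utility_by_age(age) for age in range(lifespan))

# Assumption: utility is 1.0 up to age 90, then drops to 0.1 from boredom.
def declining(age):
    return 1.0 if age < 90 else 0.1

short = total_utility_per_year(90, declining)    # everyone dies at 90
long_ = total_utility_per_year(500, declining)   # everyone lives to 500

print(short > long_)  # prints True: under these assumptions, turnover wins
```

Of course the conclusion is baked into the assumed utility curve, which is exactly the disagreement: someone who expects 400-year-olds to stay near 1.0 utility gets the opposite result from the same arithmetic.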

 

I imagine a Bayesian frowning at this analysis. It seems imprecise. I could, I suppose, assign some sort of utility-reduction weight to each of those factors and multiply them out, but it isn't really going to make the Bayesian very happy. It's not going to make me very happy either. I would rather just consider the situation as a whole and assign a low utility to the bulk of a population that's hundreds of years old, rather than break it into parts.

 

At one level, my argument with the pro-indefinite-lifespan faction is just a difference in what kind of a future world would be a happier place. We've plugged in our different assumptions and reached different conclusions.

 

But to what extent does framing the problem as one of Bayesian analysis bias people to prefer the indefinite extension of individual lives? If your favorite tool is a hammer, things tend to look like nails. My conclusion feels more naturally framed if we ignore individual utilities and just say: a world full of people living indefinitely long would suck. Spelling it out in terms of utility just doesn't add anything.

 

The practical implications are a separate question. Killing people when they get to be 90 is of course highly repugnant, as is asking them to kill themselves. But it might affect what sort of scientific research we fund and what drugs we approve, for starters.

 

 

In response to comment by [deleted] on Shaving: Less Long
Comment author: Alejandro1 20 May 2012 03:38:06PM 0 points [-]

It varies with practice, so if Bart has tried wet shaving only once or twice, that's probably what it took him. In a recent trip I forgot my electric razor and had to do wet shaves for a few weeks, which I had never done before. The first time took 10 minutes, then it gradually decreased to less than 5.

Comment author: Bart119 20 May 2012 04:04:39PM 0 points [-]

My estimate was based on what I hear and read of others, not my own very limited experience.

Shaving: Less Long

13 Bart119 20 May 2012 02:52PM

OK, OK, it's not the weightiest of topics, and it's not rocket science. But I searched the site for "shaving" and "razor" and didn't see where it had been previously addressed.

I had a beard for nearly 30 years, but have been shaving again the last 6. I have always (since a brief experimental period in high school) used an electric razor for shaving. So did my daddy and his daddy before him, back through history... wait, that can't be right. But my daddy and his daddy did, anyway.

I can shave with my electric in about 45 seconds, or maybe twice that if I'm trying to do a great job. What on earth do men see in wet shaves? Assuming they don't find the process inherently rewarding, the only argument I've heard is that you can get a closer shave. Which brings me to rationality.

Why does one want a close shave? Beard grows continuously throughout the day and night. Let's take as a guess that two hours of beard growth will transform a very close wet shave into the hair length immediately after an electric shave. Assuming it is the ratio of hair lengths that determines the relative utility of two different beard configurations, the advantage of the closer shave falls throughout the day: the ratio would be 2.00 after two hours, 1.50 after four hours, 1.33 after six, and so on. If wet shaving takes something like 10 minutes, one could instead do a second electric shave in the men's room late in the afternoon and come out with less stubble for the vast majority of the day, with less total time invested.
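The ratio argument is easy to check numerically. A minimal sketch, assuming (as the guess above states) that an electric shave leaves stubble equal to 2 hours of growth, and that growth is linear so the growth rate cancels out of the ratio:

```python
# Stubble-ratio sketch: the wet shave's advantage is a fixed head start
# (assumed here to equal 2 hours of growth), so the electric:wet length
# ratio shrinks toward 1 as the day goes on.

HEAD_START_H = 2.0  # hours of growth a wet shave saves over an electric one

def stubble_ratio(hours_since_shave):
    """Electric-shaver stubble divided by wet-shaver stubble, t hours on."""
    t = hours_since_shave
    return (t + HEAD_START_H) / t

for t in [2, 4, 6, 12]:
    print(f"{t:2d} h: ratio {stubble_ratio(t):.2f}")
```

By mid-afternoon the two shaves are nearly indistinguishable by this measure, which is the crux of the time-investment argument.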

If there is some particular moment at which the least possible beard growth is desirable, for instance for a photo shoot, then I can see the advantage of the closest possible shave. A date is another possibility, though there is anecdotal evidence that some women prefer a hint of stubble to a smooth baby face.

But with those rare exceptions, the goal isn't to have zero stubble. It's to have stubble that's less long.

Similar arguments pertain to various sorts of housecleaning. Since whatever you're cleaning starts getting dirty again immediately, putting lots of effort into extraordinary levels of cleanliness seems to have little value unless you inherently value that moment of extraordinary cleanliness.

 

Comment author: Bart119 20 May 2012 01:37:25PM 2 points [-]

As I see it, once you accept the idea that we are just a dance of particles (as I do too), then in an important sense 'all bets are off'. A person comes up with something that works for them and goes with it. You don't have any really good reason not to become a serial murderer, and no good reason to save the world if you know how. So most of us (?) pick a set of values in line with human moral intuition and with what other people pick, and just go back to living. It makes us happiest. I claim you can't be secretly miserable in an existential-angsty sort of way; there is no deeper reality which supports that. There may be deeper realities we aren't seeing that we should worry about, but they are all within the scope of values we have chosen. But I've certainly had the experience that when I'm feeling bad I get reminded of the dance-of-particles situation and it further bums me out.

I see a decision about killing yourself as (in a way) constructing your future 'contentment curve' and seeing if the area above zero is larger than the area below. Rational people who get a painful terminal illness sometimes see lots of negative, and that's where physician-assisted suicide comes in. This is subject to the enormous, hard-to-emphasize-enough cognitive distortion that badly depressed people are terrible at constructing future contentment curves. Then irreversibility comes in as an argument, and the suggestion that a person should let others help them figure it out too.
