
daenerys comments on On "Friendly" Immortality - Less Wrong Discussion

5 [deleted] 05 December 2011 04:39AM



Comment author: [deleted] 05 December 2011 04:43:13AM 0 points [-]

Unknown Consequences Discussion Thread

Comment author: Grognor 05 December 2011 05:19:05AM 17 points [-]

I'm going to bring up the classic Aubrey de Grey response, which works for all of these issues, but for this one in particular. Yes, there will be problems if we live much longer. Yes, they will be very big problems. No, we don't yet know what those problems will even be. But those problems will pale in comparison to the needless deaths of hundreds of thousands of people every day.

Comment author: MixedNuts 05 December 2011 07:44:12AM 11 points [-]

Yup, there's ginormous status quo bias going on here. If we lived in a world stretched to the limit for resources, where policy-making was impossible because everyone was a bigoted moral fossil and the few sane leaders left had no clue what they were doing because it was all so new, and you proposed "Hey, I know! Let's kill everyone over 80!"... everyone would just stare and ask "Have you been reading Pebble in the Sky again?"

Comment author: [deleted] 05 December 2011 06:02:10AM -1 points [-]

I think that's the hand-waving the OP was talking about.

Comment author: Luke_A_Somers 05 December 2011 05:07:42PM 2 points [-]

How is this hand-waving?

Comment author: fortyeridania 05 December 2011 07:05:29AM 3 points [-]

If the consequences of life extension really are unknown, then they could just as easily be very good as very bad. Giving them up carries a high opportunity cost. Without more information about the nature of these "unknown" consequences, they are not decision-relevant (for non-risk-averse utility maximizers).

Comment author: [deleted] 05 December 2011 05:55:25AM *  2 points [-]

Reproductive rights deactivated at birth and restored upon parent certification, maybe? But even if we don't get nanotech or FAI, I bet we'll master biology via massively fast computers juggling the sequenced genomes of all life on Earth. Combine this with a much greater understanding of the brain, and a lot of things get demystified and de-romanticized*, and consequently optimized. Fewer things will be left up to chance, because we won't have to leave them to chance. This could be restricted solely to the developed world. I don't know.

Human nature is the elephant in the room. Will 10 billion unadjusted humans accept that they can never expect to use resources like an average American circa 2005? Will we be able to update our Stone Age software en masse without FAI?

I have my doubts about space colonization unless we can create Earth-like environments.

*If anyone wants to attempt a "de-romanticized in 2050" list, I'd really love to see it.