Wei_Dai comments on Savulescu: "Genetically enhance humanity or face extinction" - Less Wrong

4 [deleted] 10 January 2010 12:26AM


Comment author: billswift 10 January 2010 01:28:08PM 4 points [-]

From a thread (http://esr.ibiblio.org/?p=1551#comments) on Armed and Dangerous:

Andy Freeman Says: January 6th, 2010 at 1:11 am

There’s another factor. Regulation is systemic risk.

Indeed, I have argued on a Less Wrong thread about existential risk that the best available mitigation is libertarianism. Not just political but social libertarianism, by which I mean a wide divergence of lifestyles: the social equivalent of genetic and behavioral dispersion.

The LW community, like most technocratic groups (e.g., socialists), seems to hold the belief that there is some perfect cure for any problem. But there isn’t always; in fact, for most complex social problems there isn’t. Besides the Hayek mentioned earlier, see Thomas Sowell’s “A Conflict of Visions”, its sequel “The Vision of the Anointed”, and “Knowledge and Decisions”, his expansion of Hayek’s essay.

There is no way to ensure humanity’s survival, but the centralizing tendency seems a good way to prevent its survival should the SHTF.

Comment author: Wei_Dai 10 January 2010 08:04:03PM 3 points [-]

Libertarianism decreases some types of existential risk and bad outcomes in general, but increases other types (like UFAI). It also seems to lead to Robin Hanson's ultra-competitive Malthusian scenario, which many of us would consider a dystopia.

Have you already considered these objections, and still think that more libertarianism is desirable at this point? If so, how do you propose to substantially nudge the future in the direction of more libertarianism?

Comment author: billswift 11 January 2010 03:49:14PM -1 points [-]

I think you misunderstand Robin's scenario: if we survive, the Malthusian scenario is inevitable past some point.

Comment author: orthonormal 12 January 2010 02:26:38AM 1 point [-]

Robin outright dismisses the possibility of a singleton (AI, groupmind or political entity) farsighted enough to steer clear of Malthusian scenarios until the universe runs down. I tend to think this dismissal is mistaken, but I could be convinced that there is a rough trichotomy of human futures: extinction, singleton or burning the cosmic commons.

Comment author: billswift 12 January 2010 09:26:38AM 5 points [-]

Of the three possibilities for the far future, the Malthusian scenario is the least bad. A singleton would be worse, and extinction worse yet. That doesn't mean I favor a Malthusian result, just that the alternatives are worse.

Comment author: Wei_Dai 14 January 2010 09:25:45AM 1 point [-]

I don't agree that there are only three non-negligible possibilities, but putting that aside, why do you think the Malthusian scenario would be better than a singleton? (I believe even Robin thinks that a singleton, if benevolent, would be better than the Malthusian scenario.)

Comment author: CarlShulman 12 January 2010 02:38:11AM 1 point [-]

He says that a singleton is unlikely but not negligibly so.

Comment author: orthonormal 12 January 2010 04:59:02AM 0 points [-]

Ah, I see that you are right. Thanks.