army1987 comments on against "AI risk" - Less Wrong Discussion
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Comments (89)
On what timescale?
I find the focus on x-risks as defined by Bostrom (those from which Earth-originating intelligent life will never, ever recover) way too narrow. A situation in which 99% of humanity dies and the rest reverts to hunting and gathering for a few millennia before recovering wouldn't look much brighter than that -- let alone one in which humanity goes extinct but in (say) a hundred million years the descendants of (say) elephants create a new civilization. In particular, I can't see why we would prefer the latter to (say) a civilization emerging on Alpha Centauri -- so per the principle of charity I'll just pretend that instead of “Earth-originating intelligent life” he had said “descendants of present-day humans”.
It depends on what you value. I see three situations:
If you most value those currently living, that's right, it doesn't make much difference. But if you care about the future of humanity itself, a Very Late Singularity isn't such a disaster.
Now that I think about it, I care both about those currently living and about humanity itself, but I discount the latter at a small, non-zero rate (on the order of the reciprocal of the time humanity has existed so far). Also, I value humanity not only genetically but also memetically, so people with the human genome but a Palaeolithic technocultural level surviving would be only slightly better for me than no one surviving at all.
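To make that discount rate concrete, here is a minimal sketch. It assumes exponential discounting and takes humanity's age to be roughly 200,000 years (an assumed figure; the comment only says "of the order of"):

```python
import math

# Assumed: ~200,000 years for anatomically modern humans.
HUMANITY_AGE_YEARS = 200_000
r = 1 / HUMANITY_AGE_YEARS  # per-year discount rate

def discount_factor(years_from_now: float) -> float:
    """Weight given to value realized `years_from_now` in the future."""
    return math.exp(-r * years_from_now)

# A few millennia of Palaeolithic reversion is barely discounted,
# while an elephant-descendant civilization ~100 million years out
# is discounted to effectively zero.
print(discount_factor(3_000))        # a few millennia: close to 1
print(discount_factor(100_000_000))  # 100 My: astronomically small
```

At this rate, a recovery a few millennia from now retains over 98% of its undiscounted weight, which is consistent with treating a Very Late Singularity as not much of a disaster for humanity itself, while outcomes on hundred-million-year timescales contribute essentially nothing.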