
loup-vaillant comments on against "AI risk" - Less Wrong Discussion

Post author: Wei_Dai 11 April 2012 10:46PM 24 points




Comment author: loup-vaillant 12 April 2012 10:51:25PM 2 points

It depends on what you value. I see four situations:

  • Early Singularity. Everyone currently living is saved.
  • Late Singularity. Nearly everyone currently living dies anyway.
  • Very late Singularity, or "semi-crush". Everyone currently living dies, and most of our yet-to-be-born descendants (up to the second renaissance) die as well. Past a certain point, however, everyone is saved.
  • Crush. Everyone dies, now and forever. Plus, humanity dies with our sun.

If what you value most is those currently living, then you're right: it doesn't make much difference. But if you care about the future of humanity itself, a Very Late Singularity isn't such a disaster.

Comment author: [deleted] 16 April 2012 04:33:27PM 3 points

Now that I think about it, I care both about those currently living and about humanity itself, but with a small but non-zero discount rate (on the order of the reciprocal of the time humanity has existed so far). Also, I value humanity not only genetically but also memetically, so having people with a human genome but a Palaeolithic technocultural level survive would be only slightly better for me than no one surviving at all.
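
For concreteness, here is a minimal sketch of what a discount rate "on the order of the reciprocal of the time humanity has existed" would imply. Two assumptions are mine, not the commenter's: exponential discounting, and a 50,000-year figure for humanity's age (the comment names neither).

    import math

    # Assumed age of humanity in years; a placeholder, since the
    # comment gives no figure (50,000 is roughly behavioral modernity).
    HUMANITY_AGE_YEARS = 50_000

    # "Reciprocal of the time humanity has existed so far"
    discount_rate = 1 / HUMANITY_AGE_YEARS  # = 2e-5 per year

    def weight(years_from_now: float) -> float:
        """Present weight of value realized years_from_now, under
        exponential discounting (an assumed scheme, not specified
        in the comment)."""
        return math.exp(-discount_rate * years_from_now)

    for t in (100, 1_000, 10_000, 100_000):
        print(f"{t:>7,} years out: weight = {weight(t):.3f}")
    # 100 -> 0.998; 1,000 -> 0.980; 10,000 -> 0.819; 100,000 -> 0.135

At that rate, outcomes a millennium away are discounted by only about 2%, which fits the commenter's description of the rate as small but non-zero: a Very Late Singularity still retains most of its value, while losses on the timescale of humanity's whole past start to weigh heavily.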