asciilifeform comments on Why safety is not safe - Less Wrong

48 Post author: rwallace 14 June 2009 05:20AM

Comments (97)

Comment author: asciilifeform 14 June 2009 04:45:33PM 1 point

I am convinced that resource depletion is likely to lead to social collapse - possibly within our lifetimes. Barring that, biological doomsday-weapon technology is becoming cheaper and will eventually be accessible to individuals. Unaugmented humans have proven themselves to be catastrophically stupid as a mass and continue in behaviors which logically lead to extinction. In the latter I include not only ecological mismanagement but, for example, our continued failure to solve the protein folding problem, to create countermeasures to nuclear weapons, and to create a universal weapon against viruses. Not to mention our failure of the ultimate planetary IQ test - space colonization.

Comment author: hrishimittal 14 June 2009 05:36:54PM 0 points

I am convinced that resource depletion is likely to lead to social collapse - possibly within our lifetimes.

What convinced you and how convinced are you?

Comment author: asciilifeform 14 June 2009 05:47:10PM 2 points

Dmitry Orlov, and very.

Comment author: cousin_it 14 June 2009 10:27:08PM 7 points

Oh. It might be too late, but as a Russian I feel obliged to warn you: when reading texts written by Russians, try to ignore the charm of darkness and depression. We are experts at this.

Comment deleted 14 June 2009 04:49:45PM
Comment author: asciilifeform 14 June 2009 05:01:03PM -1 points

How about thinking about ways to enhance human intelligence?

I agree entirely. It is just that I am quite pessimistic about the possibilities in that area. Pharmaceutical neurohacking appears capable of at best incremental improvements, often at substantial cost. Our best bet was probably computer-aided intelligence amplification, and it may be a lost dream.

If AGI even borders on being possible with known technology, I would like to build our successor race. Starting from scratch appeals to me greatly.

Comment deleted 14 June 2009 05:50:03PM