Roko comments on Why safety is not safe - Less Wrong

48 Post author: rwallace 14 June 2009 05:20AM



Comment author: asciilifeform 14 June 2009 03:11:40PM, 3 points

Would you have hidden it?

You cannot hide the truth forever. Nuclear weapons were an inevitable technology. Likewise, whether or not Eurisko was genuine, someone will eventually cobble together an AGI - especially if Eurisko was genuine, and the task really is that easy. The fact that you seem persuaded Lenat may have danced on the edge of creating a hard takeoff makes me more interested than ever in a re-implementation.

Reading "Value is Fragile" almost persuaded me that blindly pursuing AGI is wrong, but shortly afterward, "Safety is not Safe" reverted me to my usual position: stagnation is as real and immediate a threat as ever there was, vastly dwarfing any hypothetical existential risk from rogue AI.

For instance, bloat and out-of-control accidental complexity have essentially halted all basic progress in computer software. I believe that the lack of quality programming systems will lead (and may already have led) directly to stagnation in other fields, such as computational biology. The near-term future appears to resemble Windows Vista rather than HAL. Engelbart's Intelligence Amplification dream has been lost in the noise. I thus expect civilization to succumb to Natural Stupidity in the near term, unless a drastic reversal of these trends takes place.

Comment deleted 14 June 2009 04:26:41PM
Comment author: asciilifeform 14 June 2009 04:45:33PM, 1 point

I am convinced that resource depletion is likely to lead to social collapse - possibly within our lifetimes. Barring that, biological doomsday-weapon technology is becoming cheaper and will eventually be accessible to individuals. Unaugmented humans have proven themselves catastrophically stupid as a mass and persist in behaviors which logically lead to extinction. In the latter I include not only ecological mismanagement, but, for example, our continued failure to solve the protein folding problem, to create countermeasures to nuclear weapons, and to create a universal defense against viruses. Not to mention our failure of the ultimate planetary IQ test: space colonization.

Comment author: hrishimittal 14 June 2009 05:36:54PM, 0 points

"I am convinced that resource depletion is likely to lead to social collapse - possibly within our lifetimes."

What convinced you and how convinced are you?

Comment author: asciilifeform 14 June 2009 05:47:10PM, 2 points

Dmitry Orlov, and very.

Comment author: cousin_it 14 June 2009 10:27:08PM, 7 points

Oh. It might be too late, but as a Russian I feel obliged to warn you: when reading texts written by Russians, try to ignore the charm of darkness and depression. We are experts at this.

Comment deleted 14 June 2009 04:49:45PM
Comment author: asciilifeform 14 June 2009 05:01:03PM, -1 points

"How about thinking about ways to enhance human intelligence?"

I agree entirely. It is just that I am quite pessimistic about the possibilities in that area. Pharmaceutical neurohacking appears capable of, at best, incremental improvements, often at substantial cost. Our best bet was probably computer-aided intelligence amplification, and that may be a lost dream.

If AGI even borders on being possible with known technology, I would like to build our successor race. Starting from scratch appeals to me greatly.

Comment deleted 14 June 2009 05:50:03PM