loqi comments on Let's reimplement EURISKO! - Less Wrong

Post author: cousin_it 11 June 2009 04:28PM




Comment author: asciilifeform 16 June 2009 12:29:38AM -1 points

ASCII - the onus is on you to give compelling arguments that the risks you are taking are worth it

Status quo bias, anyone?

I presently believe, not without justification, that we are headed for extinction-level disaster as things are; and that not populating the planet with the highest achievable intelligence is in itself an immediate existential risk. In fact, our current existence may well be looked back on as an unthinkably horrifying disaster by a superintelligent race (I'm thinking of Yudkowsky's Super-Happies.)

Comment author: loqi 16 June 2009 03:33:07AM 2 points

Since your justification is omitted here, I'll go ahead and suspect it's at least as improbable as this one. The question isn't simply "do we need better technology to mitigate existential risk?", it's "are the odds that technological suppression due to friendliness concerns wipes us out greater than the corresponding AGI risk?"

If you assume friendliness is not a problem, AI is obviously a beneficial development. Is that really the major concern here? All this talk of the benefits of scientific and technological progress seems wasted. Take friendliness out of the picture, and I doubt many here would disagree with the general point that progress mitigates long-term risk.

So please, be more specific. The argument "lack of progress contributes to existential risk" contains no new information. Either tell us why this risk is far greater than we suspect, or why AGI is less risky than we suspect.