brahmaneya comments on Thoughts on the Singularity Institute (SI) - Less Wrong

256 Post author: HoldenKarnofsky 11 May 2012 04:31AM


Comments (1270)


Comment author: brahmaneya 14 May 2012 04:03:44AM *  1 point

Anyway, it feels completely ridiculous to talk about it in the first place. There will never be a mind that can quickly and vastly improve itself and then invent all kinds of technological magic to wipe us out. Even most science fiction books avoid that because it sounds too implausible.

Do you acknowledge that:

  1. We will some day make an AI that is at least as smart as humans?
  2. Humans do try to improve their intelligence (rationality/memory training being a weak example, cyborg research being a better example, and I'm pretty sure we will soon design physical augmentations to improve our intelligence)?

If you acknowledge 1 and 2, then that implies there can (and probably will) be an AI that tries to improve itself.

Comment author: jsteinhardt 15 May 2012 03:39:55AM 4 points

I think you missed the "quickly and vastly" part, as well as the "and then invent all kinds of technological magic to wipe us out". Note that I still think XiXiDu is wrong to be as confident as he is (assuming "there will never" implies >90% certainty), but if you are going to engage with him, then you should engage with his actual arguments.