shminux comments on Thoughts on the Singularity Institute (SI) - Less Wrong

256 Post author: HoldenKarnofsky 11 May 2012 04:31AM




Comment author: shminux 11 May 2012 02:02:15AM 2 points [-]

What you presented there (and here) is another theorem, something that should be proved (and published, if it hasn't been yet). If true, it gives an estimate of how dangerous a non-agent AGI can be. And yes, since we have had a lot of time to study people and no time at all to study AGI, I am guessing that an AGI is potentially much more dangerous, because so little is known. Or at least that seems to be the whole point of developing provably friendly AI.

Comment author: [deleted] 11 May 2012 08:32:38AM 0 points [-]

What you presented there (and here) is another theorem

What? It sounds like a common-sensical¹ statement about tools in general and about human nature, but not at all like something that could feasibly be expressed in mathematical form.

Footnote:

  1. This doesn't mean it's necessarily true, though.