
TheAncientGeek comments on Estimating the probability of human extinction

Post author: philosophytorres 17 February 2016 04:19PM (5 points)


Comment author: SoerenE 18 February 2016 08:14:58PM (1 point)

Some of the smarter (large, naval) landmines are arguably both intelligent and unfriendly. Let us use the standard AI risk metric.

I feel that your sentence does refer to something: a hypothetical scenario. ("Godhood" should be replaced with "superintelligence".)

Is it correct that the sentence can be divided into these four claims?

  1. An AI self-improves its intelligence
  2. The self-improvement becomes recursive
  3. An AI reaches superintelligence through 1 and 2
  4. This can happen in a process that can be called "runaway"

Do you mean that one of the probabilities is extremely small? (E.g., p(4 | 1 and 2 and 3) = 0.02.) Or do you mean that the statement is not well-formed? (E.g., intelligence is poorly defined by AI risk theory.)
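
For concreteness, here is a minimal Python sketch of the chain-rule decomposition implied by that question. The numbers are placeholders chosen for illustration, not estimates from either commenter; the point is only that the overall probability is the product of the conditional probabilities, so a single extremely small factor makes the whole product extremely small.

    # Chain-rule decomposition of the four claims above.
    # All probabilities are illustrative placeholders, not estimates from the thread.
    p1 = 0.5             # p(1): an AI self-improves its intelligence
    p2_given_1 = 0.5     # p(2 | 1): the self-improvement becomes recursive
    p3_given_12 = 0.5    # p(3 | 1, 2): superintelligence is reached via 1 and 2
    p4_given_123 = 0.02  # p(4 | 1, 2, 3): the process counts as "runaway"

    p_runaway = p1 * p2_given_1 * p3_given_12 * p4_given_123
    print(p_runaway)  # 0.0025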

Comment author: TheAncientGeek 21 February 2016 05:40:51PM (1 point)

> Some of the smarter (large, naval) landmines are arguably both intelligent and unfriendly.

But they are not arguably dangerous because they are intelligent.