Gram_Stone comments on Steelmaning AI risk critiques - Less Wrong Discussion

Post author: Stuart_Armstrong, 23 July 2015 10:01AM

Comment author: Gram_Stone, 29 July 2015 09:27:28PM, 1 point

There are parts that differ, but it seems worth mentioning that this is quite similar to certain forms of Bostrom's second-guessing arguments, as discussed in Chapter 14 of Superintelligence and in Technological Revolutions: Ethics and Policy in the Dark:

A related type of argument is that we ought—rather callously—to welcome small and medium-scale catastrophes on grounds that they make us aware of our vulnerabilities and spur us into taking precautions that reduce the probability of an existential catastrophe. The idea is that a small or medium-scale catastrophe acts like an inoculation, challenging civilization with a relatively survivable form of a threat and stimulating an immune response that readies the world to deal with the existential variety of the threat.

I should mention that he does seem to be generally against attempting to manipulate people into doing the right thing.

Comment author: [deleted], 03 August 2015 03:52:47AM, 0 points

I should mention that he does seem to be generally against attempting to manipulate people into doing the right thing.

Well, that's actually quite refreshing.