
lukeprog comments on What is the best compact formalization of the argument for AI risk from fast takeoff?

Post author: utilitymonster 13 March 2012 01:44AM 11 points


Comment author: lukeprog 13 March 2012 01:54:52AM 6 points

Good work.

Alternatively, one might construe the argument this way:

  1. There will be AI++ (before too long, absent defeaters). [See Chalmers.]
  2. If the goals of the AI++ differ significantly from the goals of human civilization, human civilization will be ruined soon after the arrival of AI++.
  3. Without a massive effort, the goals of the AI++ will differ significantly from the goals of human civilization.
  4. Therefore, without a massive effort, human civilization will be ruined soon after the arrival of AI++.

But this may be a less useful structure than the more detailed one you propose. My version simply packs more sub-arguments and discussion into each premise.
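To make the logical skeleton explicit, here is a minimal sketch in Lean 4. The proposition names (goalsDiffer, massiveEffort, ruin) are my own illustrative labels, not anything from Chalmers or the post; it just checks that premises 2 and 3 chain into conclusion 4 by hypothetical syllogism, with premise 1 serving as background (it asserts that the situation arises at all rather than participating in the deduction):

    -- Propositional skeleton of the argument above; names are illustrative.
    variable (goalsDiffer massiveEffort ruin : Prop)

    -- Premise 2: if the AI++'s goals differ significantly, civilization is ruined.
    -- Premise 3: without a massive effort, the goals will differ significantly.
    -- Conclusion 4: without a massive effort, civilization is ruined.
    example (p2 : goalsDiffer → ruin)
            (p3 : ¬massiveEffort → goalsDiffer) :
        ¬massiveEffort → ruin :=
      fun noEffort => p2 (p3 noEffort)

The deductive step is trivial; all the substance lives in defending the premises, which is presumably why the more detailed structure earns its keep.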

The premises (in your argument) that I feel least confident about are #1, #2, and #4.

Comment author: amcknight 14 March 2012 07:31:28AM 0 points

Premise #2 seems very likely to me. Can you give me some reasons why it might not be?

Comment author: lukeprog 14 March 2012 09:21:03AM 0 points

Premise 2 in my version or utilitymonster's version?

Comment author: amcknight 14 March 2012 06:59:13PM 0 points

Sorry, utilitymonster's version.