
MixedNuts comments on Charles Stross: Three arguments against the singularity - Less Wrong Discussion

Post author: ciphergoth 22 June 2011 09:52AM 10 points

Comment author: MixedNuts 22 June 2011 11:18:27AM 7 points

Summary:

  • Human-level or above AI is impossible because either the AIs are going to be people, which would be bad (we don't want to have to give them rights, and it'd be wrong to kill them), or they're going to refuse to self-improve because they don't care about themselves.
  • Uploading is possible but will cause a religious war. Also, if there are sentient AIs around, they'll outcompete us.
  • It's unlikely we're in a simulation because why would anyone want to simulate us?

Pretty reasonable for someone who says "rapture of the nerds". The main problem is anthropomorphism; Stross should read up on optimization processes. There's no reason an AI has to care about itself in order to value becoming smarter.

(I've never found a good argument for "AGI is unlikely in theory". It makes me sad, because Stross is looking at practical aspects of uploading, and I need more arguments for/against "AGI is unlikely in practice".)

Comment author: NancyLebovitz 22 June 2011 02:17:15PM 1 point

In some sense, AIs will need to care about themselves; otherwise they won't adequately avoid damaging themselves as they try to improve themselves, and they won't take measures to protect themselves from outside threats.

The alternative is that they care about their assigned goals. But unless there's some other agent that can achieve their goals better than they can, I don't see a practical difference between AIs taking care of themselves for the sake of the goal and taking care of themselves because that's an independent motivation.

Comment author: khafra 22 June 2011 12:38:07PM 0 points

Sounds like he doesn't believe in the possibility of nonperson predicates.

Comment author: MixedNuts 22 June 2011 12:51:37PM 1 point

No, it seems to be a different mistake. He thinks nonperson AIs are possible, but they will model themselves as... roughly, body parts of humans. So they won't optimize for anything, just obey explicit orders.