Peterdjones comments on Muehlhauser-Wang Dialogue - Less Wrong Discussion

Post author: lukeprog, 22 April 2012 10:40PM

Comment author: Peterdjones, 21 January 2013 11:49:07AM, 1 point

What makes Wang think that this sort of fixed attitude, which can be made more hard-wired than the instincts of biological organisms, cannot manifest itself in an AGI?

Presumably the argument is something like:

  • You can't build an AI that is intelligent from the moment you switch it on; you have to train it.

  • We know how to train intelligence into humans: it's called education.

  • An AI that lacked human-style instincts and learning abilities at switch-on wouldn't be trainable by us (we just wouldn't know how), so it would never reach intelligence.