
TheAncientGeek comments on SSC Discussion: No Time Like The Present For AI Safety Work - Less Wrong Discussion

6 Post author: tog 05 June 2015 02:34AM



Comment author: knb 07 June 2015 06:52:36AM * 4 points

I think Scott's argument is totally reasonable and well-stated, and I agree with his conclusion. So it was pretty dismaying to see how many of his commenters dismissed the argument completely, making arguments that were demolished in Eliezer's OB sequences.

Some familiar arguments I saw in the comments:

  1. Intelligence, like, isn't even real, man.
  2. If a machine is smarter than humans, it has every right to destroy us.
  3. This is weird, obviously you are in a cult.
  4. Machines can't be sentient, therefore AI is impossible for some reason.
  5. AIs can't possibly get out of the box, we would just pull the plug.
  6. Who are we to impose our values on an AI? That's like something a mean dad would do.
Comment author: TheAncientGeek 07 June 2015 05:20:46PM 0 points

There are also better arguments, like:

"We wouldn't build a god AI and put it in charge of the world"

"We would make some sort of attempt at installing safety overrides"

"Tool AI is safer and easier, and easier to make safe, and wouldn't need its goals to be aligned with ours"

"We'll be making ourselves smarter in parallel"