ChristianKl comments on Building toward a Friendly AI team - Less Wrong Discussion

24 Post author: lukeprog 06 June 2012 06:57PM

Comments (95)

Comment author: ChristianKl 07 June 2012 10:54:10PM 12 points

Deeply committed to AI risk reduction. (It would be risky to have people who could be pulled off the team—with all their potentially dangerous knowledge—by offers from hedge funds or Google.)

To me this seems naive. Having someone who actually worked at SI on FAI go to Google might be a good thing. It would create a connection between Google and SI. If he sees major issues inside Google that invalidate your work on FAI, he can alert you. And if Google does something dangerous by the SI consensus, he is around to warn them about the danger.

Being open is a good thing.

Comment author: jacob_cannell 08 June 2012 02:40:12AM 2 points

This.

At this point, most of my belief in SI's chance of success lies in its ability to influence the more likely AGI developers and teams toward friendliness.

Comment author: reup 08 June 2012 03:31:43AM 1 point

And if they're relying on perfect secrecy and commitment across a group of even half a dozen researchers as the key to their safety strategy, then by their own standards they should not be trying to build an FAI.