JGWeissman comments on Be a Visiting Fellow at the Singularity Institute - Less Wrong

Post author: AnnaSalamon 19 May 2010 08:00AM




Comment author: JGWeissman 25 May 2010 06:24:49AM 0 points

A Friendly AI ought to protect itself from being subverted into an unfriendly AI.

Comment author: snarles 25 May 2010 10:53:51AM 1 point

Let me posit that FAI may be much less capable than unfriendly AI. The power of unfriendly AI is that it can increase its growth rate by taking resources by force. An FAI would be limited to what resources it could ethically obtain. Therefore, a low-grade FAI might be quite vulnerable to human antagonists, while its unrestricted version could be orders of magnitude more dangerous. In short, FAI could be low-reward, high-risk.

Comment author: JGWeissman 25 May 2010 05:30:41PM 1 point

There are plenty of resources that an FAI could ethically obtain, and with a lead time of less than one day, it could grow enough to be vastly more powerful than an unfriendly seed AI.

Really, asking which AI wins going head to head is the wrong question. The goal is to get an FAI running before unfriendly AGI is implemented.

Comment author: Vladimir_Nesov 27 May 2010 09:52:08AM 0 points

The power of unfriendly AI is that it can increase its growth rate by taking resources by force. An FAI would be limited to what resources it could ethically obtain.

Wrong. An FAI will take whatever unethical steps it must, as long as that is on net the best path it can see, weighing the (ethically harmful) instrumental actions against their expected outcome. There is no general disadvantage that comes with an AI being Friendly. Not that I expect any apparent need for such drastic measures, especially considering the first-mover advantage it will likely have.