Manfred comments on Wanted: backup plans for "seed AI turns out to be easy" - Less Wrong

Post author: Wei_Dai 28 September 2011 09:54PM




Comment author: Manfred 29 September 2011 04:52:00AM | 1 point

Friendliness may be hard for philosophical reasons, but beyond a certain level of software sophistication (goals defined in terms of an objective reality, the ability to model humans), it's probably not that hard to build an AI that has non-trivially-bad goals and won't become significantly smarter than you until you agree it's safe. The problem with just studying safe AIs for a while (or working for a few years on improving humans, or trying to maintain the status quo) is that eventually an idiot or a bad actor will create a smarter-than-human intelligence.

So my favorite backup plan would be disseminating information about how not to fail catastrophically, while trying to finalize an FAI goal system quickly.