Manfred comments on Wanted: backup plans for "seed AI turns out to be easy" - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Friendliness may be hard for philosophical reasons, but beyond a certain level of software sophistication (goals defined in terms of an objective reality, the ability to model humans), it's probably not that hard to build an AI whose goals are not trivially bad and which won't become significantly smarter than you until you agree it's safe. The problem with just studying safe AIs for a while (or working for a few years on improving humans, or trying to maintain the status quo) is that eventually an idiot or a bad actor will create a smarter-than-human intelligence.
So my favorite backup plan would be to disseminate information about how not to fail catastrophically, while trying to finalize an FAI goal system quickly.