
Open thread, July 28 - August 3, 2014 - Less Wrong Discussion

Post author: polymathwannabe, 28 July 2014 08:27PM


Comment author: Toggle, 29 July 2014 06:57:01PM * 5 points

Well, I definitely agree that we should build non-superintelligent AIs for study, and for a great many other reasons besides. But it's less clear what 'too stupid to foom' actually means for an AGI. There was a moment when a hominid brain crossed an invisible line and civilization became possible, but the mutation precipitating that change may not have looked like a major event to an outside observer; it may have looked like just another step in a long sequence of iterative improvements. Is the foom line in about the same place as the agriculture line? Is it easier to cross, or harder?

On the other hand, it's possible to imagine an experimental AGI with values like "Fulfill [utility function X] within the strictly defined spatial domain of Neptune, using only materials that were inside Neptune's gravity well in the year 2000 (including for the construction of your own brain), and otherwise avoid greater-than-epsilon changes to probable outcomes for the universe outside that domain." Then fill in whatever utility function you'd like to test; you could run this with each new iteration of AGI methodology, once you are actionably worried about the possibility of fooming.
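For what it's worth, here's one way to make the shape of that proposal concrete: a minimal Python sketch, where `boxed_utility`, the `state` methods, and `base_utility` are all hypothetical stand-ins for machinery nobody knows how to build. The point is just the structure: the domain restriction and the epsilon side-effect bound are layered as hard constraints over whatever utility function X you plug in.

```python
# Toy sketch only: a hard-constrained "boxed" utility function in the
# spirit of the Neptune proposal above. The state object, its methods,
# and base_utility are hypothetical placeholders; actually evaluating
# predicates like "used material from outside Neptune's 2000 gravity
# well" is, of course, the hard part.

NEG_INF = float("-inf")

def boxed_utility(state, base_utility, epsilon):
    """Score a candidate world-state under the box constraints.

    States that draw on resources outside the permitted domain, or that
    perturb the outside universe by more than epsilon, are ruled out
    entirely (negative-infinity utility) rather than merely penalized,
    so the optimizer can never trade outside impact for inside gains.
    """
    if state.uses_outside_resources():    # hypothetical predicate
        return NEG_INF
    if state.outside_impact() > epsilon:  # hypothetical impact measure
        return NEG_INF
    return base_utility(state)            # "[utility function X]"
```

Note the design choice of hard constraints (negative infinity) rather than a finite penalty: with a finite penalty, a sufficiently large payoff inside the box could make the optimizer willing to eat the penalty.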