lukeprog comments on Open Problems Related to the Singularity (draft 1) - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Not literally "any sort of AGI" of course, but... yes, several of the architecture problems required for FAI also make uFAI more probable. Kind of a shitty situation, really.
Wikipedia says Steve Omohundro has "discovered that rational systems exhibit problematic natural 'drives' that will need to be countered in order to build intelligent systems safely."
Is he referring to the same problem?
EDIT: I answered my question by finding this.