XiXiDu comments on Open Problems Related to the Singularity (draft 1) - Less Wrong

39 Post author: lukeprog 13 December 2011 10:57AM


Comment author: XiXiDu 13 December 2011 12:33:36PM 15 points [-]

From my, admittedly layman, perspective it seems that making progress on a lot of those problems makes unfriendly AI more probable as well. If, for example, you had an ideal approximation of perfect Bayesianism, that seems like something that could be used to build any sort of AGI.

Comment author: lukeprog 13 December 2011 12:41:48PM 15 points [-]

Not literally "any sort of AGI" of course, but... yes, several of the architecture problems required for FAI also make uFAI more probable. Kind of a shitty situation, really.

Comment author: Technoguyrob 18 December 2011 09:56:23AM *  0 points [-]

Wikipedia says Steve Omohundro has "discovered that rational systems exhibit problematic natural 'drives' that will need to be countered in order to build intelligent systems safely."

Is he referring to the same problem?

EDIT: I answered my question by finding this.