moshez comments on How An Algorithm Feels From Inside - Less Wrong

87 Post author: Eliezer_Yudkowsky 11 February 2008 02:35AM



Comment author: JulianMorrison 11 February 2008 05:44:30AM 0 points [-]

Given that this bug relates to neural structure on an abstract, rather than biological level, I wonder if it's a cognitive universal beyond just humans? Would any pragmatic AGI built out of neurons necessarily have the same bias?

Comment author: moshez 24 December 2012 09:59:54PM 2 points [-]

The same bias toward... what? From the inside, the AI might feel "conflicted" or "weirded out" by a yellow, furry, ellipsoid-shaped object, but that's not necessarily a bug: maybe this feeling accumulates and eventually results in creating new sub-categories. The AI won't necessarily get into the argument about definitions, because while part of that argument comes from the neural architecture described above, the other part comes from the need to win arguments -- and the evolutionary bias for humans to win arguments would not be present in most AI designs.