Comment author: Chip 13 July 2008 01:49:45PM 0 points [-]

Since the human brain is not capable of recursive alteration of its source code, and remains almost identical to the first conscious brains that evolved 100,000 years ago, one must wonder if it is a tool capable of (or appropriate for) designing a friendly AI. In a time when the parabolic rate of increase in information far exceeds any possibility for natural selection to produce brains that do not rely on the evolved emotions and motivations you discuss, how can such a brain be expected to program the AI source code appropriately, when that brain is not capable of doing the same for itself? That is, how can that brain be expected to be capable of choosing what actually is "friendly", in light of its evolved state?

Comment author: Aryn 28 August 2010 11:22:16PM 0 points [-]

Where are you getting "not capable"?

Comment author: Aryn 28 August 2010 10:43:11PM *  1 point [-]

So, if the person discussing this, and presumably the one choosing to be rational, is C, and it must necessarily fight against a selfish, flighty, and almost completely uncaring U (except in the cases where it perceives a direct benefit), and furthermore U is assumed to have complete or nearly complete control over the person, then why be rational? The model described here makes rationality, rather than mere rationalization, literally impossible. Therefore, why try? Or did U's just decide to force their C's into this too, making such a model deterministic in all but name?
