MrCogmor comments on Open Thread for January 17 - 23 2014 - Less Wrong

3 Post author: niceguyanon 17 January 2014 01:26PM


Comment author: mwengler 17 January 2014 09:28:42PM 3 points [-]

If a human were artificial, would it be considered FAI or UAI? I'm guessing UAI, because I don't think anything like the process of CEV has been followed to set humans' values at birth.

If a human would be UAI if artificial, why are we less worried about billions of humans than we are about one UAI? What is it about being artificial that makes unfriendliness so scary? What is it about being natural that makes us so blind to the possible dangers of unfriendliness?

Is it that we don't think humans can self-modify? The way tech is going, it seems to me that it's at least a horse race (approximately 50:50 probability) as to which will FOOM first: the ability of humans to enhance themselves vs. the ability of an AI to modify itself.

Should we be more worried about UNI, unfriendly natural intelligence? That is, are we optimally dividing our efforts between avoiding UAI and avoiding UNI, given the probability-weighted dangers each presents?

Comment author: MrCogmor 18 January 2014 01:23:27AM 5 points [-]

Humans would be considered UFAI if they were digitised. Consider a button that picks a random human and gives them absolute control. I wouldn't press that button, because there is a significant chance that such a person would have goals that significantly differ from my own.