ZZZling comments on I think I've found the source of what's been bugging me about "Friendly AI" - Less Wrong Discussion

8 Post author: ChrisHallquist 10 June 2012 02:06PM




Comment author: ZZZling 12 June 2012 04:39:19AM -1 points

"So if you're under the impression that this is a point..."

Yes, I'm under that impression, because the whole idea of "Friendly AI" implies a subtle, indirect, but still real form of control. The idea is not to control AI at its final stage, but rather to control what that final stage is going to be. I don't think such indirect control is possible, though, because in my view, the final shape of AI is invariant of any contingencies, including our attempts to make it "friendly" (or "non-friendly"). However, I can admit that at early stages of AI evolution such control may be possible, and even necessary. Therefore, researching the "Friendly AI" topic is NOT a waste of time after all. It helps us figure out how to make the transition to the fully grown AI in the least painful way.

Go ahead, guys, and vote me down. I'm not taking this personally. I understand this is just a quick way to express your disagreement with my viewpoints. I want to see the count. It'll give an idea of how strongly you disagree with me.

Comment author: Mitchell_Porter 12 June 2012 05:26:34AM 1 point

in my view, the final shape of AI is invariant of any contingencies, including our attempts to make it "friendly" (or "non-friendly")

This isn't true of human beings; what's different about AIs?

Comment author: TheOtherDave 12 June 2012 01:58:25PM 0 points

the final shape of AI is invariant of any contingencies

Ah, cool. Yes, this is definitely a point of disagreement.

For my own part, I think real intelligence is necessarily contingent. That is, different minds will respond differently to the same inputs, and this is true regardless of how intelligent those minds are. There is no single ideal mind that every mind converges on as its "final" or "fully grown" stage.