ChristianKl comments on A big Singularity-themed Hollywood movie out in April offers many opportunities to talk about AI risk - Less Wrong

34 Post author: chaosmage 07 January 2014 05:48PM



Comment author: ChristianKl 09 January 2014 09:48:54PM 0 points

We don't need to appeal to interpersonal love and hatred in order to model the fact that a rational agent is competing in a zero-sum game.

There's a difference between "need to appeal" and something being a possible explanation.

Comment author: RobbBB 09 January 2014 10:31:21PM 0 points

Sure, but love and hate are rather specific posits. Empirically, the vast majority of dangerous processes don't experience them, and neither do the vast majority of agents. Very plausibly, the vast majority of possible intelligent agents don't experience them either. "The AI neither loves you, nor hates you" is not saying 'it's impossible to program an AI to experience love or hate'; it's saying that most plausible uFAI disaster scenarios result from AGI disinterest in human well-being rather than from AGI sadism or loathing.