Perplexed comments on Ben Goertzel: The Singularity Institute's Scary Idea (and Why I Don't Buy It) - Less Wrong

Comment author: Perplexed 01 November 2010 12:44:12AM 10 points

> Being social is advantageous to any entity without terminal goals [your emphasis]

I can't accept this. Many animals are not social, or are social only to the extent of practicing parental care.

> A super-intelligent but non-evolved AGI will figure out that social is advantageous as well.

Only if it is actually advantageous to them (it?). Your claim would be much more convincing if you could provide examples of what an AI might gain by social interaction with humans, and why it could not achieve the same benefits with less risk and effort by exterminating or enslaving us. Without such examples, your bare assertions are completely unconvincing.

Please note that as humans evolved to their current elevated moral plane <cough, excuse me> they occasionally found extermination and enslavement to be more tempting solutions to their social problems than reciprocity. In fact, enslavement is itself a form of reciprocity: it is one possible solution to a bargaining problem in the sense of Nash (1953), namely a solution in which one bargainer has access to much better threats than the other.
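For concreteness, here is a minimal sketch of the threat-point effect in Nash's (1953) variable-threat model. The payoff numbers are hypothetical, chosen only for illustration: I assume the feasible agreements lie on a linear utility frontier u_A + u_B = S, in which case maximizing the Nash product (u_A - d_A)(u_B - d_B) gives each bargainer their threat payoff plus half of the remaining surplus.

```python
# A minimal sketch of Nash's (1953) variable-threat bargaining solution,
# assuming a linear utility frontier u_a + u_b = surplus. All payoff
# numbers are hypothetical, chosen only to illustrate the mechanism.

def nash_split(surplus, threat_a, threat_b):
    """Maximize the Nash product (u_a - threat_a) * (u_b - threat_b)
    subject to u_a + u_b = surplus.

    On a linear frontier this has a closed form: each bargainer gets
    their threat payoff plus half of the net surplus that agreement
    adds over both sides carrying out their threats.
    """
    gain = (surplus - threat_a - threat_b) / 2
    return threat_a + gain, threat_b + gain

# Symmetric threat points: bargaining yields an even split.
print(nash_split(10, 0, 0))   # -> (5.0, 5.0)

# Bargainer A can credibly threaten an outcome that costs A little but
# is disastrous for B; the 'agreement' now overwhelmingly favors A.
print(nash_split(10, 6, -2))  # -> (9.0, 1.0)
```

The asymmetric case is the game-theoretic skeleton of enslavement-as-reciprocity: both sides prefer the agreement to the threats being carried out, yet the side with the better threats captures nearly all of the surplus.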