dlthomas comments on Holden's Objection 1: Friendliness is dangerous - Less Wrong

11 Post author: PhilGoetz 18 May 2012 12:48AM


Comment author: DanArmak 24 May 2012 08:24:21PM 0 points [-]

I'm assuming the 80% are capable of killing the 20% unless the AI interferes. That's part of the thought experiment. It's not unreasonable, since they are four times as numerous. But if you find this problematic, suppose it's 99% killing 1% at a time. It doesn't really matter.

Comment author: dlthomas 24 May 2012 08:28:40PM 1 point [-]

My point is that we currently have methods of preventing this that don't require an AI, and they do pretty well. Why do we need the AI to do it? Or, more specifically, why should we reject an AI that won't do it but may do other useful things?

Comment author: DanArmak 24 May 2012 08:34:02PM *  0 points [-]

There have been, and are, many mass killings of minority groups and of enemy populations and conscripted soldiers at war. If we cure death and diseases, this will become the biggest cause of death and suffering in the world. It's important and we'll have to deal with it eventually.

Not only would the AI under discussion fail to solve the problem, it would (I contend) become a singleton and prevent me from building another AI that does solve it. (If it chooses not to become a singleton, it will quickly be supplanted by an AI that does try to become one.)