MrMind comments on Ethical Diets - Less Wrong Discussion

2 Post author: pcm 12 January 2015 11:38PM

Comment author: MrMind 13 January 2015 08:04:29AM 4 points

If the way an AGI treats us depended on the way we treat animals, the problem of Friendly AI would already be partially solved. But there's no reason to think it will: if you don't want an AI to treat you the way you treat a cow, <easier said than done> then don't program it that way. </easier said than done>

Comment author: pcm 13 January 2015 07:08:12PM 1 point

If you're certain that the world will be dominated by one AGI, then my point is obviously irrelevant.

But if we're uncertain whether the world will be dominated by one AGI or by many independently created AGIs of uncertain friendliness, then it seems we should pursue both strategies: design each AGI right, and also build a society where, if no single AGI can dictate rules, the default rules AGIs follow when dealing with other agents will be acceptable to us.

Comment author: freeze 03 September 2015 03:34:46PM -1 points

You seem to allude to the fact that it really isn't that easy. In fact, if it is truly an AGI, then by definition we can't simply box in its values that way, making one arbitrary change to them in isolation.

Instead, I would say if you don't want an AI to treat us like we treat cows, then just stop eating cow flesh/bodily fluids. This seems a more robust strategy to shape the values of an AI we create, and furthermore it prevents an enormous amount of suffering and improves our own health.