JoshuaZ comments on Efficient Cross-Domain Optimization - Less Wrong

Post author: Eliezer_Yudkowsky, 28 October 2008 04:33PM


Comment author: JoshuaZ, 25 January 2015 07:08:15PM

I suspect your comments would be better received if you split them up, organized them a bit more, made your central points clearer, gave references for controversial claims, and defined your terms (e.g. it isn't at all clear what you mean by the bleeding-heart liberal who will kill Hitler but not bomb Nagasaki).

As to actual content, your description of what a "benevolent AGI" would be misses many of the central issues. You place a lot of emphasis on "empathy," but even too much of that could be a problem. Consider an AI that decides it needs to reduce human suffering, and so finds a way to instantaneously kill all human life. And even making an AI that can model something as complicated as "empathy" in the way we want it to is already an insanely difficult task.