Nornagest comments on The Backup Plan - Less Wrong

Post author: Luke_A_Somers 13 October 2011 07:53PM




Comment author: Nornagest 14 October 2011 12:21:56AM · 3 points

The problem here is that big-F Friendliness is a much stricter criterion than mere human altruism: people can self-modify, for example, but only slowly and to a limited extent, and thus don't face anything close to the goal-stability problems we can expect a seed AI to encounter.

Even if that were not the case, though, altruism is already inadequate to prevent badly suboptimal outcomes, especially when people are placed in unusual circumstances or empowered far beyond their peers. Not every atrocity has, standing behind it, some politician or commander glowing with a true and honest belief in the righteousness of the cause, but it's a familiar pattern, isn't it? I don't think the OP deserves to be dismissed out of hand, but if there's an answer, it's not going to be this easy.

Comment author: VincentYu 14 October 2011 12:30:24AM · 0 points

I completely agree with you.

Perhaps I came off as a bit snarky. I did not mean to dismiss the OP; I just wanted to point out the similarities. How can I make this clear in my original comment?