notsonewuser comments on The Meaning of Right - Less Wrong

30 Post author: Eliezer_Yudkowsky 29 July 2008 01:28AM



Comment author: Unknown 29 July 2008 01:37:17PM 0 points [-]

As I've stated before, we are all morally obliged to prevent Eliezer from programming an AI. For according to this system, he is morally obliged to make his AI instantiate his personal morality. But it is quite impossible that the complicated calculation in Eliezer's brain should be exactly the same as the one in any of us: and so by our standards, Eliezer's morality is immoral. And this opinion is subjectively objective, i.e. his morality is immoral and would be even if all of us disagreed. So we are all morally obliged to prevent him from inflicting his immoral AI on us.

Comment author: notsonewuser 12 October 2013 02:24:31AM 3 points [-]

This is a really, really hasty non sequitur. Eliezer's morality is probably extremely similar to mine; thus, the world would be a much, much better place, even according to my specification, with an AI running Eliezer's morality than with no AI running at all (or, worse, a paperclip maximizer). Eliezer's morality is absolutely not immoral; it's my morality ±1% error, as opposed to some other nonhuman goal structure which would be unimaginably bad on my scale.