Caledonian2 comments on Morality as Fixed Computation - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Comments (45)
T,mJ: for some time now, Eliezer has been arguing from a position of moral relativism, implicitly adopting the stance that increased intelligence has no implications for the sort of moral or ethical system an entity will possess.
He has essentially been saying that we need to program a moral system we feel is appropriate into the AI and constrain it so that it cannot operate outside of that system. Its greater intelligence will then permit it to understand the implications of actions better than we can, and it will act in ways aligned with our chosen morality while having greater ability to plan and anticipate.