PhilGoetz comments on GAZP vs. GLUT

Post author: Eliezer_Yudkowsky, 07 April 2008 01:51AM

Comment author: PhilGoetz, 02 September 2013 05:00:26PM

In this line of business you meet an awful lot of people who think that an arbitrarily generated powerful AI will be "moral".

A good counter to this argument would be to find a culture with morals strongly opposed to our own, and to demonstrate that it is logical and internally consistent. My inability to think of such a culture could be interpreted as evidence that a sufficiently powerful AI would be moral. But I think it's more likely that the morals we agree on are properties common to most moral frameworks that are workable in our particular biological and technological circumstances. You should be able to demonstrate that an AI need not be moral by our standards by writing a story set in a world whose technology and biology are different enough that our morals would be substandard there. But nobody would publish it.