Mitchell_Porter comments on Friendly AI and the limits of computational epistemology - Less Wrong Discussion
You are viewing a comment permalink. View the original post to see all comments and the full post content.
The end of SI's mission, in success, failure, or change of paradigm.
There's one reality, so all "true ontologies" ought to be specializations of the same truth. One true morality is a shakier proposition, given that morality is the judgment of an agent and there's more than one agent. It's not even clear that just picking out the moral component of the human decision procedure is enough for SI's purposes. What FAI research is really after is a "decision procedure that a sober-minded and fully-informed human being would prefer to be employed by an AGI".