Mitchell_Porter comments on Friendly AI and the limits of computational epistemology - Less Wrong Discussion

18 Post author: Mitchell_Porter 08 August 2012 01:16PM

Comment author: Mitchell_Porter 10 August 2012 02:22:55AM 0 points

To the end of what? The sequence? Or humanity as we know it?

The end of SI's mission, in success, failure, or change of paradigm.

Is there one "true" ontology/morality?

There's one reality, so all "true ontologies" ought to be specializations of the same truth. One true morality is a shakier proposition, given that morality is the judgment of an agent and there is more than one agent. It's not even clear that just picking out the moral component of the human decision procedure is enough for SI's purposes. What FAI research is really after is a "decision procedure that a sober-minded and fully informed human being would prefer to be employed by an AGI".