TheAncientGeek comments on An overall schema for the friendly AI problems: self-referential convergence criteria - Less Wrong

17 Post author: Stuart_Armstrong 13 July 2015 03:34PM


Comments (110)


Comment author: TheAncientGeek 21 July 2015 08:11:51PM *  0 points

Wrong kind of intuition

If you have an external standard, as you do with probability theory and logic, system 2 can learn utilitarianism, and its performance can be checked against the external standard.

But we don't have an agreed standard to compare system 1 ethical reasoning against, because we haven't solved moral philosophy. What we have is system 1 coming up with speculative theories, which have to be checked against intuition, meaning an internal standard.

Comment author: [deleted] 21 July 2015 11:23:20PM 0 points

Again, the whole point of this task/project/thing is to come up with an explicit theory to act as an external standard for ethics. Ethical theories are maps of the evaluative-under-full-information-and-individual+social-rationality territory.

Comment author: TheAncientGeek 22 July 2015 07:45:58AM *  0 points

Again, the whole point of this task/project/thing is to come up with an explicit theory to act as an external standard for ethics. 

And that is the whole point of moral philosophy... so it's sounding like a moot distinction.

Ethical theories are maps of the evaluative-under-full-information-and-individual+social-rationality territory.

You don't like the word intuition, but the fact remains that while you are building your theory, you will have to check it against humans' ability to give answers without knowing how they arrived at them. Otherwise you end up with a clear, consistent theory that nobody finds persuasive.