TheAncientGeek comments on A forum for researchers to publicly discuss safety issues in advanced AI - Less Wrong

Post author: RobbBB 13 December 2014 12:33AM




Comment author: TheAncientGeek 13 December 2014 04:08:10PM  1 point

The appropriate degree of anthropomorphisation when dealing with an AI made by humans, with human limitations, for human purposes is not zero.

Likewise, moral philosophy is a legitimate and important topic. But the bulk of MIRI's attention doesn't go to ems or moral philosophy.

Are those claims supposed to be linked? I.e., we don't need to deal with moral philosophy if we are not dealing with WBEs?

Comment author: RobbBB 13 December 2014 10:03:04PM 2 points

the-citizen is replying to this thing I said:

We're trying to avoid names like "friendly" and "normative" that could reinforce someone's impression that we think of AI risk in anthropomorphic terms, that we're AI-hating technophobes, or that we're moral philosophers.

Those are just three things we don't necessarily want to be perceived as; they don't necessarily share anything else in common. However, because the second one is pejorative and the first is sometimes treated as pejorative, the-citizen was wondering if I'm anti-moral-philosophy. I replied that highly anthropomorphic AI and moral philosophy are both perfectly good fields of study, and overlap at least a little with MIRI's work; but the typical newcomer is likely to think these are more central to AGI safety work than they are.

Comment author: the-citizen 14 December 2014 07:27:18AM 0 points

For the record, my current position is that if MIRI doesn't think it's central, then it's probably doing it wrong.