Wei_Dai comments on A Defense of Naive Metaethics - Less Wrong

Post author: Will_Sawin 09 June 2011 05:46PM


Comment author: Wei_Dai 09 June 2011 09:22:45PM 2 points

Can you explain what implications (if any) this "naive" metaethics has for the problem of how to build an FAI?

Comment author: Will_Sawin 09 June 2011 09:29:07PM 0 points

Arguably, none. (If you already believe in CEV.)

Comment author: hairyfigment 09 June 2011 09:58:22PM 2 points

Well, you say below that you don't believe that (in my words)

a FOOM'd self-modifying AI that cares about humanity's CEV would likely do what you consider 'right'.

Specifically, you say

The AI would not do so, because it would not be programmed with correct beliefs about morality, in a way that evidence and logic could not fix.

You also say, in a different comment, that you nevertheless believe this process

would produce an AI that gives very good answers.

Do you think humans can do better when it comes to AI? Do you think we can do better in philosophy? If you answer yes to the latter, would doing better involve stating clearly how we physical humans define 'ought'?

Comment author: Will_Sawin 10 June 2011 01:58:01AM 0 points

Did I misread you? I meant to say:

a FOOM'd self-modifying AI would not likely do what I consider 'right'.

a FOOM'd self-modifying AI that cares about humanity's CEV would likely do what I consider 'right'.

I probably misread you.