JamesAndrix comments on No Universally Compelling Arguments - Less Wrong

Post author: Eliezer_Yudkowsky 26 June 2008 08:29AM

Comment author: JamesAndrix 28 June 2008 06:22:37AM

Roko: Very roughly, you should increase your ability to think about morality.

In this context I guess I would justify it by saying that if an AI's decision-making process isn't kicking out any goals, it should be designed to think harder. I don't think 'doing nothing' is the right answer to a no-values starting point; to the decider it's just another primitive action that there's no reason to prefer.
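A minimal sketch of that design principle, with every name hypothetical: `evaluate(action, effort)` stands in for whatever scoring the decision process does, with higher effort meaning more computation spent deliberating.

```python
# Hypothetical sketch: an agent whose evaluation step yields no preferred
# goal escalates deliberation effort instead of defaulting to inaction.
# `evaluate(action, effort)` is an assumed scoring function; none of these
# names come from the comment above.

DO_NOTHING = "do nothing"

def choose_action(actions, evaluate, max_effort=5):
    """Pick an action, thinking harder whenever nothing stands out.

    'Do nothing' is treated as just another primitive action, not a
    privileged default: it only wins if deliberation actually prefers it,
    or if the effort budget runs out with no preference at all.
    """
    candidates = list(actions) + [DO_NOTHING]
    for effort in range(1, max_effort + 1):
        scores = {a: evaluate(a, effort) for a in candidates}
        best = max(scores, key=scores.get)
        if scores[best] > scores[DO_NOTHING]:
            return best  # deliberation produced an actual preference
    return DO_NOTHING  # no preference emerged, even at max effort
```

The design choice is that inaction can only win by exhaustion, never by construction.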

The strategy of increasing your own ability to think about morality/utility choices/whatever has the handy property of helping you with almost any supergoal you might adopt. If you don't know what to adopt, some variant of this is the only bet.

I think this is related to "if you build an AI that two-boxes on Newcomb's Problem, it will self-modify to one-box on Newcomb's Problem".
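To make the Newcomb connection concrete, here is the standard expected-value arithmetic as a sketch, assuming the conventional payoffs ($1M in the opaque box, $1K in the transparent one) and a predictor that is right with probability p; none of these numbers are from the post.

```python
# Sketch of the usual Newcomb expected-value arithmetic under the
# conventional payoffs, with a predictor of accuracy p.

def expected_value(strategy, p, big=1_000_000, small=1_000):
    if strategy == "one-box":
        # With probability p the predictor foresaw one-boxing and
        # filled the opaque box.
        return p * big
    else:  # "two-box"
        # With probability p the predictor foresaw two-boxing and left
        # the opaque box empty; with probability 1 - p it guessed wrong.
        return p * small + (1 - p) * (big + small)

p = 0.9
print(expected_value("one-box", p))   # 900000.0
print(expected_value("two-box", p))   # 101000.0
```

Under these payoffs, one-boxing has the higher expected value for any predictor with p > (big + small) / (2 * big) ≈ 0.5005, which is why an expected-utility maximizer that can rewrite its own decision procedure ahead of time would prefer to become a one-boxer.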

The trick here is that no 'is' implies an 'ought'; an initial 'ought' has to be supplied, and here it is increasing one's utility score.

Obviously, if an AI adopted this it might decide to eat us and turn us into efficient brains. I interpret this morality in a way that makes me not want that to happen, but I'm not sure whether these interpretations are The Right Answer or just adopted out of bias. Morality is hard (read: computationally intense).

It's late, so I'll stop there before I veer any further into crackpot territory.