Psy-Kosh comments on Omission vs commission and conservation of expected moral evidence - Less Wrong

2 Post author: Stuart_Armstrong 08 September 2014 02:22PM


Comment author: Psy-Kosh 10 September 2014 07:39:12PM 3 points

I'm not sure the commission/omission distinction is really the key here. This becomes clearer if we invert the situation a bit:

Some third party is about to forcibly wirehead all of humanity. How should your moral agent reason about whether to intervene and prevent this?

Comment author: Stuart_Armstrong 15 September 2014 03:06:52PM 1 point

That's interesting - basically, here we're trying to educate an AI into human values, but human values are about to be swiftly changed into something different (and bad from our perspective).

I think there's no magical solution: either we build an FAI properly (which is very, very hard), and it would stop the third party, or we have an AI that we value-load, and we try to prevent our values from changing while the loading is happening.

The omission/commission thing applies to value-loading AIs, not to traditional FAI. But I admit it's not the best analogy.