Mitchell_Porter comments on Holden's Objection 1: Friendliness is dangerous - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Do you think an AI reasoning about ethics would be capable of coming to your conclusions? And what "superintelligence policy" do you think it would recommend?
I'm pretty sure that FAI+CEV is supposed to prevent exactly this scenario, in which an AI is allowed to come to its own, non-foreordained conclusions.
FAI is supposed to come to whatever conclusions we would like it to come to (if we knew better, etc.). It's not supposed to specify the whole of human value ahead of time; it's supposed to ensure that the FAI extrapolates the right stuff.