Peterdjones comments on The genie knows, but doesn't care - Less Wrong
Read the first section of the article you're commenting on. Semantics may turn out to be a harder problem than morality, because the problem of morality may turn out to be a subset of the problem of semantics. Coding a machine to know what the word 'Friendliness' means (and to care about 'Friendliness') is just a more indirect way of coding it to be Friendly, and it's not clear why that added indirection should make an already risky and dangerous project any easier or safer. What does indirect indirect normativity get us that indirect normativity doesn't?
Semantics isn't optional. Nothing could qualify as an AGI, let alone a super one, unless it could hack natural language. So Loosemore architectures don't make anything harder, since semantics has to be solved anyway.
It's a problem of sequence. The superintelligence will be able to solve Semantics-in-General, but at that point, if it isn't already safe, it will be rather late to start working on safety. Tasking the programmers with working on Semantics-in-General makes things harder if it's a more complex or roundabout way of trying to address Indirect Normativity; most of the work of understanding what English-language sentences mean can be delegated to the SI, provided we've already made it safe to build an SI at all.
Then solve semantics in a seed.