Sebastian_Hagen comments on Superintelligence 9: The orthogonality of intelligence and goals - Less Wrong
It's a direct consequence of the orthogonality thesis. Bostrom (reasonably enough) supposes that there might be a limit in one direction - to hold a goal, you need to be able to model it to some degree, so an agent's intelligence may set an upper bound on the complexity of the goals it can hold - but there's no corresponding reason for a limit in the opposite direction: intelligent agents can understand simple goals just fine. I don't have any problem reasoning about what a cow is trying to do, and I could certainly optimize toward the same goals had my mind been constructed to want only those things.
I don't understand your reply.
How would you know that there's no reason for the terminal goals of a superintelligence "to get complicated" if humans, being "simple agents" in this context, are not intelligent enough to consider highly complex goals?