
Lumifer comments on Superintelligence 9: The orthogonality of intelligence and goals - Less Wrong Discussion

8 Post author: KatjaGrace 11 November 2014 02:00AM




Comment author: Lumifer 11 November 2014 05:44:53PM 1 point

Do you mean intrinsic (top-level, static) goals, or instrumental ones (subgoals)? Bostrom in this chapter is concerned with the former, and there's no particular reason those have to get complicated.

I mean terminal, top-level (though not necessarily static) goals.

As to "no reason to get complicated", how would you know? Note that I'm talking about a superintelligence, which is far beyond human level.

Comment author: Sebastian_Hagen 11 November 2014 07:26:15PM 1 point

As to "no reason to get complicated", how would you know?

It's a direct consequence of the orthogonality thesis. Bostrom (reasonably enough) supposes that there might be a limit in one direction: to hold a goal, an agent needs to be able to model it to some degree, so the agent's intelligence may set an upper bound on the complexity of the goals it can hold. But there's no corresponding reason for a limit in the opposite direction: intelligent agents can understand simple goals just fine. I don't have a problem reasoning about what a cow is trying to do, and I could certainly optimize toward the same things had my mind been constructed to want only those things.

Comment author: Lumifer 12 November 2014 04:45:14AM 1 point

I don't understand your reply.

How would you know that there's no reason for the terminal goals of a superintelligence "to get complicated" if humans, being "simple agents" in this context, are not intelligent enough to model such highly complex goals in the first place?