torekp comments on Superintelligence 9: The orthogonality of intelligence and goals

Post author: KatjaGrace | 11 November 2014 02:00AM | 8 points


Comment author: torekp | 11 November 2014 06:45:46PM | 2 points

“what matters for Bostrom’s definition of intelligence is whether the agent is getting what it wants”

This brings up another way - comparable to the idea that complex goals may require high intelligence - in which the orthogonality thesis might be limited. I think that having wants at all requires a certain amount of intelligence. Consider the animal kingdom, sphexishness, etc.: the Sphex wasp, for instance, will repeat its fixed burrow-checking routine indefinitely when an experimenter keeps moving its prey, so behavior that looks purposeful turns out to be a rigid program. To get behavior that clearly demonstrates what most people would confidently call “goals” or “wants”, you have to look to animals with fairly substantial brains.

“The third point Bostrom makes is that a superintelligent machine could be created with no functional analogues of what we call ‘beliefs’ and ‘desires’.”

This contradicts the definition of intelligence via "the agent getting what it wants": a machine with no functional analogue of desires wants nothing, so there is nothing for it to succeed or fail at getting.