torekp comments on Superintelligence 16: Tool AIs - Less Wrong Discussion
You are viewing a comment permalink. View the original post to see all comments and the full post content.
I found the idea of an AI that is not goal-directed very enticing. It seemed the perfect antidote to Omohundro et al. on universal instrumental goals, because those arguments rely on a utility function, something that even human beings arguably don't have. A utility function is a crisp mathematical idealization of the concept of goal-direction. (I'll just assert that without argument, and hope it rings true.) If human beings, our paradigm example of intelligence, don't exactly have utilities, might it not be possible to make other forms of intelligence that are even further from goal-directed behavior?
Unfortunately, Paul Christiano has convinced me that I was probably mistaken: