Lumifer comments on Superintelligence 9: The orthogonality of intelligence and goals - Less Wrong Discussion
Sure, because humans are not utility maximizers.
The question, however, is whether terminal goals exist. A possible point of confusion is that I think of humans as having multiple, inconsistent terminal goals.
Here's an example of a terminal goal: to survive.
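To make the "not a utility maximizer" point concrete, here is a toy sketch (purely illustrative, not from the original thread; the goal names and scores are made up): an agent with multiple terminal goals whose salience shifts with context can end up with intransitive pairwise preferences, and no single real-valued utility function can reproduce a preference cycle.

```python
# Toy illustration (hypothetical): an agent whose pairwise choices depend on
# which of two terminal goals is currently salient can exhibit intransitive
# preferences, which no single utility function can represent.

options = {
    "A": {"survival": 3, "status": 1},
    "B": {"survival": 2, "status": 3},
    "C": {"survival": 1, "status": 2},
}

def prefers(x, y, salient_goal):
    """True if option x beats option y on whichever goal is salient right now."""
    return options[x][salient_goal] > options[y][salient_goal]

# Suppose context makes a different goal salient in each pairwise choice:
comparisons = [
    ("A", "B", "survival"),  # A beats B on survival (3 > 2)
    ("B", "C", "status"),    # B beats C on status   (3 > 2)
    ("C", "A", "status"),    # C beats A on status   (2 > 1)
]

for x, y, goal in comparisons:
    print(f"{x} preferred to {y} (deciding on '{goal}'): {prefers(x, y, goal)}")

# All three lines print True, giving the cycle A > B > C > A. A single fixed
# utility function u would need u(A) > u(B) > u(C) > u(A), which is impossible,
# so this agent is not maximizing any one utility function even though each
# of its terminal goals is perfectly definite.
```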