Lumifer comments on Superintelligence 9: The orthogonality of intelligence and goals - Less Wrong

Post author: KatjaGrace 11 November 2014 02:00AM

Comment author: Lumifer 11 November 2014 06:45:35PM 0 points

> But these goals do not function in the same way that a paperclipper utility function maximizes paperclips

Sure, because humans are not utility maximizers.

The question, however, is whether terminal goals exist. A possible point of confusion is that I think of humans as having multiple, inconsistent terminal goals.

Here's an example of a terminal goal: to survive.
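
A minimal sketch of the distinction being drawn here, under a toy formalization that is not from the comment itself (the class names, the second goal, and all the numbers are illustrative assumptions): a paperclipper ranks every option by a single scalar utility, whereas an agent with multiple, inconsistent terminal goals can find that different goals endorse different options, so no single maximand settles the choice.

```python
# Illustrative sketch only: contrasts a single-utility maximizer with an agent
# holding multiple, possibly inconsistent terminal goals. All names and numbers
# here are hypothetical, not taken from the original comment.

from dataclasses import dataclass
from typing import Callable, Dict, List

World = Dict[str, float]


@dataclass
class Paperclipper:
    """Classic utility maximizer: one scalar utility, one consistent ranking."""

    def utility(self, world: World) -> float:
        return world["paperclips"]

    def choose(self, options: List[World]) -> World:
        return max(options, key=self.utility)


@dataclass
class MultiGoalAgent:
    """Agent with several terminal goals and no single utility function.

    Two options can each win on a different goal, leaving the agent without
    a consistent overall ranking.
    """

    goals: Dict[str, Callable[[World], float]]

    def preferred_by_goal(self, options: List[World]) -> Dict[str, World]:
        return {name: max(options, key=score) for name, score in self.goals.items()}


options = [
    {"paperclips": 10, "survival_odds": 0.2, "leisure": 0.9},
    {"paperclips": 1, "survival_odds": 0.95, "leisure": 0.1},
]

clippy = Paperclipper()
print("Paperclipper picks:", clippy.choose(options))  # always the 10-paperclip option

human = MultiGoalAgent(goals={
    "survive": lambda w: w["survival_odds"],  # the terminal goal named in the comment
    "enjoy leisure": lambda w: w["leisure"],  # an illustrative second, conflicting goal
})
# Different terminal goals endorse different options; no single scalar resolves the conflict.
print("Per-goal winners:", human.preferred_by_goal(options))
```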