Toggle comments on Superintelligence 9: The orthogonality of intelligence and goals - Less Wrong

Post author: KatjaGrace 11 November 2014 02:00AM




Comment author: Toggle 11 November 2014 06:31:35PM 2 points

Potential source of misunderstanding: we do have stated 'terminal goals', sometimes. But these goals do not function in the way that a paperclipper utility function maximizes paperclips: there is a strange set of obstacles in the way, which this site generally discusses under headings like 'akrasia' or 'superstimulus'. Asking a human about their 'terminal goal' is roughly equivalent to asking them 'what would you want, if you could want anything?' It's a form of emulation.

Comment author: Lumifer 11 November 2014 06:45:35PM 0 points

But these goals do not function in the same way that a paperclipper utility function maximizes paperclips

Sure, because humans are not utility maximizers.

The question, however, is whether terminal goals exist. A possible point of confusion is that I think of humans as having multiple, inconsistent terminal goals.
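To make the contrast concrete, here is a minimal Python sketch (all names and numbers are illustrative, not from the thread): a utility maximizer picks whatever action scores highest under its one utility function, while an agent with multiple inconsistent terminal goals gets a different "best action" depending on which goal is consulted, so no single utility function reproduces its behavior.

```python
def maximizer_choice(actions, utility):
    """A utility maximizer: always picks the action with the highest utility."""
    return max(actions, key=utility)

# One coherent utility function: more paperclips is strictly better.
paperclip_utility = {"idle": 0, "build_factory": 5, "convert_everything": 100}
best = maximizer_choice(paperclip_utility, paperclip_utility.get)  # "convert_everything"

# Two inconsistent terminal goals ranking the same actions differently:
# each goal, taken alone, yields a different choice, and no single
# utility function agrees with both rankings at once.
survival_utility = {"eat_cake": 1, "exercise": 9}
pleasure_utility = {"eat_cake": 9, "exercise": 2}

survival_pick = maximizer_choice(survival_utility, survival_utility.get)  # "exercise"
pleasure_pick = maximizer_choice(pleasure_utility, pleasure_utility.get)  # "eat_cake"
```

The point of the sketch is only that "has goals" and "is a utility maximizer" come apart: the second requires one consistent ranking over actions, which the two-goal agent lacks.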

Here's an example of a terminal goal: to survive.