CuSithBell comments on General purpose intelligence: arguing the Orthogonality thesis - Less Wrong

Post author: Stuart_Armstrong 15 May 2012 10:23AM


Comments (156)


Comment author: Wei_Dai 16 May 2012 07:43:23PM 12 points

I think we don't just lack introspective access to our goals, but can't be said to have goals at all (in the sense of a preference ordering over some well-defined ontology, attached to some decision theory that we're actually running). The kinds of pseudo-goals we do have (behavioral tendencies and semantically unclear values expressed in natural language) don't seem to have the motivational strength to make us think "I should keep my goal G1 instead of avoiding arbitrariness", nor is it clear what it would even mean to "keep" such pseudo-goals as one self-improves.

What if it's the case that evolution always or almost always produces agents like us, so the only way they can get real goals in the first place is via philosophy?

Comment author: CuSithBell 16 May 2012 07:52:11PM 2 points

I agree that humans are not utility-maximizers or similar goal-oriented agents - not in the sense that we can't be modeled as such things, but in the sense that these models do not compress our preferences to any great degree, because they are greatly at odds with our underlying mechanisms for determining preference and behavior.