RomeoStevens comments on AI safety: three human problems and one AI issue - Less Wrong Discussion

9 Post author: Stuart_Armstrong 19 May 2017 10:48AM

Comment author: RomeoStevens 19 May 2017 08:32:37PM 3 points

I think values are confusing because they aren't a natural kind. The first decomposition that made sense to me uses two axes: stated/revealed and local/global.

Stated local values are optimized for positional goods; stated global values are optimized for alliance building; revealed local values are optimized for basic needs and risk avoidance; revealed global values barely exist, and when they do they are semi-random, based on mimesis and other weak signals (humans are not automatically strategic, etc.).
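The two-axis decomposition above can be sketched as a lookup table. This is purely illustrative: the axis names and the "optimized for" labels come from the comment, while the data structure and function name are assumptions of mine, not anything the author proposes.

```python
# Hypothetical sketch of the stated/revealed x local/global decomposition.
# Only the quadrant labels are from the comment; the structure is illustrative.
VALUE_QUADRANTS = {
    ("stated", "local"): "positional goods",
    ("stated", "global"): "alliance building",
    ("revealed", "local"): "basic needs / risk avoidance",
    ("revealed", "global"): "semi-random (mimesis, other weak signals)",
}

def optimized_for(expression: str, scope: str) -> str:
    """Return what values in a given quadrant are optimized for."""
    return VALUE_QUADRANTS[(expression, scope)]
```

The point of laying it out this way is that each quadrant is the output of a different process, which is why (as the next paragraph argues) no single coherent "values" object falls out of combining them.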

Trying to build a coherent picture out of the various outputs of four semi-independent processes doesn't quite work. Even stating it this way reifies values too much. I think there are just local pattern recognizers/optimizers doing different things, and we have globally applied the label "values" to them because of their overlapping connotations in affordance space, and because switching between different levels of abstraction is highly useful for calling people out in sophisticated, hard-to-counter ways in monkey politics.

It's also useful to think of local/global as Dyson's birds and frogs, or as surveying vs. navigation.

I'm unfamiliar with existing attempts at value decomposition, so pointers to papers etc. would be welcome if anyone knows of any.

On predictions: humans treating themselves and others as agents seems to lead to a lot of problems. One could also decompose poor predictions by which sub-system's limits they run into: availability, working memory, failure to propagate uncertainty, inconsistent time preferences... can we just invert the bullet points from Superforecasting here?