
timtyler comments on Value evolution - Less Wrong Discussion

14 Post author: PhilGoetz 08 December 2011 11:47PM



Comment author: timtyler 09 December 2011 12:05:42PM *  1 point [-]

Is there any reason to think this process will converge, rather than diverge more and more, as it has for all of history? If there is, it has not been articulated.

Future creatures will probably have bigger genomes, bigger self-descriptions, and so bigger moralities - assuming, of course, that their morality refers to themselves. There might be practical limits on creature size - but these are probably large, leaving a lot of space for evolution in the mean time.

The idea that values will freeze arises out of an analysis of self-improving systems, which claims that agents will want to preserve their values (e.g. see Omohundro's "Basic AI Drives"). In a competitive scenario, agents won't get their way. So: folks imagine one big organism with self-directed evolution - and that it will get its way.

One reason for scepticism about this is the possibility of an alien race. If our values freeze - and then we meet aliens - we would probably be assimilated. So - lacking confidence that aliens do not exist - we may decide to allow our values to grow - in order to better preserve at least some of them.

Comment author: PhilGoetz 09 December 2011 03:26:02PM 0 points [-]

How does a self-improving system improve itself, without discovering contradictions or gaps in its values? Does value freeze require knowledge freeze?

Comment author: timtyler 09 December 2011 03:43:44PM *  2 points [-]

How does a self-improving system improve itself, without discovering contradictions or gaps in its values?

By getting a faster brain, more memory, more stored resources and a better world model, perhaps.

Values don't have to have "contradictions" or "gaps" in them. Say you value printing out big prime numbers. Where are the contradictions or gaps going to come from?
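To make the prime-printing example concrete, here is a minimal sketch of such a value function in Python. The names and scoring rule are my own illustration, not anything specified in the discussion - the point is only that a simple, well-defined objective leaves no obvious room for contradictions or gaps:

```python
def is_prime(n):
    """Trial-division primality test."""
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def utility(printed_numbers):
    """A toy value function: reward printing primes.

    The score is the sum of all distinct primes printed.
    A smarter agent with more compute just finds bigger
    primes; the value function itself never needs revision.
    """
    return sum(n for n in set(printed_numbers) if is_prime(n))

print(utility([2, 3, 4, 97]))  # 2 + 3 + 97 = 102
```

Self-improvement here (a faster brain, more memory) changes how well the agent scores, not what the score means.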

Does value freeze require knowledge freeze?

Usually values and knowledge are considered to be orthogonal - so "no".