timtyler comments on Complexity of Value ≠ Complexity of Outcome - Less Wrong

32 points - Post author: Wei_Dai 30 January 2010 02:50AM


Comment author: timtyler 30 January 2010 11:17:13AM *  2 points [-]

"rather there's a tendency to assume that complexity of value must lead to complexity of outcome"

The main problem I see here is the other way around:

There's a tendency to assume that complexity of outcome must have been produced by complexity of value.

AFAICS, it is only members of this community who think this way. Nobody else seems to have a problem with the idea of goals that can be concisely expressed - like "trying to have as many offspring as possible" - leading to immense diversity and complexity.

This is a facet of an even more basic principle - that extremely simple rules can produce extremely complex outcomes - e.g. see the r-pentomino in the game of life.
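The r-pentomino point is easy to demonstrate directly. A minimal sketch, assuming a standard set-based Life implementation: the starting pattern takes only five cells to specify, yet after a hundred generations it has grown far beyond that (the pattern famously keeps evolving for over a thousand generations).

```python
from collections import Counter

def step(live):
    """Advance one generation of Conway's Life on an unbounded grid."""
    counts = Counter((x + dx, y + dy)
                     for (x, y) in live
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # A cell is alive next generation if it has exactly 3 neighbours,
    # or has 2 neighbours and was already alive.
    return {c for c, n in counts.items() if n == 3 or (n == 2 and c in live)}

# The r-pentomino: a five-cell pattern, trivially simple to write down.
cells = {(0, 1), (0, 2), (1, 0), (1, 1), (2, 1)}

for _ in range(100):
    cells = step(cells)

print(len(cells))  # far more than the five cells we started from
```

The rule set here fits in a one-line comprehension, which is the whole point: the complexity is in the trajectory, not the specification.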

You can see it clearly in the case of simple goals like "winning games of go". The simple goal leads to an explosion of complexity - in the form of the resulting go-playing programs.

Comment author: Peter_de_Blanc 30 January 2010 08:47:41PM 2 points [-]

Are you talking about Kolmogorov complexity or something else? Because the outcome which optimizes a simple goal would have a low Kolmogorov complexity.

Comment author: timtyler 30 January 2010 08:56:50PM *  -1 points [-]

Kolmogorov complexity is fine by me.

What makes you say that? It isn't right.

Filling the universe with orgasmium involves interstellar and intergalactic travel, stellar farming, molecular nanotechnology, coordinating stars to leap between galaxies, mastering nuclear fusion, conquering any other civilisations it might meet along the way - and many other complexity-requiring activities.

Comment deleted 31 January 2010 11:41:37AM [-]
Comment author: timtyler 31 January 2010 11:47:02AM *  0 points [-]

Indeed - sorry! The r-pentomino's evolution is not a good example of high Kolmogorov complexity - though as you say, it is complex in other senses.

I had forgotten that I gave that as one of my examples when I retroactively assented to the use of Kolmogorov complexity as a metric.

Comment author: Peter_de_Blanc 30 January 2010 09:10:01PM *  2 points [-]

Well, if you have a utility function over a finite set of possible outcomes, then you can run a computer program to check every outcome and pick the one with the highest utility. So the complexity of that outcome is bounded by the complexity of the set of possible outcomes, plus the complexity of the utility function, plus a constant.
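The bound above can be made concrete with a toy sketch (the outcome set and utility function below are made-up illustrations, not anything from the discussion): the program is itself a complete description of the optimal outcome, so that outcome's Kolmogorov complexity cannot exceed the program's length.

```python
def best_outcome(outcomes, utility):
    """Exhaustively pick the outcome with the highest utility.

    This short program plus its inputs fully specifies the optimum,
    so K(optimum) <= K(outcomes) + K(utility) + O(1).
    """
    return max(outcomes, key=utility)

outcomes = range(10 ** 6)                  # a simple, low-complexity set
utility = lambda x: -(x - 314159) ** 2     # a simple utility function
print(best_outcome(outcomes, utility))     # → 314159
```

However long the exhaustive search takes to run is irrelevant to the bound - description length and runtime are different resources, which is where the two commenters talk past each other below.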

EDIT: And none of those things you mentioned require a lot of complexity.

Comment author: timtyler 30 January 2010 09:41:38PM -1 points [-]

If the things I mentioned are so simple, perhaps you could explain how to do them?

I would be especially interested in a "simple" method of conquering any other civilisations which we might meet - so perhaps you might like to concentrate on that?

Comment author: Peter_de_Blanc 30 January 2010 09:58:40PM 2 points [-]

I would be especially interested in a "simple" method of conquering any other civilisations which we might meet

Build AIXItl.

Comment author: timtyler 30 January 2010 10:05:13PM *  3 points [-]

Alas, AIXItl is a whole class of things, many of which are likely to be highly complex.

Comment author: ciphergoth 31 January 2010 10:50:44AM 0 points [-]

This contradicts my understanding of AIXI from Shane Legg's Extrobritannia presentation. What's the variable bit? Not the utility function; that's effectively external and after the fact, and AIXI infers it.

Comment author: timtyler 31 January 2010 11:43:08AM *  0 points [-]

I think I answered that in the other sub-thread descended from the parent comment.

Comment author: Peter_de_Blanc 30 January 2010 10:19:24PM 0 points [-]

If you're referring to the parameters t and l, I'll suggest a googolplex as a sufficiently large number with low Kolmogorov complexity.
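The "large number, low Kolmogorov complexity" point is easy to illustrate. A googolplex itself is far too large to materialize in memory, but a googol fits, and the description of either is only a line of code:

```python
# A short program can name an astronomically large number: the description
# (this file) stays tiny even though the value is huge.
googol = 10 ** 100            # 1 followed by 100 zeros
print(len(str(googol)))       # 101 digits
# googolplex = 10 ** googol   # fully specified by this one line,
#                             # though far too large ever to compute
```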

Comment author: timtyler 30 January 2010 10:38:52PM 0 points [-]

No. AIXItl will need to have other complexity if you want it to work in a reasonable amount of time - see, for example:

"Elimination of the factor 2^l without giving up universality will probably be a very difficult task. One could try to select programs p and prove VA(p) in a more clever way than by mere enumeration. All kinds of ideas like heuristic search, genetic algorithms, advanced theorem provers, and many more could be incorporated."

Comment author: Peter_de_Blanc 30 January 2010 10:50:03PM 2 points [-]

It seems that you think "complex" means "difficult." It doesn't. Complex means "requires a lot of information to specify." There are no simple problems with complex solutions, because any specification of a problem is also a specification of its solution. This is the point of my original post.

Comment author: Wei_Dai 30 January 2010 11:41:06AM 1 point [-]

AFAICS, it is only members of this community that think this way.

Who are you referring to here? I myself wrote "Simple values do not necessarily lead to simple outcomes either."

Comment author: timtyler 30 January 2010 12:53:44PM *  0 points [-]

AFAICT, the origin of these ideas is here:

http://lesswrong.com/lw/l3/thou_art_godshatter/

http://lesswrong.com/lw/lb/not_for_the_sake_of_happiness_alone/

http://lesswrong.com/lw/lq/fake_utility_functions/

http://lesswrong.com/lw/y3/value_is_fragile/

This seems to have led a slew of people to conclude that simple values lead to simple outcomes. You yourself suggest that the simple value of "filling the universe with orgasmium" is one whose outcome would mean that "the future of the universe will turn out to be rather simple".

Things like that seem simply misguided to me. IMO, there are good reasons for thinking that that would lead to enormous complexity - in addition to lots of orgasmium.

Comment author: Nick_Tarleton 30 January 2010 06:16:15PM *  2 points [-]

Things like that seem simply misguided to me. IMO, there are good reasons for thinking that that would lead to enormous complexity

...but not in the least convenient possible world, where there is an ontologically simple turn-everything-into-orgasmium button; and the sort of complexity you mention - which, I agree, would be involved in the actual world - isn't a sort that most people regard as terminally valuable.

Comment author: timtyler 30 January 2010 08:27:41PM *  -1 points [-]

Here we were talking about a superintelligent agent whose "fondest desire is to fill the universe with orgasmium". About the only way such an agent would fail to produce enormous complexity is if it died - or was otherwise crippled or imprisoned.

Whether humans would want to live - or would survive in - the same universe as an orgasmium-loving superintelligence seems like a totally different issue to me - and it seems rather irrelevant to the point under discussion.

Comment author: Nick_Tarleton 30 January 2010 09:32:21PM *  3 points [-]

Here we were talking about a superintelligent agent whose "fondest desire is to fill the universe with orgasmium". About the only way such an agent would fail to produce enormous complexity is if it died - or was otherwise crippled or imprisoned.

Or if the agent has a button that, through simple magic, directly fills the universe with (stable) orgasmium. Did you even read what I wrote?

Whether humans would want to live - or would survive in - the same universe as an orgasmium-loving superintelligence seems like a totally different issue to me - and it seems rather irrelevant to the point under discussion.

Human morality is the point under discussion, so of course it's relevant. It seems clear that the chief kind of "complexity" that human morality values is that of conscious (whatever that means) minds and societies of conscious minds, not complex technology produced by unconscious optimizers.

Comment author: timtyler 30 January 2010 09:52:16PM *  -1 points [-]

Re: Did you even read what I wrote?

I think I missed the bit where you went off into a wild and highly-improbable fantasy world.

Re: Human morality is the point under discussion

What I was discussing was the "tendency to assume that complexity of outcome must have been produced by complexity of value". That is not specifically to do with human values.