How complex must the simplest self-improving "seed" of a superintelligence be?

Give your (wild) estimate, or a pure guess, in terms of bits.

Do you reckon 1,000 bits is enough? Is at least 1,000,000 required? More than 10^20 bits?

I wonder what your approximate numbers are.

Thank you!


The K-complexity of a superintelligence can't be higher than the K-complexity of a program that searches for a superintelligence, enumerating all programs in order and asking them hard questions to check their abilities. I don't see any reason why the latter should be over a megabyte, probably much less.
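For concreteness, a minimal sketch of the kind of search program described here might look like the following; the enumeration is real, but `passes_hard_questions` is a placeholder for the test battery that would do all the actual work, and resource limits are omitted:

```python
from itertools import count

def all_programs():
    """Enumerate every finite bit string in length-then-lexicographic order;
    each string is treated as a candidate program."""
    for length in count(1):
        for n in range(2 ** length):
            yield format(n, "b").zfill(length)

def passes_hard_questions(program_bits):
    """Placeholder: run the candidate on a fixed battery of hard questions
    and check its answers. This is where all the real work would happen."""
    raise NotImplementedError

def find_superintelligence():
    # The length of a program implementing this loop (including the test
    # battery) upper-bounds the K-complexity of whatever it returns.
    for candidate in all_programs():
        if passes_hard_questions(candidate):
            return candidate
```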

It must be, because this verification procedure also produces infinitely many false positives, for example all the hash tables that happen to contain the correct answers by chance. That is, the procedure doesn't produce more information than simply asking "Are you a superintelligence?".
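To make the objection concrete, here is a toy illustration (hypothetical, not from the thread): for any fixed, finite battery of test questions there are arbitrarily many programs that simply memorize the expected answers, so passing the test by itself doesn't distinguish a superintelligence from a big table.

```python
# A memorized-answer "hash table" that passes a fixed, finite test without
# being intelligent. Padding it with arbitrary extra entries yields
# infinitely many distinct programs that all pass the same test.
CANNED_ANSWERS = {
    "What is 2 + 2?": "4",
    "Predict the state of simulation X after 10 seconds.": "<memorized transcript>",
}

def lookup_table_agent(question):
    return CANNED_ANSWERS.get(question, "I don't know")
```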

I don't see any reason why the latter should be over a megabyte, probably much less.

In contrast, it seems to me like the verification procedure for a friendly superintelligence could be quite long. Do you agree with this? It would definitely be longer than for just any superintelligence, but I haven't got a clue about how much; just that we could potentially be adding some pretty heavy constraints.

Yes, I agree. An upper bound on the complexity of a friendly superintelligence would be the total information content of all human brains, but you could probably compress that a lot.

Yes, I agree. An upper bound on the complexity of a friendly superintelligence would be the total information content of all human brains, but you could probably compress that a lot.

False. The upper bound on the complexity of a Friendly superintelligence is [(total information content of all brains) + (minimum complexity of what it takes to identify a superintelligence with a defined goal of deriving its objective from a set of agents, according to some mechanism that represents what it would mean to be 'friendly' to them)].

I.e., just having all the information content of all human brains is not enough. You can't avoid defining Friendliness in the search program.
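Schematically (my gloss, using the standard chain rule for Kolmogorov complexity rather than anything stated in the thread):

$$K(\mathrm{FAI}) \le K(\mathrm{brains}) + K(\mathrm{FAI} \mid \mathrm{brains}) + O(\log)$$

The second term is where a definition of Friendliness (or a precise pointer to one) has to live, and it is not obviously small.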

I agree with wedrifid here. We don't seem to have a valid argument showing that "an upper bound on the complexity of a friendly superintelligence would be the total information content of all human brains". I would like to point out that if the K-complexity of a friendly superintelligence is greater than that, then there is no way for us to build a friendly superintelligence except by luck (i.e., most Everett branches are doomed to fail to build a friendly superintelligence) or by somehow exploiting uncomputable physics.

Technically you can cheat by using the information in human brains to create upload-based superintelligences along the lines of Stuart's proposal, make them do the research for you, etc., so it seems likely to me that the upper bound should hold... but I appreciate your point and agree that my comment was wrong as stated.

When talking about upper bounds we cannot afford to just cheat by saying humans can probably figure it out. That isn't an upper bound - it is an optimistic prediction about human potential. Moreover, we still need a definition of Friendliness built in so we can tell whether this thing that the human researchers come up with is Friendliness or some other thing with that name. (Even an extremely 'meta' reference to a definition is fine, but it still requires more bits to point to which part of the humans is able to define Friendliness.)

Upper bounds are hard. But yes, I know your understanding of the area is solid and your ancestor comment serves as the definitive answer to the question of the post. I disagree only with the statement as made.

False.

You are most likely not disagreeing with the intended meaning, so using words like "false" to motivate the clarification you were making is wrong.

You are most likely not disagreeing with the intended meaning, so using words like "false" to motivate the clarification you were making is wrong.

No, Vladimir. My disagreement was with the statement, and the statement was, indeed, false. That doesn't mean I disagree with cousin_it's overall philosophy. It just means I am calling a single false claim false.

You are wrong.

I'm confused - a Friendly AI doesn't start with information about the insides of brains; it only starts with enough information to correctly identify human beings, in order to know whose preferences to infer its values from.

but you could probably compress that a lot.

This definitely fits my intuition. It seems like CEV style friendliness, for example, could at least be reduced to something on the order of complexity of a single "averaged" human brain. This is all very vague, of course; I'm mostly just trying to feel out some intuitions on this.

It seems like CEV style friendliness, for example, could at least be reduced to something on the order of complexity of a single "averaged" human brain.

That doesn't sound like it passes the adequately-friendly-to-wedrifid test. I'd thermite such an AI rather than press the on button.

That seems difficult to determine without unpacking what I mean by "averaged"; how did you come to that conclusion? (I'm wondering if you have a clearer concept of what it would mean to "average" over brains; my intuition is nothing more than a vague feeling that I should study certain topics and ask certain questions.) I don't even know enough to accurately convey the intuition behind my use of "averaged"; I was just hoping it might elicit a response from someone that would give me some useful information and, in turn, help me in my task of understanding CEV better.

Unfortunately nobody has a good answer to your question. At least none that would satisfy me. The aggregation problem is the hardest part of the problem and hasn't been answered adequately yet. Without such an answer CEV remains basically undefined. (This is something that troubles me.)

I see. Thanks; that is enlightening. I had gotten the impression that this was the case, but wasn't sure.

Interesting. How does the program determine hard questions (and their answers) without qualifying as generating them itself? That is, the part about enumerating other programs seems superfluous.

[Edit added seconds later] Ok, I see: perhaps it could ask something like 'predict what's gonna happen 10 seconds from now', and then wait to see if the prediction is correct after the universe runs for that long. In that case, the parent program isn't computing the answer itself.

You don't need to be able to generate the answer to your hard problem yourself, only to check that the superintelligence's offered answer is correct. These two abilities are equivalent if computing resources are unlimited, but you could run the superintelligence for a limited time... This line of thought seems to lead into the jungle of complexity theory and you should probably take my comments with a grain of salt :-)
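A minimal sketch of the check-without-generating idea, assuming we have a candidate's prediction routine and a slow but trusted reference simulation to compare against (both parameter names are placeholders):

```python
def verify_by_prediction(candidate_predict, simulate_world, horizon_steps):
    """Check a candidate by asking it to predict the world `horizon_steps`
    ahead, then comparing against a (possibly much slower) reference
    simulation. The verifier never generates a clever answer itself; it
    only recognizes one. A real version would also enforce a strict time
    budget on the candidate, which is what makes checking cheaper than
    generating."""
    prediction = candidate_predict(horizon_steps)
    actual = simulate_world(horizon_steps)  # slow but straightforward
    return prediction == actual
```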

Legg's 2006 "Is There an Elegant Universal Theory of Prediction?" may be relevant.

(The answer, BTW, is 'no'; seems to be in the usual Godelian limit vein of thought: "In this paper, it is shown that although highly powerful algorithms exist, they are necessarily highly complex.")

The paper doesn't seem very quantitative. It is not obvious from it whether a human needs a thousand bits, a million bits, a trillion bits - or whatever.

It would be quite impressive if it were able to...

My point was that Legg has shown, as I understand it, that any prediction algorithm powerful enough to predict most/all of the universe (as one would expect a fearsome AGI to be able to do) will be at least as complex as the universe it's predicting.


It seems likely that the answer depends on how rapidly and how reliably you want your "seed" to be able to turn into an actual working superintelligence. For instance:

Something along the lines of AIXI might have a very short program indeed, but not do anything useful unless you gave it more computing power and time than the entire universe can offer. (It is AIUI an open question whether that's actually finite. Let's suppose it is.)

A lots-of-humans simulator might well produce a superintelligence but its inhabitants might instead wipe themselves out in a (simulated) nuclear war, or just turn out not to be smart enough to make a working AI, or something.

Some entertain the hypothesis that the K-complexity of the entire universe may be under 1000 bits - though nobody knows if that is true or not.

A few bacteria will have created the explosion that leads to any future superintelligence. Much of the resulting complexity came not from them, but from their environment, though. Also, that took a while. It seems as though time limits would be needed if you are looking for a different kind of answer.

Some entertain the hypothesis that the K-complexity of the entire universe may be under 1000 bits - though nobody knows if that is true or not.

More can be said about a single apple than about all the apples in the world.

Handing me an entire universe and saying "here you go, there is a superintelligence in there somewhere, find it yourself" does not qualify as giving me a superintelligence (in the information sense). In fact I need the same amount of information to find a superintelligence in the "1,000 bit universe" as the minimum K-complexity of the superintelligence itself.

The K-complexity of the entire universe is basically irrelevant.

In fact I need the same amount of information to find a superintelligence in the "1,000 bit universe" as the minimum K-complexity of the superintelligence itself.

Well, perhaps minus 1,000 bits. Those 1,000 bits might be screening off a lot of universes with no superintelligences in them, so they could matter a great deal. If so, that's a smaller space to search - by a factor of 2^1,000.

A description of a superintelligence might be quite easy to find in our universe; for example, if the entire universe were tiled with AIs for most of its lifetime, it may not require much information to point to one of them once you have described the universe. If the Kolmogorov complexity of the universe were really only 1000, this could easily beat cousin_it's suggestion.
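In symbols (my gloss of the last few comments, not anyone's exact claim): describing the universe and then pointing at one of its AIs gives

$$K(\mathrm{SI}) \le K(U) + K(\text{location of SI} \mid U) + O(\log)$$

so if K(U) really were about 1,000 bits and superintelligences were common enough in U that locating one is cheap, this bound could come in well under a megabyte.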

This is an upper bound, not a lower bound. That roughly means that whenever you don't have proof to the contrary, you assume the worst.

What? Cousin_it gave an upper bound. Timtyler and I are pointing out a plausible way the value could be significantly below that upper bound.

It is true that the grandparent does not fit this context. Mistake!

If what you really want is a program that can run on a real supercomputer and reach super-human intelligence in less than a month, then you're asking for more than just low complexity. But I'd guess it could fit on a floppy disk.