Kindly comments on Fake Causality - Less Wrong

Post author: Eliezer_Yudkowsky 23 August 2007 06:12PM


Comment author: CuSithBell 04 June 2012 02:49:18PM 0 points

You are saying that a GAI being able to alter its own "code" at the actual code level does not imply that it is able to deliberately and consciously alter its "code" in the human sense you describe above?

I am saying pretty much exactly that. To clarify further, the words "deliberate", "conscious" and "wants" again belong to the level of emergent behavior: they can be used to describe the agent, not to explain it (what could not be explained by "the agent did X because it wanted to"?).

Sure, but we could imagine an AI deciding something like "I do not want to enjoy frozen yogurt", and then altering its code in such a way that it is no longer appropriate to describe it as enjoying frozen yogurt, yeah?

Let's instead make an attempt to explain. Complete control of an agent's own code, in the strict sense, contradicts Gödel's incompleteness theorem. Furthermore, information-theoretic considerations significantly limit the degree to which an agent can control its own code (I wonder whether anyone has ever done the math; I expect not, and I intend to look further into this). In information-theoretic terminology, the agent will be limited to typical manipulations of its own code, which form a strict (and presumably very small) subset of all possible manipulations.

This seems trivially false - if an AI is instantiated as a bunch of zeros and ones in some substrate, how could Gödel or similar concerns stop it from altering any subset of those bits?

Can an agent be made more effective than humans in manipulating its own code? I have very little doubt that it can. Can it lead to agents qualitatively more intelligent than humans? Again, I believe so. But I don't see a reason to believe that the code-rewriting ability itself can be qualitatively different than a human's, only quantitatively so (although of course the engineering details can be much different; I'm referring to the algorithmic level here).

You see reasons to believe that any artificial intelligence is limited to altering its motivations and desires in a way that is qualitatively similar to humans? This seems like a pretty extreme claim - what are the salient features of human self-rewriting that you think must be preserved?

Generally GAIs are ascribed extreme powers around here

As you've probably figured out, I'm new here. I encountered this post while reading the sequences. Although I'm somewhat learned on the subject, I haven't yet reached the part (which I trust exists) where GAI is discussed here.

On my path there, I'm actively trying to avoid a certain degree of groupthink that I detect in some of the comments here. Please take no offense, but it's phrases like the above quote which worry me: is there really a consensus around here about such profound questions? Hopefully it's only the terminology which is agreed upon, in which case I will learn it in time. But please, let's make our terminology "pay rent".

I don't think it's a "consensus" so much as an assumed consensus for the sake of argument. Some do believe that any hypothetical AI's influence is practically unlimited; some agree to assume that because it's not ruled out and is a worst-case scenario, or simply an interesting case (see wedrifid's comment on the grandparent). (Aside: not sure how unusual or nonobvious this is, but we often use familial relationships to describe the relative positions of comments, e.g. the comment I am responding to is the "parent" of this comment, and the one you were responding to when you wrote it is the "grandparent". I think that's about as far as most users take the metaphor, though.)

Comment author: royf 04 June 2012 11:27:53PM 0 points

Thanks for challenging my position. This discussion is very stimulating for me!

Sure, but we could imagine an AI deciding something like "I do not want to enjoy frozen yogurt", and then altering its code in such a way that it is no longer appropriate to describe it as enjoying frozen yogurt, yeah?

I'm actually having trouble imagining this without anthropomorphizing (or at least zoomorphizing) the agent. When is it appropriate to describe an artificial agent as enjoying something? Surely not when it secretes serotonin into its bloodstream and synapses?

This seems trivially false - if an AI is instantiated as a bunch of zeros and ones in some substrate, how could Gödel or similar concerns stop it from altering any subset of those bits?

It's not a question of stopping it. Gödel is not giving it a stern look, saying: "you can't alter your own code until you've done your homework". It's more that these considerations prevent the agent from being in a state where it will, in fact, alter its own code in certain ways. This claim can and should be proved mathematically, but I don't have the resources to do that at the moment. In the meanwhile, I'd agree if you wanted to disagree.

You see reasons to believe that any artificial intelligence is limited to altering its motivations and desires in a way that is qualitatively similar to humans? This seems like a pretty extreme claim - what are the salient features of human self-rewriting that you think must be preserved?

I believe that this is likely, yes. The "salient feature" is being subject to the laws of nature, which in turn seem to be consistent with particular theories of logic and probability. The problem with such a claim is that these theories are still not fully understood.

Comment author: Kindly 05 June 2012 01:58:27AM 0 points

It's not a question of stopping it. Gödel is not giving it a stern look, saying: "you can't alter your own code until you've done your homework". It's more that these considerations prevent the agent from being in a state where it will, in fact, alter its own code in certain ways. This claim can and should be proved mathematically, but I don't have the resources to do that at the moment. In the meanwhile, I'd agree if you wanted to disagree.

I'd like to understand what you're saying here better. An agent instantiated as a binary program can do any of the following:

  • Rewrite its own source code with a random binary string.

  • Do things until it encounters a different agent, obtain its source code, and replace its own source code with that.

It seems to me that either of these would be enough to provide "complete control" over the agent's source code in the sense that any possible program can be obtained as a result. So you must mean something different. What is it?
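
To make the mechanics concrete, here is a toy sketch (not anything proposed in the thread) of an agent whose "code" is just a byte string, with the two rewrite operations listed above; the ToyAgent class and its method names are purely illustrative:

    import os

    class ToyAgent:
        def __init__(self, code: bytes):
            self.code = code  # the agent's entire "source", as raw bits/bytes

        def overwrite_with_random(self) -> None:
            # Operation 1: replace the code with a uniformly random string
            # of the same length (a maximum-entropy rewrite).
            self.code = os.urandom(len(self.code))

        def copy_from(self, other: "ToyAgent") -> None:
            # Operation 2: adopt another agent's code wholesale.
            self.code = bytes(other.code)

    a = ToyAgent(b"\x00\x01\x02\x03")
    b = ToyAgent(b"\xff\xee\xdd\xcc")
    a.overwrite_with_random()   # any bit pattern is now possible...
    a.copy_from(b)              # ...or exactly another agent's code

Nothing at the substrate level forbids either write; the question the rest of the thread turns on is whether a surviving agent would ever actually perform them.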

Comment author: royf 05 June 2012 02:19:29AM 1 point

Rewrite its own source code with a random binary string

This is in a sense the electronic equivalent of setting oneself on fire - replacing oneself with maximum entropy. An artificial agent is extremely unlikely to "survive" this operation.

any possible program can be obtained as a result

Any possible program could be obtained, and the huge number of possible programs should hint that most are extremely unlikely to be obtained.

I assumed we were talking about an agent that is active and kicking, with some non-negligible chance of continuing to survive. Such an agent must have a strongly non-uniform distribution over its next internal state (code included). This means that only a tiny fraction of possible programs will have any significant probability of being obtained. I believe one can give a formula for (at least an upper bound on) the expected size of this fraction (actually, the expected log size), but I also believe nobody has ever done that, so you may doubt this particular point until I prove it.
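
To give a rough sense of how small that fraction can be, here is a back-of-the-envelope illustration under an assumption the thread itself does not make: that each bit of the next code state is drawn independently with some bias p. By the asymptotic equipartition property, roughly 2^(nH(p)) of the 2^n possible n-bit strings carry almost all of the probability, a fraction of about 2^(-n(1-H(p))).

    import math

    def binary_entropy(p: float) -> float:
        """H(p) in bits for a Bernoulli(p) source."""
        if p in (0.0, 1.0):
            return 0.0
        return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

    # Illustrative numbers only: a 1000-bit "code", each bit independently
    # biased with p = 0.1 (an arbitrary choice, not a figure from the thread).
    n, p = 1000, 0.1
    H = binary_entropy(p)

    # log2 of (size of the typical set) / (number of all n-bit strings)
    log2_fraction = -n * (1 - H)
    print(f"H(p) = {H:.3f} bits")
    print(f"typical programs are ~2^{log2_fraction:.0f} of all possible programs")
    # about 2^-531 here: an astronomically small fraction

The exact numbers are illustrative; the point is only that a strongly non-uniform next-state distribution concentrates essentially all probability on an exponentially small set of successor programs.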

Comment author: Kindly 05 June 2012 02:30:49AM 0 points

I don't think "surviving" is a well-defined term here. Every time you self-modify, you replace yourself with a different agent, so in that sense any agent that keeps surviving is one that does not self-modify.

Obviously, we really think that sufficiently similar agents are basically the same agent. But "sufficiently similar" is vague. Can I write a program that begins by computing the cluster of all agents similar to it, and switches to the next one (lexicographically) every 24 hours? If so, then it would eventually take on all states that are still "the same agent".

The natural objection is that there is one part of the agent's state that is inviolate in this example: the 24-hour rotation period (if it ever self-modified to get rid of the rotation, then it would get stuck in that state forever, without "dying" in an information theoretic sense). But I'm skeptical that this limitation can be encoded mathematically.

Comment author: royf 05 June 2012 02:56:43AM 0 points

I don't think "surviving" is a well-defined term here. Every time you self-modify, you replace yourself with a different agent, so in that sense any agent that keeps surviving is one that does not self-modify.

I placed "survive" in quotation marks to signal that I was aware of that, and that I meant "the other thing". I didn't realize that this was far from clear enough, sorry.

For lack of better shared terminology, what I meant by "surviving" is continuing to be executable. Self-modification is not suicide; you and I are doing it all the time.

Can I write a program that begins by computing the cluster of all agents similar to it, and switches to the next one (lexicographically) every 24 hours?

No, you cannot. This function is non-computable in the Turing sense.

A computable limited version of it (whatever it is) could be possible. But this particular agent cannot modify itself "in any way it wants", so it's consistent with my proposition.
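
For what it's worth, here is a sketch of why a general "compute the cluster of all agents similar to me" step runs into undecidability, under my assumption (not necessarily Kindly's) that "similar" means some nontrivial fact about input/output behavior. A hypothetical decider behaves_like(p, q), returning True exactly when p and q have the same behavior, could be turned into a halting-problem decider, which cannot exist:

    def halts(program, inp, behaves_like) -> bool:
        """Hypothetical reduction; every name here is illustrative.
        Given an (impossible) behavioral-similarity decider, we could
        decide whether program(inp) halts."""

        def probe():
            program(inp)   # run the program under test on its input...
            return 0       # ...and only then act like constant_zero

        def constant_zero():
            return 0

        # probe behaves exactly like constant_zero iff program(inp) halts,
        # so this single call would answer the halting problem.
        return behaves_like(probe, constant_zero)

Since no such behaves_like can exist (this is essentially Rice's theorem), the fully general cluster-enumerating program cannot be written, though a restricted, computable notion of similarity could still be.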

The natural objection is that there is one part of the agent's state that is inviolate in this example: the 24-hour rotation period

This is a very weak limitation of the space of possible modifications. I meant a much stronger one.

But I'm skeptical that this limitation can be encoded mathematically.

This weak limitation is easy to formalize.

The stronger limitation I'm thinking of is more challenging to formalize, but I'm fairly confident that it can be done.
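
For concreteness, one way the weak limitation might be written down (my formalization, not royf's): split the agent's state $s_t$ into the part it rotates, $c_t$, and the fixed machinery $m_t$ (the precomputed similarity cluster $C$ and the 24-hour scheduler). The invariant is

    s_t = (c_t, m_t), \qquad m_{t+1} = m_t, \qquad c_{t+1} = \mathrm{next}_C(c_t)

so by induction every reachable state satisfies $m_t = m_0$ and $c_t \in C$; the rotation machinery itself is never among the things modified.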

Comment author: Kindly 05 June 2012 03:22:32AM 0 points

No, you cannot. This function is non-computable in the Turing sense.

Aha! I think this is the important bit. I'll have to think about this, but that's probably where the problem is.

Comment author: Strange7 05 June 2012 03:20:15AM 0 points

In addition to the rotation period, the "list of sufficiently similar agents" would become effectively non-modifiable in that case. If it ever recalculated the list, starting from a different baseline or with a different standard of 'sufficiently similar,' it would not be rotating, but rather taking a random walk through a much larger cluster of potential agent-types.