royf comments on Fake Causality - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Comments (86)
Sure we can!
In fact, we can't stop rewriting our own code.
When you use the word "code" to describe humans, you take a certain degree of semantic liberty. So we first need to understand what is meant by "code" in this context.
In artificial computing machines, code is nothing more than a state of a chunk of memory hardware that causes the computation hardware to operate in a certain way (for a certain input). Only a tiny subset of the possible states of any chunk of memory hardware are "executable", i.e. don't cause the computation hardware to reach a certain "failure" state. This gives us an almost clear-cut distinction between (executable) code and (non-executable) data, under the assumption that data is very unlikely to be executable by chance. Given the correct design, a machine can write code to its memory and then execute it.
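The point about code being memory state that the computation hardware either executes or chokes on can be sketched in a few lines (an illustrative toy, not from the original comment; the function name `square` is invented for the example):

```python
# A chunk of "memory": ordinary data, here just a string.
data = "def square(x):\n    return x * x\n"

# Most arbitrary states of memory are not executable -- attempting to
# run them drives the machine into a "failure" state (here, SyntaxError).
try:
    exec("garbage ]] not code [[")
except SyntaxError:
    pass  # non-executable data

# But with the correct design, the machine can write code to its
# memory and then execute it: the data above *is* executable code.
namespace = {}
exec(data, namespace)          # "write" the code, then run it
print(namespace["square"](7))  # -> 49
```

The try/except branch plays the role of the failure state: the distinction between code and data is just whether execution succeeds.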
In humans, the distinction between memory hardware and computation hardware is unclear, if it exists at all. Moreover, it's unclear how to apply the above distinction between code and data: what is a human's "failure" state? I guess a state of the brain (containing both memory and computation hardware, until and unless we can ever separate the two) can be said to be "executable" if, placed in a certain environment, the person doesn't go and die.
It follows that any change the brain makes to its own state, which then affects its computation without resulting in death, is, in fact, "rewriting its own code". This, of course, happens all the time and (perhaps ironically) cannot be stopped without killing the brain.
In a wider loop, we also have drugs, medications and, eventually, gene therapy. But that's more similar to a robot reaching for the keyboard (or the screwdriver).
Not the way a properly designed AI could. The difference is qualitative.
To be fair, when structured as
then the claim is in fact "we humans can't rewrite our own code (but a properly designed AI could)". If you remove a comma:
only then is the sentence interpreted as you describe.
To be even more fair I also explicitly structured my own claim such that it still technically applies to your reading. That allowed me to make the claim both technically correct to a pedantic reading and an expression of the straightforward point that the difference is qualitative. (The obvious alternative response was to outright declare the comment a mere equivocation.)
Meaning that I didn't, in fact, describe.
Not meant as an attack. I'm saying, "to be fair it didn't actually say that in the original text, so this is new information, and the response is thus a reasonable one". Your comment could easily be read as implying that this is not new information (and that the response is therefore mistaken), so I wanted to add a clarification.
Having asserted that your claim is, in fact, new information: can you please clarify and explain why you believe that?
An advanced AI could reasonably be expected to be able to explicitly edit any part of its code however it desires. Humans are unable to do this.
I believe that is a misconception. Perhaps I'm not being reasonable, but I would expect the level at which you could describe such a creature in terms of "desires" to be conceptually distinct from the level at which it can operate on its own code.
This is the same old question of "free will" again. Desires don't exist as a mechanism. They exist as an approximate model of describing the emergent behavior of intelligent agents.
You are saying that a GAI being able to alter its own "code" at the actual code level does not imply that it is able to deliberately and consciously alter its "code" in the human sense you describe above?
Generally GAIs are ascribed extreme powers around here - if it has low-level access to its code, then it will be able to determine how its "desires" derive from this code, and will be able to produce whatever changes it wants. Similarly, it will be able to hack human brains with equal finesse.
(Yes, and this is partly just because AIs that don't meet a certain standard are implicitly excluded from the definition of the class being described. AIs below that critical threshold are considered boring and irrelevant for most purposes.)
Indeed, the same typically goes for NIs. Though some speakers make exceptions for some speakers.
I am saying pretty much exactly that. To clarify further, the words "deliberate", "conscious" and "wants" again belong to the level of emergent behavior: they can be used to describe the agent, not to explain it (what could not be explained by "the agent did X because it wanted to"?).
Let's instead make an attempt to explain. Complete control of an agent's own code, in the strict sense, contradicts Gödel's incompleteness theorem. Furthermore, information-theoretic considerations significantly limit the degree to which an agent can control its own code (I wonder if anyone has ever done the math; I expect not, and I intend to look further into this). In information-theoretic terminology, the agent will be limited to typical manipulations of its own code, which will be a strict (and presumably very small) subset of all possible manipulations.
Can an agent be made more effective than humans in manipulating its own code? I have very little doubt that it can. Can it lead to agents qualitatively more intelligent than humans? Again, I believe so. But I don't see a reason to believe that the code-rewriting ability itself can be qualitatively different from a human's, only quantitatively so (although of course the engineering details can be much different; I'm referring to the algorithmic level here).
As you've probably figured out, I'm new here. I encountered this post while reading the sequences. Although I'm somewhat learned on the subject, I haven't yet reached the part (which I trust exists) where GAI is discussed here.
On my path there, I'm actively trying to avoid a certain degree of groupthink which I detect in some of the comments here. Please take no offense, but it's phrases like the above quote which worry me: is there really a consensus around here about such profound questions? Hopefully it's only the terminology which is agreed upon, in which case I will learn it in time. But please, let's make our terminology "pay rent".
Sure, but we could imagine an AI deciding something like "I do not want to enjoy frozen yogurt", and then altering its code in such a way that it is no longer appropriate to describe it as enjoying frozen yogurt, yeah?
This seems trivially false - if an AI is instantiated as a bunch of zeros and ones in some substrate, how could Gödel or similar concerns stop it from altering any subset of those bits?
You see reasons to believe that any artificial intelligence is limited to altering its motivations and desires in a way that is qualitatively similar to humans? This seems like a pretty extreme claim - what are the salient features of human self-rewriting that you think must be preserved?
I don't think it's a "consensus" so much as an assumed consensus for the sake of argument. Some do believe that any hypothetical AI's influence is practically unlimited, some agree to assume that because it's not ruled out and is a worst-case scenario or an interesting case (see wedrifid's comment on the grandparent (aside: not sure how unusual or nonobvious this is, but we often use familial relationships to describe the relative positions of comments, e.g. the comment I am responding to is the "parent" of this comment, the one you were responding to when you wrote it is the "grandparent". I think that's about as far as most users take the metaphor, though.)).
Thanks for challenging my position. This discussion is very stimulating for me!
I'm actually having trouble imagining this without anthropomorphizing (or at least zoomorphizing) the agent. When is it appropriate to describe an artificial agent as enjoying something? Surely not when it secretes serotonin into its bloodstream and synapses?
It's not a question of stopping it. Gödel is not giving it a stern look, saying: "you can't alter your own code until you've done your homework". It's more that these considerations prevent the agent from being in a state where it will, in fact, alter its own code in certain ways. This claim can and should be proved mathematically, but I don't have the resources to do that at the moment. In the meantime, I'd agree if you wanted to disagree.
I believe that this is likely, yes. The "salient feature" is being subject to the laws of nature, which in turn seem to be consistent with particular theories of logic and probability. The problem with such a claim is that these theories are still not fully understood.
When is it appropriate to describe a natural agent as enjoying something?
I'd like to understand what you're saying here better. An agent instantiated as a binary program can do any of the following:
Rewrite its own source code with a random binary string.
Do things until it encounters a different agent, obtain its source code, and replace its own source code with that.
It seems to me that either of these would be enough to provide "complete control" over the agent's source code in the sense that any possible program can be obtained as a result. So you must mean something different. What is it?
It's a pleasure!
Yeah, that was sloppy of me. Leaving aside the question of when something is enjoying something, let's take a more straightforward example: Suppose an AI were to design and implement more efficient algorithms for processing sensory stimuli? Or add a "face recognition" module when it determines that this would be useful for interacting with humans?
Hm. It seems that you should be able to write a simple program that overwrites its own code with an arbitrary value. Wouldn't that be a counterexample?
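Such a program is indeed simple to write. A minimal sketch, assuming a Python agent whose "code" is its source file (the file name and the replacement text are illustrative assumptions):

```python
import os
import subprocess
import sys
import tempfile

# The agent's entire source: when run, it replaces its own file
# (sys.argv[0]) with an arbitrary new program.
source = (
    "import sys\n"
    "with open(sys.argv[0], 'w') as f:\n"
    "    f.write('print(\"rewritten\")')\n"
)

# Write the agent to disk and run it once.
path = os.path.join(tempfile.mkdtemp(), "agent.py")
with open(path, "w") as f:
    f.write(source)
subprocess.run([sys.executable, path], check=True)

# After one run, the agent's code is whatever it chose to write.
with open(path) as f:
    new_code = f.read()
print(new_code)  # -> print("rewritten")
```

Nothing in logic forbids this kind of wholesale self-overwriting; the interesting question raised above is what it would take for the rewrite to be *deliberate* rather than arbitrary.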
This sounds unjustifiably broad. Certainly, human behavior is subject to these restrictions, but it is also subject to much more stringent ones - we are not able to do everything that is logically possible. Do we agree, then, that humans and artificial agents are both subject to laws forbidding logical contradictions and the like, but that artificial agents are not in principle necessarily bound by the same additional restrictions as humans?
I wouldn't assert that. I thought I was stating the obvious.
See CuSithBell's reply.
Yes, I think I misspoke earlier, sorry. It was only "new information" in the sense that it wasn't in that particular sentence of Eliezer's - to anyone familiar with discussions of GAI, your assertion certainly should be obvious.
Ahh. That's where the "new information" thing came into it. I didn't think I'd said anything about it being new, so I'd wondered.