My interpretation of this interaction (which is fascinating to read, btw, because both of you are eloquently defending a cogent and interesting theory as far as I can tell) is that you've indirectly proposed Robot-1 as the initial model of an agent (which is clearly not a full model of a person and fails to capture many features of humans) in the first of a series of articles. I think Richard is objecting to the connections he presumes that you will eventually draw between Robot-1 and actual humans, and you're getting confused because you're just trying to talk about the thing you actually said, not the eventual conclusions he expects you to draw from your example.
If he's expecting you to verbally zig when you're actually planning to zag, and you don't notice that he's trying to head you off at a pass you're not even heading towards, it's entirely reasonable for you to be confused by what he's saying. (And if some of the audience also thinks you're going to zig, they'll also see the theory he's arguing against, and see that his arguments against "your predicted eventual conclusions" are valid, and upvote his criticism of something you haven't yet said. And both of you are quite thoughtful and polite and educated, so it's good reading even if there is some confusion mixed into the back and forth.)
The place I think you were ambiguous enough to be misinterpreted was roughly here:
Suppose the robot had human level intelligence in some side module, but no access to its own source code; that it could learn about itself only through observing its own actions. The robot might come to the same conclusions we did: that it is a blue-minimizer, set upon a holy quest to rid the world of the scourge of blue objects.
You use the phrase "human level intelligence" and talk about the robot making the same fuzzy inferential leap that outside human observers might make. Also, this is remarkably close to how some humans with very poor impulse control actually seem to function, modulo some different reflexes and a moderately unreasonable belief in their own deliberative agency (a la Blindsight with the "Jubyr fcrpvrf vf ntabfvp ol qrsnhyg" line and so on).
If you had said up front that you're using this as a toy model which has (for example) too few layers and no feedback from the "meta-observer" module to be an honestly plausible model of "properly functioning cohesively agentive mammals", I think Richard would not have made the mistake that I think he's making about what you're about to say. He keeps talking about a robust and vastly more complex model than Robot-1 (that being a multi-layer purposive control system), and about how not just hypothetical PCT algorithms but actual humans function, and you haven't directly answered these concerns by saying clearly "I am not talking about humans yet; I'm just building conceptual vocabulary by showing how something clearly simpler might function, to illustrate mechanistic thinking about mental processes".
It might have helped if you had been clear about the possibility that Robot-1 would emit words more like we might expect someone to emit several years after a serious brain lesion that severed some vital connections in their brain, after their verbal reasoning systems had updated on the lack of a functional connection between their conscious/verbal brain parts and their deeper body control systems. Robot-1 seems likely to me to end up saying something like "Watch out, I'm not just having a mental breakdown: I've never had any control over my body+brainstem's actions in the first place! I have no volitional control over my behavior! If you're wearing blue then take off the shirt or run away before I happen to turn around and see you and my reflex kicks in and my body tries to kill you. Dear god this sucks! Oh how I wish my mental architecture wasn't so broken..."
For what it's worth, I think the Robot-1 example is conceptually useful and I'm really looking forward to seeing how the whole sequence plays out :-)
Imagine a robot with a turret-mounted camera and laser. Each moment, it is programmed to move forward a certain distance and perform a sweep with its camera. As it sweeps, the robot continuously analyzes the average RGB value of the pixels in the camera image; if the blue component passes a certain threshold, the robot stops, fires its laser at the part of the world corresponding to the blue area in the camera image, and then continues on its way.
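For concreteness, here is a minimal sketch of that control loop; the 0-255 colour scale, the threshold value, and the helper names on the robot object (move_forward, sweep_camera, stop, locate_blue_region, fire_laser) are illustrative assumptions, not anything specified above.

```python
# Sketch of the robot's program as described above. Threshold and robot
# helper methods are assumed for illustration only.

BLUE_THRESHOLD = 128  # assumed 0-255 colour scale


def average_rgb(image):
    """Average (r, g, b) over all pixels in a camera image."""
    n = len(image)
    r = sum(p[0] for p in image) / n
    g = sum(p[1] for p in image) / n
    b = sum(p[2] for p in image) / n
    return r, g, b


def step(robot):
    """One moment of the robot's behavior: move, sweep, maybe fire."""
    robot.move_forward()
    for image in robot.sweep_camera():          # frames from the camera sweep
        _, _, blue = average_rgb(image)
        if blue > BLUE_THRESHOLD:
            robot.stop()
            robot.fire_laser(robot.locate_blue_region(image))
            break                                # then continue on its way
```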
Watching the robot's behavior, we would conclude that this is a robot that destroys blue objects. Maybe it is a surgical robot that destroys cancer cells marked by a blue dye; maybe it was built by the Department of Homeland Security to fight a group of terrorists who wear blue uniforms. Whatever. The point is that we would analyze this robot in terms of its goals, and in those terms we would be tempted to call this robot a blue-minimizer: a machine that exists solely to reduce the amount of blue objects in the world.
Suppose the robot had human level intelligence in some side module, but no access to its own source code; that it could learn about itself only through observing its own actions. The robot might come to the same conclusions we did: that it is a blue-minimizer, set upon a holy quest to rid the world of the scourge of blue objects.
But now stick the robot in a room with a hologram projector. The hologram projector (which is itself gray) projects a hologram of a blue object five meters in front of it. The robot's camera detects the projector, but its RGB value is harmless and the robot does not fire. Then the robot's camera detects the blue hologram and zaps it. We arrange for the robot to enter this room several times, and each time it ignores the projector and zaps the hologram, without effect.
Here the robot is failing at its goal of being a blue-minimizer. The right way to reduce the amount of blue in the universe is to destroy the projector; instead its beams flit harmlessly through the hologram.
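A toy run of that room makes the failure concrete; the specific colour values are assumed, and the detection rule is the same illustrative one sketched above.

```python
# Toy version of the hologram room. Colour values are assumed; the detection
# rule is the same one sketched earlier, applied to whatever is in view.

BLUE_THRESHOLD = 128

room = {
    "projector (gray)": (128, 128, 100),   # blue channel below threshold
    "hologram (blue)": (30, 30, 240),      # blue channel above threshold
}

for visit in range(3):
    for name, (r, g, b) in room.items():
        if b > BLUE_THRESHOLD:
            print(f"visit {visit}: fire at {name}")
# Every visit the laser goes to the hologram; the projector is never targeted,
# so the amount of blue in the world never changes.
```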
Again, give the robot human level intelligence. Teach it exactly what a hologram projector is and how it works. Now what happens? Exactly the same thing - the robot executes its code, which says to scan the room until its camera registers blue, then shoot its laser.
In fact, there are many ways to subvert this robot. What if we put a lens over its camera which inverts the image, so that white appears as black, red as green, blue as yellow, and so on? The robot will not shoot us with its laser to prevent such a violation (unless we happen to be wearing blue clothes when we approach) - its entire program was detailed in the first paragraph, and there's nothing about resisting lens alterations. Nor will the robot correct itself and shoot only at objects that appear yellow - its entire program was detailed in the first paragraph, and there's nothing about correcting its program for new lenses. The robot will continue to zap objects that register a blue RGB value; but now it'll be shooting at anything that is yellow.
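The point about the lens can be seen directly if we model it as a simple per-channel inversion (an assumption on my part about how such a lens would work): the detection code is untouched, but now yellow objects are the ones that register as blue.

```python
# Inverting lens modeled as per-channel inversion on a 0-255 scale (an
# illustrative assumption). The robot's detection code never changes; only
# what reaches its camera does.

def invert(pixel):
    r, g, b = pixel
    return 255 - r, 255 - g, 255 - b

yellow_shirt = (255, 255, 0)
blue_shirt = (0, 0, 255)

print(invert(yellow_shirt))  # (0, 0, 255): registers as blue, so the robot fires
print(invert(blue_shirt))    # (255, 255, 0): registers as yellow, so it is ignored
```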
The human-level intelligence version of the robot will notice its vision has been inverted. It will know it is shooting yellow objects. It will know it is failing at its original goal of blue-minimization. And maybe if it had previously decided it was on a holy quest to rid the world of blue, it will be deeply horrified and ashamed of its actions. It will wonder why it has suddenly started to deviate from this quest, and why it just can't work up the will to destroy blue objects anymore.
The robot goes to Quirinus Quirrell, who explains that robots don't really care about minimizing the color blue. They only care about status and power, and pretend to care about minimizing blue in order to impress potential allies.
The robot goes to Robin Hanson, who explains that there are really multiple agents within the robot. One of them wants to minimize the color blue, the other wants to minimize the color yellow. Maybe the two of them can make peace, and agree to minimize yellow one day and blue the next?
The robot goes to Anna Salamon, who explains that robots are not automatically strategic, and that if it wants to achieve its goal it will have to learn special techniques to keep focus on it.
I think all of these explanations hold part of the puzzle, but that the most fundamental explanation is that the mistake began as soon as we started calling it a "blue-minimizing robot". This is not because its utility function doesn't exactly correspond to blue-minimization: even if we try to assign it a ponderous function like "minimize the color represented as blue within your current visual system, except in the case of holograms" it will be a case of overfitting a curve. The robot is not maximizing or minimizing anything. It does exactly what it says in its program: find something that appears blue and shoot it with a laser. If its human handlers (or itself) want to interpret that as goal directed behavior, well, that's their problem.
It may be that the robot was created to achieve a specific goal. It may be that the Department of Homeland Security programmed it to attack blue-uniformed terrorists who had no access to hologram projectors or inversion lenses. But to assign the goal of "blue minimization" to the robot is a confusion of levels: this was a goal of the Department of Homeland Security, which became a lost purpose as soon as it was represented in the form of code.
The robot is a behavior-executor, not a utility-maximizer.
In the rest of this sequence, I want to expand upon this idea. I'll start by discussing some of the foundations of behaviorism, one of the earliest theories to treat people as behavior-executors. I'll go into some of the implications for the "easy problem" of consciousness and philosophy of mind. I'll very briefly discuss the philosophical debate around eliminativism and a few eliminativist schools. Then I'll go into why we feel like we have goals and preferences and what to do about them.