The entire example is deeply misleading. We model the robot as a fairly stupid blue-minimizer because this seems like a good, succinct description of the robot's entire externally observable behavior; we would cease to do so if it also had a speaker or display window through which it communicated its internal reflections.
So, to retain the intuitive appeal of describing the robot as a blue-minimizer, the robot's human-level intelligence must be walled off inside it, unable to effectively signal to the outside world. But so long as that intelligence is irrelevant to predicting the robot's exterior behavior, the blue-minimizing model is an appropriate one to keep in mind to guide our interactions with the robot. Like any good scientific model, it provides good predictive power relative to its cost in mental (or simulating computer's) effort and memory.
It's pretty obvious why it's useful to us to describe things in ways that let us feasibly predict or approximate the behavior of the external entities and effects we encounter. Perhaps, though, you are puzzled by, or arguing against, the idea that belief-desire-style models are often a good tradeoff between accuracy and ease of use. This too is easily explained: evolution hard-wired us with what effectively functions as a hardware accelerator for belief-desire models (so we didn't get eaten), and objects designed by other humans are highly salient in our lives. The acceleration means that even in domains where the fit is poor (like charges "wanting" to get away from each other), the ease of application still makes such models a useful heuristic. And since human-made objects are usually built to achieve a particular goal that had to be usefully represented in the builder's mind, these objects usually offer the most effective behavior for accomplishing that goal relative to a given level of computational complexity.
In other words, the guy who builds the robot to lase the blue-dyed cancer cells does so by coming up with a goal he wants the robot to achieve (discriminating between blue cells and other blue things is hard, so we'll just build a robot that fries all the blue things it sees) and then offering up the best implementation he can come up with given his constraints; the resulting behavior can be well modeled as the object desiring some end but being stupid in various ways. If you want to zap blue cells, you don't add extra code to zap yellow one time in a million, nor do you tack on an AI whose expertise isn't needed to implement the desired behavior, so the resulting behavior looks like a stupid creature trying to achieve the inventor's chosen goal.
Interestingly, I suspect that being well described by a belief-desire model simply corresponds to being on the set of non-dominated ways to achieve a goal people can reasonably conceptualize. Thus we see it all the time in evolution: we can easily understand both the species-level goal of survival and the individual-level goals of avoiding suffering and satisfying some basic wants, and natural selection ensures that the implementations we usually see are at least locally non-dominated (if you want to make a better hunter on the savannah than the lion, you have to either jump to a whole new basic design or use a bigger computational/energy budget).
Imagine a robot with a turret-mounted camera and laser. Each moment, it is programmed to move forward a certain distance and perform a sweep with its camera. As it sweeps, the robot continuously analyzes the average RGB value of the pixels in the camera image; if the blue component passes a certain threshold, the robot stops, fires its laser at the part of the world corresponding to the blue area in the camera image, and then continues on its way.
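The program described above can be sketched in a few lines. This is a minimal illustration, not a real controller: the threshold value, step size, and the `robot` interface (`move_forward`, `sweep_camera`, `stop`, `fire_laser`) are all invented for the example.

```python
# Sketch of the robot's control loop as described above.
# BLUE_THRESHOLD and the robot interface are hypothetical.

BLUE_THRESHOLD = 200  # fire when the average blue component exceeds this

def average_rgb(image):
    """Average (R, G, B) over all pixels in a camera image (list of tuples)."""
    n = len(image)
    return tuple(sum(px[c] for px in image) // n for c in range(3))

def control_step(robot):
    robot.move_forward(distance=1.0)        # move forward a fixed distance
    for image, direction in robot.sweep_camera():
        _, _, blue = average_rgb(image)
        if blue > BLUE_THRESHOLD:           # the *entire* "goal": one threshold test
            robot.stop()
            robot.fire_laser(direction)     # zap whatever registered as blue
            break                           # then continue on its way
```

Note that nothing in this loop represents "minimizing blue"; there is only a threshold comparison and a laser call.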
Watching the robot's behavior, we would conclude that this is a robot that destroys blue objects. Maybe it is a surgical robot that destroys cancer cells marked by a blue dye; maybe it was built by the Department of Homeland Security to fight a group of terrorists who wear blue uniforms. Whatever. The point is that we would analyze this robot in terms of its goals, and in those terms we would be tempted to call this robot a blue-minimizer: a machine that exists solely to reduce the amount of blue objects in the world.
Suppose the robot had human-level intelligence in some side module, but no access to its own source code; that it could learn about itself only through observing its own actions. The robot might come to the same conclusions we did: that it is a blue-minimizer, set upon a holy quest to rid the world of the scourge of blue objects.
But now stick the robot in a room with a hologram projector. The hologram projector (which is itself gray) projects a hologram of a blue object five meters in front of it. The robot's camera detects the projector, but its RGB value is harmless and the robot does not fire. Then the robot's camera detects the blue hologram and zaps it. We arrange for the robot to enter this room several times, and each time it ignores the projector and zaps the hologram, without effect.
Here the robot is failing at its goal of being a blue-minimizer. The right way to reduce the amount of blue in the universe is to destroy the projector; instead its beams flit harmlessly through the hologram.
Again, give the robot human-level intelligence. Teach it exactly what a hologram projector is and how it works. Now what happens? Exactly the same thing - the robot executes its code, which says to scan the room until its camera registers blue, then shoot its laser.
In fact, there are many ways to subvert this robot. What if we put a lens over its camera which inverts the image, so that white appears as black, red as green, blue as yellow, and so on? The robot will not shoot us with its laser to prevent such a violation (unless we happen to be wearing blue clothes when we approach) - its entire program was detailed in the first paragraph, and there's nothing about resisting lens alterations. Nor will the robot correct itself and shoot only at objects that appear yellow - its entire program was detailed in the first paragraph, and there's nothing about correcting its program for new lenses. The robot will continue to zap objects that register a blue RGB value; but now it'll be shooting at anything that is yellow.
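The lens trick is simple arithmetic: inverting each 8-bit channel maps blue to yellow and vice versa, so the robot's unchanged threshold test now fires on yellow objects. A small sketch (the threshold value is invented for illustration):

```python
# Inverting each 8-bit channel: blue (0, 0, 255) becomes yellow (255, 255, 0).
def invert(pixel):
    r, g, b = pixel
    return (255 - r, 255 - g, 255 - b)

BLUE_THRESHOLD = 200  # hypothetical threshold from the robot's program

def registers_blue(pixel):
    """The robot's unchanged test: does the blue channel pass the threshold?"""
    return pixel[2] > BLUE_THRESHOLD

blue_object = (0, 0, 255)
yellow_object = (255, 255, 0)

registers_blue(blue_object)             # True: without the lens, blue is zapped
registers_blue(invert(blue_object))     # False: through the lens, blue no longer registers
registers_blue(invert(yellow_object))   # True: through the lens, yellow now registers
```

The program itself is untouched; only the mapping between the world and its camera input has changed.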
The human-level intelligence version of the robot will notice its vision has been inverted. It will know it is shooting yellow objects. It will know it is failing at its original goal of blue-minimization. And maybe if it had previously decided it was on a holy quest to rid the world of blue, it will be deeply horrified and ashamed of its actions. It will wonder why it has suddenly started to deviate from this quest, and why it just can't work up the will to destroy blue objects anymore.
The robot goes to Quirinus Quirrell, who explains that robots don't really care about minimizing the color blue. They only care about status and power, and pretend to care about minimizing blue in order to impress potential allies.
The robot goes to Robin Hanson, who explains that there are really multiple agents within the robot. One of them wants to minimize the color blue, the other wants to minimize the color yellow. Maybe the two of them can make peace, and agree to minimize yellow one day and blue the next?
The robot goes to Anna Salamon, who explains that robots are not automatically strategic, and that if it wants to achieve its goal it will have to learn special techniques to keep focus on it.
I think all of these explanations hold part of the puzzle, but that the most fundamental explanation is that the mistake began as soon as we started calling it a "blue-minimizing robot". This is not because its utility function doesn't exactly correspond to blue-minimization: even if we try to assign it a ponderous function like "minimize the color represented as blue within your current visual system, except in the case of holograms" it will be a case of overfitting a curve. The robot is not maximizing or minimizing anything. It does exactly what it says in its program: find something that appears blue and shoot it with a laser. If its human handlers (or itself) want to interpret that as goal directed behavior, well, that's their problem.
It may be that the robot was created to achieve a specific goal. It may be that the Department of Homeland Security programmed it to attack blue-uniformed terrorists who had no access to hologram projectors or inversion lenses. But to assign the goal of "blue minimization" to the robot is a confusion of levels: this was a goal of the Department of Homeland Security, which became a lost purpose as soon as it was represented in the form of code.
The robot is a behavior-executor, not a utility-maximizer.
In the rest of this sequence, I want to expand upon this idea. I'll start by discussing some of the foundations of behaviorism, one of the earliest theories to treat people as behavior-executors. I'll go into some of the implications for the "easy problem" of consciousness and philosophy of mind. I'll very briefly discuss the philosophical debate around eliminativism and a few eliminativist schools. Then I'll go into why we feel like we have goals and preferences and what to do about them.