Volndeau comments on The Blue-Minimizing Robot - Less Wrong

Post author: Yvain 04 July 2011 10:26PM


Comment author: Volndeau 05 July 2011 09:00:51PM (1 point)

EDIT: It just clicked after finishing my thought.

If its human handlers (or itself) want to interpret that as goal directed behavior, well, that's their problem.

I was thrown off by all the comments about the robot and its behavior. This is really about the comparison of behavior-executor vs. utility-maximizer, not about the robot itself.

Perhaps I am missing the final direction of this conversation, but I think the intelligence in the example has mapped the terrain and is failing to update the map once the map has been shown to be incorrect.

Watching the robot's behavior, we would conclude that this is a robot that destroys blue objects.

Correct: the robot is designed to shoot its laser at blue things.

Here the robot is failing at its goal of being a blue-minimizer.

The robot is failing at nothing. It is doing exactly as programmed. The laser, on the other hand, is failing to eliminate the blue object, as would be expected when it is fired at something. Now that this experiment has been conducted and the map has been found to be wrong, correct the map: it is not a Blue-Minimizing Robot, it is a Blue-Targeting-and-Shooting Robot. The result of the laser shot is variable.

The right way to reduce the amount of blue in the universe is to destroy the projector; instead its beams flit harmlessly through the hologram.

The discussion has mapped it as a Blue-Minimizing Robot; it is not one, as this experiment proves. To make it one, more programming would have to be implemented to handle the case where the laser does not have the intended effect. Since there is no way of altering the programming, there is no way of changing the terrain, so the map must be changed.
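The distinction can be sketched in code. This is a minimal, purely illustrative sketch (all function and field names are hypothetical, not from the original post): the behavior-executor fires at blue things and never checks the result, while a genuine blue-minimizer would need the extra programming described above, trying further actions (such as destroying the projector) until the amount of blue actually goes down.

```python
# Hypothetical sketch of behavior-executor vs. utility-maximizer.
# All names and the world/object representation are illustrative assumptions.

def fire_laser_at(obj, world):
    """Lasers destroy solid blue objects but pass harmlessly through holograms."""
    if not obj.get("hologram") and not obj.get("destroyed"):
        obj["destroyed"] = True
        world["blue_count"] -= 1

def destroy_projector(obj, world):
    """Destroying the projector removes a holographic blue object."""
    if obj.get("hologram") and not obj.get("destroyed"):
        obj["destroyed"] = True
        world["blue_count"] -= 1

def behavior_executor(objects, world):
    """The robot as actually programmed: shoot at blue, never check the result."""
    for obj in objects:
        if obj["color"] == "blue":
            fire_laser_at(obj, world)

def blue_minimizer(objects, world):
    """What a true Blue-Minimizing Robot would need: keep trying actions
    until the amount of blue in the world actually decreases."""
    for obj in objects:
        if obj["color"] != "blue":
            continue
        for action in (fire_laser_at, destroy_projector):
            before = world["blue_count"]
            action(obj, world)
            if world["blue_count"] < before:
                break  # goal achieved for this object; the map is updated
```

Run against a world containing one solid blue object and one blue hologram, the behavior-executor leaves the hologram's blue in place, while the blue-minimizer eliminates both.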