Volndeau comments on The Blue-Minimizing Robot - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
EDIT: It just clicked after I finished writing my thought. I was thrown off by all the comments about the robot and its behavior. This is about the comparison of behavior-executor vs. utility-maximizer, not about the robot itself.
Perhaps I am missing the final direction of this conversation, but I think the intelligence involved in the example has mapped the terrain and is failing to update the map once it has been shown to be incorrect.
Correct, the robot is designed to shoot its laser at blue things.
The robot is failing at nothing; it is doing exactly as programmed. The laser, on the other hand, is failing to eliminate the blue object, which is what is expected to happen when the laser is fired at something. Now that this experiment has been conducted and the map has been found to be wrong, correct the map. It is no longer a Blue-Minimizing Robot; it is a Blue-Targeting-and-Shooting Robot. The result of the laser shot is variable.
The discussion has mapped it as a Blue-Minimizing Robot, but it is not one, as this experiment proves. To make it one, more programming would have to be implemented to handle cases where the laser does not have the intended effect. Since there is no way of altering the programming, there is no way of changing the terrain, so the map must be changed.
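The behavior-executor vs. utility-maximizer distinction above can be sketched in code. This is a hypothetical illustration of my own (the original post contains no code, and every name here is invented): the first robot simply runs its program and never checks results, while the second only fires when firing actually reduces the number of blue objects.

```python
def behavior_executing_robot(objects):
    """Fires at every blue object, regardless of whether the shot works."""
    shots = []
    for obj in objects:
        if obj["color"] == "blue":
            shots.append(obj["name"])  # shoot; never checks the outcome
    return shots

def utility_maximizing_robot(objects, laser_destroys):
    """Fires only when, by its model, the shot would eliminate the blue object."""
    shots = []
    for obj in objects:
        if obj["color"] == "blue" and laser_destroys(obj):
            shots.append(obj["name"])
    return shots

scene = [
    {"name": "blue_hologram", "color": "blue"},  # a target the laser cannot destroy
    {"name": "red_ball", "color": "red"},
]

# The behavior-executor fires at the hologram anyway, exactly as programmed.
print(behavior_executing_robot(scene))

# The utility-maximizer, told that holograms are indestructible, holds fire.
print(utility_maximizing_robot(scene, lambda obj: obj["name"] != "blue_hologram"))
```

The point of the sketch is that only the second robot needs the extra programming discussed above: a model (`laser_destroys`) of what its laser actually accomplishes.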